Train Benchmark

Note. The Train Benchmark has a fork for the 2015 Transformation Tool Contest, primarily targeting EMF tools. This repository contains the original Train Benchmark which also supports RDF, SQL and property graph databases.

Warning. The Train Benchmark is designed to run in a server environment. Some implementations may shut down or delete existing databases, so only run it on your developer workstation if you understand the consequences.

For theoretical and implementation details, check out the following documents:

Projects

Generator projects

The generator projects are responsible for generating instance models. Currently, the following formats are supported:

  • EMF
  • Property graph
  • RDF
  • SQL

Benchmark projects

The benchmark projects are responsible for running the benchmarks.

  • EMF
    • Drools 5 & 6
    • EMF API
    • EMF-IncQuery
    • Eclipse OCL
  • Property graph
    • Neo4j
    • OrientDB
  • RDF
    • Blazegraph
    • Jena
    • Sesame
    • Virtuoso
  • SQL
    • MySQL

Getting started

The framework provides a set of scripts for building the projects, generating the instance models and running the benchmark.

Installation guide

The benchmark requires a 64-bit operating system. We recommend Ubuntu-based Linux systems.

Setup

Automatic

Provided that you start with a fresh Ubuntu server installation, you may use the provided install scripts:

scripts/init-jdk.sh && \
scripts/init-maven.sh && \
scripts/init-python.sh && \
scripts/dep-mysql.sh && \
scripts/dep-neo4j.sh && \
scripts/dep-virtuoso.sh && \
scripts/dep-mondo-sam.sh

Manual

Alternatively, install the following software:

Usage

Initialize the configuration file by running:

scripts/init-config.sh

This creates config/config.yml which defines the configuration for the benchmark. The documentation is provided as comments in the file.

The scripts directory contains the run.py script which is used for the following purposes:

  • scripts/run.py -b -- builds the projects
  • scripts/run.py -b -s -- builds the projects, skipping tests
  • scripts/run.py -g -- generates the instance models
  • scripts/run.py -m -- runs the benchmark
  • scripts/run.py -h -- displays the help
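The flag semantics above can be sketched with a minimal argparse-based driver. This is an illustrative reconstruction, not the actual run.py: the function and phase names used here are hypothetical.

```python
import argparse

def select_phases(argv):
    """Map run.py-style flags to the benchmark phases they trigger.
    Mirrors the flag semantics documented above; the phase names
    returned here are hypothetical placeholders."""
    parser = argparse.ArgumentParser(description="Train Benchmark driver (sketch)")
    parser.add_argument("-b", "--build", action="store_true",
                        help="build the projects")
    parser.add_argument("-s", "--skip-tests", action="store_true",
                        help="skip tests during the build")
    parser.add_argument("-g", "--generate", action="store_true",
                        help="generate the instance models")
    parser.add_argument("-m", "--measure", action="store_true",
                        help="run the benchmark")
    args = parser.parse_args(argv)

    phases = []
    if args.build:
        phases.append("build-skip-tests" if args.skip_tests else "build")
    if args.generate:
        phases.append("generate")
    if args.measure:
        phases.append("measure")
    return phases

print(select_phases(["-b", "-s"]))  # ['build-skip-tests']
print(select_phases(["-g", "-m"]))  # ['generate', 'measure']
```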

Importing to Eclipse

The projects are developed and tested with Eclipse Mars.

To import and develop the Train Benchmark, you need the m2e Eclipse plugin, included in Eclipse for Java developers. If you use another distribution (e.g. Eclipse Modeling), you can install it from the Mars update site or the m2e update site (http://download.eclipse.org/technology/m2e/releases).

Naming conventions

To avoid confusion between the different implementations, we decided to use the Smurf Naming convention (see #21). For example, the classes in the Java implementation are named JavaBenchmarkCase, JavaPosLength, JavaPosLengthMatch, JavaPosLengthTransformation, while the classes in the EMF-IncQuery implementation are named EMFIncQueryBenchmarkCase, EMFIncQueryPosLength, etc. We found that relying on the package names to differentiate class names is error-prone and should be avoided.

Reporting tools

Install R packages

Follow the instructions here.

Convert the results

It is possible to convert the measurement results from JSON to CSV with the following script:

scripts/convert-results.sh
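As a rough illustration of what such a conversion involves, the sketch below flattens a JSON array of measurement records into CSV text. This is not the convert-results.sh implementation, and the record fields (tool, size, time) are hypothetical; the real result schema may differ.

```python
import csv
import io
import json

def json_results_to_csv(json_text):
    """Flatten a JSON array of flat measurement records into CSV text.
    The field names in the example records are hypothetical placeholders."""
    records = json.loads(json_text)
    # Collect every key that appears in any record, in a stable order.
    fieldnames = sorted({key for record in records for key in record})
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return out.getvalue()

results = '[{"tool": "Jena", "size": 1, "time": 42}, {"tool": "Neo4j", "size": 1, "time": 37}]'
print(json_results_to_csv(results))
```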

Interactive reporting

To use the interactive interface in MONDO-SAM, install additional R packages as described here, then run the following:

scripts/interactive.py

Generating diagrams

Adjust the config/reporting.json file and run the scripts/report.sh script. The possible configuration values are listed in the MONDO-SAM wiki.

Instance models

The Train Benchmark provides two sorts of instance models:

  • Minimal models, used only for testing
  • Scalable models, used both for testing and benchmarking

The minimal models contain only a few (8-10) model elements to provide simple models for development and testing.

The scalable models are generated for each scenario in sizes that grow in powers of two, e.g. railway-repair-1, railway-repair-2, railway-repair-4, etc.
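The size series above can be generated mechanically. A small sketch, using the railway-repair naming pattern from the example (the function name is hypothetical):

```python
def model_names(scenario, max_exponent):
    """Return scalable instance model names for sizes 1, 2, 4, ..., 2**max_exponent,
    following the railway-<scenario>-<size> pattern shown above."""
    return [f"railway-{scenario}-{2 ** e}" for e in range(max_exponent + 1)]

print(model_names("repair", 3))
# ['railway-repair-1', 'railway-repair-2', 'railway-repair-4', 'railway-repair-8']
```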

About

The Train Benchmark framework for evaluating incremental model validation performance
