Each Mono configuration requires a .conf file; the files in the configs directory are examples. The $DIR variable points to the benchmarker root directory. Neither the results directories nor the mono executable need to be in subdirectories of it, as they are in the example configs.
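A minimal sketch of such a .conf file, assuming it is a shell fragment of variable assignments that the scripts source; apart from $DIR, RESULTS_DIR and CONFIG_NAME, which the text mentions, the variable name for the mono executable and all paths are hypothetical:

```shell
# Hypothetical example .conf (assumes the config is a sourced shell
# fragment; the MONO variable name and the paths are illustrative only)
CONFIG_NAME=default
RESULTS_DIR=$DIR/results        # need not live under $DIR
MONO=$DIR/mono/mini/mono        # the mono executable under test
```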
To run the suite for a specific revision, use the runner.sh script. It must be run in the benchmarker root directory:
./runner.sh [-c <commit-sha1>] <revision> <config-file> ...
The revision can be an arbitrary string, but revision strings must be ascending under plain string comparison. This blog post describes a method for deriving such a revision string from git commits; we would have to use more than four digits for the commit counter, of course. If the commit SHA1 is available, pass it with -c; the collect script uses it for more user-friendly output.
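The counter scheme can be sketched like this; the exact git commands and the six-digit width are assumptions, but the zero-padding is what makes numeric order and string order agree:

```shell
# Hypothetical sketch of deriving a revision string from a git checkout:
#   count=$(git rev-list --count HEAD)         # commit counter
#   sha1=$(git rev-parse --short HEAD)
#   revision=$(printf '%06d-%s' "$count" "$sha1")
#
# Zero-padding keeps revisions ascending under string comparison:
a=$(printf '%06d' 99)                          # -> 000099
b=$(printf '%06d' 100)                         # -> 000100
smallest=$(printf '%s\n%s\n' "$b" "$a" | sort | head -n 1)
[ "$smallest" = "$a" ] && echo ordered         # prints "ordered"
```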
The script will place the result files in the directories $RESULTS_DIR/$CONFIG_NAME/r$REVISION.
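For instance, with hypothetical values for the three variables, a run's results end up here:

```shell
# Sketch of the result path; all three values are hypothetical examples
RESULTS_DIR=results
CONFIG_NAME=default
REVISION=000123
echo "$RESULTS_DIR/$CONFIG_NAME/r$REVISION"    # prints results/default/r000123
```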
To collect benchmarking results from all configurations and revisions, use collect.pl, like so:
./collect.pl [--conf <config-file> ...] <root-dir> <config-subdir> ...
Where each of the config-subdirs is a subdirectory of root-dir. Typically root-dir would be $RESULTS_DIR and config-subdir would be $CONFIG_NAME from the configuration files.
You can specify any number of config-files using the --conf option. Config files can specify revisions to ignore in the resulting output.
The script will generate an index.html in root-dir and further HTML and image files in the subdirectories. Note that each of the individual original result files is linked to, so the whole root-dir tree is necessary for viewing, not just the files generated by collect.pl.
To compare two or more revisions and/or configurations directly, use compare.py:
./compare.py [--output <image-file>] <revision-dir> <revision-dir> ...
Where each revision-dir is a directory containing the .times files generated by runner.sh. If an image-file is given, the graph is written to that file; otherwise it is displayed on the screen.