Last updated: $Date: 2011-09-07 11:08:21 -0400 (Wed, 07 Sep 2011) $ by $Author: CloyceS $
(To check for possible updates to this document, please see http://www.spec.org/cpu2006/Docs/ )
Contents
1 Introduction
1.1 Who Needs runspec?
1.2 About Config Files
1.2.1 Finding a config file
1.2.2 Naming your config file
1.2.3 If you change only one thing...
1.3 About Defaults
1.4 About Disk Usage and Support for Multiple Users
1.4.1 Directory tree
1.4.2 Hey! Where did all my disk space go?
1.5 Multi-user support
1.5.1 Recommended sharing method: output_root
1.5.2 Alternative sharing methods
2 Before Using runspec
2.1 Install kit
2.2 Have a config file
2.3 Undefine SPEC
2.4 Set your path: Unix
2.5 Set your path: Windows
2.6 Check your disk space
3 Using runspec
3.1 Simplest usage
3.1.1 Reportable run
3.1.2 Running selected benchmarks
3.1.3 Output files
3.2 Syntax
3.2.1 Benchmark names in run lists
3.2.2 Run order for reportable runs
3.2.3 Run order when more than one tuning is present
3.2.4 Run order when more than one suite is present
3.3 Actions
3.4 Commonly used options
--action --check_version --config --copies --flagsurl --help --ignore_errors --iterations --loose --output_format --rate --rawformat --rebuild --reportable --tune
3.5 Less commonly used options
--basepeak --nobuild --comment --define --delay --deletework --extension --fake --fakereport --fakereportable --[no]feedback --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --keeptmp --log_timestamp --machine --make_no_clobber --make_bundle --maxcompares --notes_wrap_column --parallel_setup --parallel_setup_prefork --parallel_setup_type --parallel_test --preenv --reportonly --review --[no]setprocgroup --size --speed --test --[no]table --undef --update --update_flags --unpack_bundle --use_bundle --username --verbose --version
4 Quick reference
Note: links to SPEC CPU2006 documents on this web page assume that you are reading the page from a directory that also contains the other SPEC CPU2006 documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents at one of the following locations:
Everyone who uses SPEC CPU2006 needs runspec. It is the primary tool in the suite. It is used to build the benchmarks, run them, and report on their results. All users of CPU2006 should read this document.
If you are a beginner, please start out by reading from the beginning through section 3.1 Simplest Usage. That will probably be enough to get you started.
In order to use runspec, you need a "config file", which contains detailed instructions about how to build and run the benchmarks. You may not have to learn much about config files in order to get started. Typically, you start off using a config file that someone else has previously written.
Where can you find such a config file? There are various sources:
Look in the directory $SPEC/config/ (Unix) or %SPEC%\config\ (Windows). You may find that there is already a config file there with a name that indicates that it is appropriate for your system. You may even find that default.cfg already contains settings that would be a good starting place for your system.
Look at the SPEC web site (http://www.spec.org/cpu2006/) for a CPU2006 result submission that used your system, or a system similar to yours. You can download the config file from that submission.
Alternatively, you can write your own, using the instructions in config.html.
Once you have found a config file that you would like to use as a starting point, you will probably find it convenient to copy it and modify it according to your needs. There are various options:
You can copy the config file to default.cfg. Doing so means that you won't even need to mention --config on your runspec command line.
You might find it useful to name config files after the date and the test attempt: jan07a.cfg, jan07b.cfg, and so forth. This is alleged to make it easier to trace the history of an experiment set.
If you are sharing a testbed with other users, it is probably wise to name the config file after yourself. For example, if Yusuf is trying out the new Solaris Fortran95 compiler, he might copy an existing config file to one named after himself (perhaps yusuf.cfg) and edit the new config file to add whatever options he wishes to try out in the new compiler.
At first, you may hesitate to change settings in config files, until you have a chance to read config.html. But there is one thing that you might want to change right away. Look for a line that says:
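In a typical config file the line in question looks something like this (the value shown here is only a placeholder, borrowed from the date-based naming example below; yours will differ):

```
ext = jan07a
```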
That line determines what extension will be added to your binaries. If there are comments next to that line giving instructions ("# set ext to A for this, or to B for that"), then set it accordingly. But if there are no such instructions, then usually you are free to set the extension to whatever you like, which can be very useful to ensure that your binaries are not accidentally over-written. You might add your name in the extension, if you are sharing a testbed with others. Or, you may find it convenient to keep binaries for a series of experiments, to facilitate later analysis; if you're naming your config files with names such as jan07a.cfg, you might choose to use "ext=jan07a" in the config file.
The SPEC tools have followed two principles regarding defaults:
This means (the good news) that something sensible will usually happen, even when you are not explicit about what you want. But it also means (the bad news) that if something unexpected happens, you may have to look in several places in order to figure out why it behaves differently than you expect.
The order of precedence for settings is:
Highest precedence: the runspec command line
Middle: the config file
Lowest: the tools as shipped by SPEC
Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so (perhaps in the comments to the config file).
The structure of the CPU2006 directory tree is:
$SPEC or %SPEC% - the root directory
    benchspec - some suite-wide files
        CPU2006 - the benchmarks
    bin - tools to run and report on the suite
    config - config files
    result - log files and reports
    tools - sources for the CPU2006 tools
Within each of the individual benchmarks, the structure is:
nnn.benchmark - root for this benchmark
    Spec - SPEC metadata about the benchmark
    data
        all - data used by all runs (if needed by the benchmark)
        ref - the real data set, required for all result reporting
        test - data for a simple test that an executable is functional
        train - data for feedback-directed optimization
    exe - compiled versions of the benchmark
    run - all builds and runs take place here
    src - the sources for the benchmark
When you find yourself wondering "Where did all my disk space go?", the answer is "The run directories." Most (*) activity takes place in automatically created subdirectories of $SPEC/benchspec/CPU2006/*/run/ (Unix) or %SPEC%\benchspec\CPU2006\*\run\ (Windows).
For example, suppose Bob has a config file that he is using to test some new memory optimizations, and has set

ext = BobMemoryOpt

in his config file. In that case, the tools would create directories such as these:
$ pwd
/Users/bob/cpu2006/benchspec/CPU2006/401.bzip2/run
$ ls
list
run_base_test_BobMemoryOpt.0001
run_base_train_BobMemoryOpt.0001
run_base_ref_BobMemoryOpt.0001
run_peak_test_BobMemoryOpt.0001
run_peak_train_BobMemoryOpt.0001
run_peak_ref_BobMemoryOpt.0001
$
To get your disk space back, see the documentation of the various cleaning options, below; or issue a command such as the following (on Unix systems; Windows users can select the files with Explorer):
rm -Rf $SPEC/benchspec/CPU2006/*/run/run*BobMemory*
The effect of the above command would be to delete all the run directories whose names include the extension BobMemoryOpt (matched by the wildcard *BobMemory*). Note that the command does not delete the directories where the benchmarks were built (...CPU2006/*/build/*); sometimes it can come in handy to keep the build directories, perhaps to assist with debugging.
(*) Other space: In addition to the run directories, other consumers of disk space include: (1) temporary files; for a listing of these, see the documentation of keeptmp; and (2) the build directories. For the example above, underneath:
/Users/bob/cpu2006/benchspec/CPU2006/401.bzip2/build/
will be found:
$ ls
build_base_BobMemoryOpt.0001
build_peak_BobMemoryOpt.0001
(History: As of the release of SPEC CPU2006 V1.0, directory names include the string "build" or "run" followed by the extension; in CPU2000, they simply used numbers. As of SPEC CPU2006 1.1, there is a changed location for the build directories; if you prefer the old location, see build_in_build_dir.)
(If you are not sharing a SPEC CPU2006 installation with other users, you can skip ahead to section 2.)
The SPEC CPU2006 toolset provides support for multiple users of a single installation, but the tools also rely upon users to make appropriate choices regarding setup of operating-system file protections. This section describes the multi-user features and ways of organizing protections. First, the features that are always enabled:
The SPEC-distributed source directories and data directories are not changed during testing. Instead, working directories are created as needed for builds and runs.
Each user's build and run directories are tagged with the name of the user that they belong to (in the file nnn.benchmark/run/list). Directories created for one user are not re-used for a different user.
Multiple users can run tests at the same time. (Of course, if the jobs compete with each other for resources, it is likely that they will run more slowly.)
Multiple users can even run the "same" test at the same time, and they will automatically be given separate run directories.
If you have more than one user of SPEC CPU2006, you can use additional features and choose from several different ways to organize the on-disk layout to share usage of the product. The recommended way is described first.
The recommended method for sharing a SPEC CPU2006 installation among multiple users has 4 steps:
Step | Example (Unix) |
1. Protect most of the SPEC tree read-only | chmod -R ugo-w $SPEC |
2. Allow shared access to the config directory | chmod 1777 $SPEC/config |
3. Keep your own config files | cp config/assignment1.cfg config/alan1.cfg |
4. Add an output_root to your config file | output_root=/home/${username}/spec |
More detail about the steps is below.
Most of the CPU2006 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:
chmod -R ugo-w $SPEC
The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users. It is written to by users when they create config files, and by the tools themselves: config files are updated after successful builds to associate them with their binaries.
On Unix, the above protection command needs to be supplemented with:
chmod 1777 $SPEC/config
which will have the effect (on most Unix systems) of allowing users to create config files which they can choose to protect to allow access only by themselves.
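To see how these protections interact, here is a small sketch you can run in a scratch directory (the directory and file names mirror the SPEC layout, but nothing here touches a real installation):

```shell
# A world-writable directory with the sticky bit (mode 1777) lets every
# user create files, while each file's own mode still controls access.
mkdir -p config
chmod 1777 config             # drwxrwxrwt: anyone may create entries
touch config/alan1.cfg        # a user creates his own config file...
chmod 600 config/alan1.cfg    # ...and protects it: only the owner may read/write
ls -ld config
ls -l config/alan1.cfg
```

On most Unix systems the sticky bit also prevents users from deleting each other's files in the shared directory, which is exactly the behavior wanted for $SPEC/config.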
Config files usually would not be shared between users. For example, students might create their own copies of a config file.
Alan enters:
cd /cs403
. ./shrc
cp config/assignment1.cfg config/alan1.cfg
chmod u+w config/alan1.cfg
runspec --config alan1 --action build 456.hmmer
Wendy enters:
cd /cs403
. ./shrc
cp config/assignment1.cfg config/wendy1.cfg
chmod u+w config/wendy1.cfg
runspec --config wendy1 --action build 456.hmmer
Set output_root in the config files to change the destinations of the outputs.
To see the effect of output_root, consider an example with and without the feature. If $SPEC is set to /cs403 and if ext=feb27a, then normally the build directory for 456.hmmer with base tuning would be:

/cs403/benchspec/CPU2006/456.hmmer/build/build_base_feb27a.0001

But if the config files include (near the top, before any occurrence of a section marker):
output_root=/home/${username}/spec
ext=feb27a
then Alan's build directory for 456.hmmer will be

/home/alan/spec/benchspec/CPU2006/456.hmmer/build/build_base_feb27a.0001

and Wendy's will be

/home/wendy/spec/benchspec/CPU2006/456.hmmer/build/build_base_feb27a.0001
With the above setting of output_root, log files and reports that would normally go to /cs403/result instead will go to /home/alan/spec/result and /home/wendy/spec/result. Alan will find hmmer executables underneath /home/alan/spec/benchspec/CPU2006/456.hmmer/exe. And so forth.
Summary: output_root is the recommended way to separate users. Set the protection on the original tree to read-only, except for the config directory, which should be set to allow users to write, and protect, their own config files.
(History: the output_root feature was added in SPEC CPU2006 V1.0.)
An alternative is to keep all the files in a single directory tree. In this case:
The directory tree must be writable by each of the users, which means that they have to trust each other not to modify or delete each other's files.
Directories such as result, nnn.benchmark/exe and nnn.benchmark/run are not segregated by user, so you can only have one version of (for example) benchspec/CPU2006/400.perlbench/exe/perlbench_base.jan07a
Note that user names do not appear in the directory names. For example, if Lizy, Aashish, and Ajay are sharing a directory tree on a Windows system, and each of them runs the ref workload for 401.bzip2 with base tuning and a config file that sets ext=wwc9, three run directories will be created, distinguished only by their numeric suffixes; the file nnn.benchmark\run\list records which directory belongs to whom.
To discover which 401.bzip2 run directories belong to Lizy:
F:\> cd %SPEC%\benchspec\CPU2006\401.bzip2\run
F:\cpu2006\benchspec\CPU2006\401.bzip2\run> findstr lizy list
To discover which result files belong to Aashish:
F:\cpu2006> cd %SPEC%\result
F:\cpu2006\result> findstr aashish *log
(Of course, on Unix, that would be grep instead of findstr).
Name convention: Users sharing a tree can adopt conventions to make their files more readily identifiable. As mentioned above, you can set your config file name to match your own name, and do the same for the extension.
Expid convention: Another alternative is to tag directories with labels that help to identify them based on an "experiment ID", with the config file feature expid, as described in config.html. (History: The expid was added in SPEC CPU2006 V1.0.)
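A sketch of how an expid setting might appear in a config file (the label is made up; see config.html for the authoritative description):

```
expid = memstudy1
```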
Spend the disk space: A final alternative, of course, is to not share. You can simply give each user their own copy of the entire SPEC CPU2006 directory tree. This may be the easiest way to ensure that there are no surprises (at the expense of extra disk space.)
Before using runspec, you need to:

1. Install the kit
2. Have a config file
3. Undefine SPEC (if it is left over from another suite)
4. Set your path
5. Check your disk space
The runspec tool uses perl version 5.8.7, which is installed as specperl when you install CPU2006. If you haven't already installed the suite, please see system-requirements.html, followed by install-guide-unix.html or install-guide-windows.html.
You won't get far unless you have a config file, but fortunately you can get started by using a pre-existing config file. See About Config Files, above.
If the environment variable SPEC is already defined (e.g. from a run of some other SPEC benchmark suite), it may be wise to undefine it first, e.g. by logging out and logging in, or by using whatever commands your system uses for removing definitions (such as unset).
To check whether the variable is already defined, type
echo $SPEC (Unix) or
echo %SPEC% (Windows)
On Unix systems, the desired output is nothing at all; on Windows systems, the desired output is %SPEC%.
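For sh-family shells (sh, bash, ksh), clearing a leftover definition looks like this; csh users would use "unsetenv SPEC" instead:

```shell
# Show the current value (empty quotes mean it is not set), then clear it.
echo "SPEC is currently: '$SPEC'"
unset SPEC
echo "after unset: '$SPEC'"
```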
Similarly, if your PATH includes tools from some other SPEC suite, it may be wise to remove them from your path.
Next, you need to set your path appropriately for your system type: see section 2.4 for Unix, or section 2.5 for Windows.
If you are using a Unix system, change your current directory to the top-level SPEC directory and source either shrc or cshrc:

. ./shrc       (if you use a shell in the sh family, such as sh, bash, or ksh)
source cshrc   (if you use csh or tcsh)
Note that it is, in general, a good idea to ensure that you understand what is in your path, and that you have only what you truly need. If you have non-standard versions of commonly used utilities in your path, you may avoid unpleasant surprises by taking them out.
Q. Do you have to be root?

Occasionally, users of Unix systems have asked whether it is necessary to elevate privileges, or to become 'root', prior to entering the above command. SPEC recommends (*) that you do not become root, because:

(1) To the best of SPEC's knowledge, no component of SPEC CPU needs to modify system directories, nor does any component need to call privileged system interfaces.

(2) Therefore, if it appears that there is some reason why you need to be root, the cause is likely to be outside the SPEC toolset - for example, disk protections, or quota limits.

(3) For safe benchmarking, it is better to avoid being root, for the same reason that it is a good idea to wear seat belts in a car: accidents happen, humans make mistakes. For example, if you accidentally type:
when you meant to say:
then you will be very grateful if you are not privileged at that moment.
(*) This is only a recommendation, not a requirement nor a rule.
If you are using a Microsoft Windows system, start a Command Prompt Window (previously known as an "MSDOS window"). Change to the directory where you have installed CPU2006, then edit shrc.bat, following the instructions contained therein. For example:
C:\> f:
F:\> cd diego\cpu2006
F:\diego\cpu2006\> copy shrc.bat shrc.bat.orig
F:\diego\cpu2006\> notepad shrc.bat
and follow the instructions in shrc.bat to make the appropriate edits for your compiler paths.
Caution: you may find that the lines are not correctly formatted (the text appears to be all run together) when you edit this file. If so, see the section "Using Text Files on Windows" in the Windows installation guide.
You will have to uncomment one of two lines:
rem set SHRC_COMPILER_PATH_SET=yes
or
rem set SHRC_PRECOMPILED=yes
by removing "rem" from the beginning of the desired line.
If you uncomment the first line, you will have to follow the instructions a few lines further on, to set up the environment for your compiler.
If you uncomment the second line, you must have pre-compiled binaries for the benchmarks.
Note that it is, in general, a good idea to ensure that you understand what is in your path, and that you have only what you truly need. If you have non-standard versions of commonly used utilities in your path, you may avoid unpleasant surprises by taking them out. In order to help you understand your path, shrc.bat will print it after it is done.
When you are done, set the path using your edited shrc.bat, for example:
F:\diego\cpu2006> shrc
Presumably, you checked to be sure you had enough space when you read system-requirements.html, but now might be a good time to double check that you still have enough. Typically, you will want to have at least 8 GB free disk space at the start of a run. Windows users can say "dir", and will find the free space at the bottom of the directory listing. Unix users can say "df -k ." to get a measure of free space in KB.
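A one-liner to pull out just the available-kilobytes figure on Unix (the -P flag requests the portable, non-wrapped output format; the column index is an assumption that holds for POSIX df):

```shell
# Available space, in KB, on the filesystem holding the current directory
avail_kb=$(df -kP . | awk 'NR==2 {print $4}')
echo "free space: ${avail_kb} KB"
```

You could compare $avail_kb against 8388608 (8 GB in KB) before starting a run.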
If you have done some runs, and you are wondering where your space has gone, see section 1.4.2.
It is easiest to use runspec when:
Some kind person has already compiled the benchmarks.
That kind person provides both the compiled images and their corresponding config file (see About Config Files above).
The config file does not change the defaults in surprising or esoteric ways (see About Defaults above).
In this lucky circumstance, all that needs to be done is to name the config file, select which benchmark suite is to be run (int for the SPECint2006 benchmarks, or fp for the SPECfp2006 benchmarks), and add --reportable to attempt a full run.
For example, suppose that Wilfried wants to give Ryan a config file and compiled binaries with some new integer optimizations for a Unix system. Wilfried might type something like this:
[/usr/wilfried]$ cd $SPEC
[/bigdisk/cpu2006]$ spectar -cvf - be*/C*/*/exe/*newint* config/newint.cfg | specxz > newint.tar.xz
and then Ryan might type something like this:
ryan% cd /usr/ryan/cpu2006
cpu2006% bash
bash-2.05$ . ./shrc
bash-2.05$ specxz -dc newint.tar.xz | spectar -xf -
bash-2.05$ runspec --config newint.cfg --nobuild --reportable int
In the example above, the --nobuild emphasizes that the tools should not attempt to build the binaries; instead, the prebuilt binaries should be used. If there is some reason why the tools don't like that idea (for example: the config file does not match the binaries), they will complain and refuse to run, but with --nobuild they won't go off and try to do a build.
As another example, suppose that Reinhold has given Kaivalya a Windows config file with changes from 12 August, and Kaivalya wants to run the floating point suite. He might say something like this:
F:\kaivalya\cpu2006\> shrc
F:\kaivalya\cpu2006\> specxz -dc reinhold_aug12a.tar.xz | spectar -xf -
F:\kaivalya\cpu2006\> runspec --config reinhold_aug12a --reportable fp
If you want to run a subset of the benchmarks, rather than running the whole suite, you can name them. Since a reportable run uses an entire suite, you will need to turn off reportable:
[/usr/mat/cpu2006]$ runspec --config mat_dec25j.cfg --noreportable 482.sphinx3
Look for the output of your runspec command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in config.html.
The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.
This concludes the section on simplest usage.
If simple commands such as the above are not enough to meet your needs, you can find out about commonly used options by continuing to read the next 3 sections (3.2, 3.3, and 3.4).
The syntax for the runspec command is:
runspec [options] [list of benchmarks to run]
Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:
runspec --config=dianne_july25a --debug=99 fp
runspec --config dianne_july25a --debug 99 fp
runspec --conf dianne_july25a --deb 99 fp
runspec -c dianne_july25a -v 99 fp
The list of benchmarks to run can be a suite (int, fp, or all), a list of individual benchmarks, or a combination.
For a reportable run, you must specify int, fp, or all.
Individual benchmarks can be named, numbered, or both; and they can be abbreviated, as long as you enter enough characters for uniqueness. For example, each of the following commands does the same thing:
runspec -c jason_july09d --noreportable 459.GemsFDTD 465.tonto
runspec -c jason_july09d --noreportable 459 465
runspec -c jason_july09d --noreportable GemsFDTD tonto
runspec -c jason_july09d --noreportable Gem ton
It is also possible to exclude a benchmark, using a hat (^, also known as caret, typically found as shift-6). For example, suppose your system lacks a C++ compiler, and you therefore cannot run the integer benchmarks 471.omnetpp, 473.astar, and 483.xalancbmk. You could run all of the integer benchmarks except these by entering a command such as this one:
runspec --noreportable -c kathy_sep14c int ^omnet ^astar ^xalanc
Note that if hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude, like this:
E:\cpu2006> runspec --noreportable -c cathy_apr21b int "^omnet" "^astar" "^xalanc"
A reportable run runs all the benchmarks in a suite with the test and train data sets as an additional verification that the benchmark binaries get correct results. The test and train workloads are not timed. Then, the reference workloads are run three times, so that median run time can be determined for each benchmark. For example, here are the runs for a reportable integer run of CPU2006:
$ grep runspec: *036.log
runspec: runspec -c mar26a -T base --reportable int
$ grep Running *036.log
Running Benchmarks
Running 400.perlbench test base mar21a.native32 default
Running 401.bzip2 test base mar21a.native32 default
Running 403.gcc test base mar21a.native32 default
Running 429.mcf test base mar21a.native32 default
Running 445.gobmk test base mar21a.native32 default
Running 456.hmmer test base mar21a.native32 default
Running 458.sjeng test base mar21a.native32 default
Running 462.libquantum test base mar21a.native32 default
Running 464.h264ref test base mar21a.native32 default
Running 471.omnetpp test base mar21a.native32 default
Running 473.astar test base mar21a.native32 default
Running 483.xalancbmk test base mar21a.native32 default
Running 999.specrand test base mar21a.native32 default
Running Benchmarks
Running 400.perlbench train base mar21a.native32 default
Running 401.bzip2 train base mar21a.native32 default
Running 403.gcc train base mar21a.native32 default
Running 429.mcf train base mar21a.native32 default
Running 445.gobmk train base mar21a.native32 default
Running 456.hmmer train base mar21a.native32 default
Running 458.sjeng train base mar21a.native32 default
Running 462.libquantum train base mar21a.native32 default
Running 464.h264ref train base mar21a.native32 default
Running 471.omnetpp train base mar21a.native32 default
Running 473.astar train base mar21a.native32 default
Running 483.xalancbmk train base mar21a.native32 default
Running 999.specrand train base mar21a.native32 default
Running Benchmarks
Running (#1) 400.perlbench ref base mar21a.native32 default
Running (#1) 401.bzip2 ref base mar21a.native32 default
Running (#1) 403.gcc ref base mar21a.native32 default
Running (#1) 429.mcf ref base mar21a.native32 default
Running (#1) 445.gobmk ref base mar21a.native32 default
Running (#1) 456.hmmer ref base mar21a.native32 default
Running (#1) 458.sjeng ref base mar21a.native32 default
Running (#1) 462.libquantum ref base mar21a.native32 default
Running (#1) 464.h264ref ref base mar21a.native32 default
Running (#1) 471.omnetpp ref base mar21a.native32 default
Running (#1) 473.astar ref base mar21a.native32 default
Running (#1) 483.xalancbmk ref base mar21a.native32 default
Running (#1) 999.specrand ref base mar21a.native32 default
Running (#2) 400.perlbench ref base mar21a.native32 default
Running (#2) 401.bzip2 ref base mar21a.native32 default
Running (#2) 403.gcc ref base mar21a.native32 default
Running (#2) 429.mcf ref base mar21a.native32 default
Running (#2) 445.gobmk ref base mar21a.native32 default
Running (#2) 456.hmmer ref base mar21a.native32 default
Running (#2) 458.sjeng ref base mar21a.native32 default
Running (#2) 462.libquantum ref base mar21a.native32 default
Running (#2) 464.h264ref ref base mar21a.native32 default
Running (#2) 471.omnetpp ref base mar21a.native32 default
Running (#2) 473.astar ref base mar21a.native32 default
Running (#2) 483.xalancbmk ref base mar21a.native32 default
Running (#2) 999.specrand ref base mar21a.native32 default
Running (#3) 400.perlbench ref base mar21a.native32 default
Running (#3) 401.bzip2 ref base mar21a.native32 default
Running (#3) 403.gcc ref base mar21a.native32 default
Running (#3) 429.mcf ref base mar21a.native32 default
Running (#3) 445.gobmk ref base mar21a.native32 default
Running (#3) 456.hmmer ref base mar21a.native32 default
Running (#3) 458.sjeng ref base mar21a.native32 default
Running (#3) 462.libquantum ref base mar21a.native32 default
Running (#3) 464.h264ref ref base mar21a.native32 default
Running (#3) 471.omnetpp ref base mar21a.native32 default
Running (#3) 473.astar ref base mar21a.native32 default
Running (#3) 483.xalancbmk ref base mar21a.native32 default
Running (#3) 999.specrand ref base mar21a.native32 default
The above order can be summarized as:
test
train
ref1, ref2, ref3
Sometimes, it can be useful to understand when directory setup occurs. So, let's expand the list to include setup:
setup for test
test
setup for train
train
setup for ref
ref1, ref2, ref3
If you run both base and peak tuning, base is always run first. If you do a reportable run with both base and peak, the order is:
setup for test base test, peak test setup for train base train, peak train setup for ref base ref1, base ref2, base ref3 peak ref1, peak ref2, peak ref3
If you use all in your list of benchmarks to run, integer is run first, followed by floating point. If you use all in a reportable run, the order is:
setup for test int base test, fp base test, int peak test, fp peak test setup for train int base train, fp base train, int peak train, fp peak train setup for ref int base ref1, fp base ref1 int base ref2, fp base ref2 int base ref3, fp base ref3 int peak ref1, fp peak ref1 int peak ref2, fp peak ref2 int peak ref3, fp peak ref3
When runspec is used, it normally (*) takes some kind of action for the set of benchmarks specified at the end of the command line (or defaulted from the config file). The default action is validate, which means that the benchmarks will be built if necessary, the run directories will be set up, the benchmarks will be run, and reports will be generated.
(*) Exception: if you use the --rawformat switch, then --action is ignored.
If you want to cause a different action, then you can enter one of the following runspec options:
--action build | Compile the benchmarks. More information about compiling may be found in config.html, including information about additional files that are output during a build. |
--action buildsetup | Set up build directories for the benchmarks, but do not attempt to compile them. (History: the buildsetup action was added in SPEC CPU2006 V1.0.) |
--action configpp | Preprocess the config file and dump it to stdout. (History: config file preprocessing was added in SPEC CPU2006 V1.0.) |
--action onlyrun | Run the benchmarks but do not bother to verify that they got the correct answers. Reports are always marked "invalid", since the correctness checks are skipped. Therefore, this option is rarely useful, but it can be selected if, for example, you are generating a performance trace and wish to avoid tracing some of the tools overhead. (History: for SPEC CPU2000, this option was spelled "run", but for SPEC CPU2006 the name was changed to clarify what it does.) |
--action report | Synonym for --fakereport; see also --fakereportable. |
--action run | Synonym for --action validate. (History: as of SPEC CPU2006 V1.0, the meaning of "--action run" changed in an attempt to better match what users expect "run" to do.) |
--action runsetup | Synonym for --action setup |
--action setup | Set up the run directories. Copy executables and data to work directories. |
--action validate | Build (if needed), run, check for correct answers, and generate reports. |
In addition, the following cleanup actions are available (in order by level of vigor):
--action clean | Empty all run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to jeff and this command is entered:

D:\cpu2006\> runspec --action clean --config may12a fp

then the tools will remove run directories with username jeff for fp benchmarks generated by config file may12a.cfg (in nnn.benchmark\run and nnn.benchmark\build). |
--action clobber | Clean + remove all executables of the current type for the specified benchmark set. |
--action trash | Same as clean, but do it for all users of this SPEC directory tree, and all types, regardless of what's in the config file. |
--action realclean | A synonym for --action trash |
--action scrub | Remove everybody's run and build directories and all executables for the specified benchmark set. |
Alternative cleaning method:
If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):
rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/exe
Notes
The above commands not only empty the contents of the run and exe directories; they also delete the directories themselves. That's fine; the tools will re-create the run and exe directories if they are needed again later on.
The above commands do NOT clean the build directories (unless you've set build_in_build_dir=0). Often, it's useful to preserve the build directories for debugging purposes, but if you'd like to get rid of them too, just add $SPEC/benchspec/C*/*/build to your list of directories.
Windows users:
Windows users can achieve a similar effect using Windows Explorer.
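As a sketch, the effect of those rm commands can be demonstrated on a throwaway mock tree (this is NOT a real SPEC installation; the benchmark name is borrowed from the examples elsewhere in this document):

```shell
# Mock demonstration of the cleanup globs above; $SPEC here is a
# scratch directory, not a real SPEC CPU2006 installation.
SPEC=$(mktemp -d)
mkdir -p "$SPEC/benchspec/CPU2006/473.astar/run" \
         "$SPEC/benchspec/CPU2006/473.astar/exe" \
         "$SPEC/benchspec/CPU2006/473.astar/build"
rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/exe
ls "$SPEC/benchspec/CPU2006/473.astar"    # only "build" remains
```

Note that the run and exe directories themselves are gone afterwards, not just their contents, matching the note above.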
I have so much disk space, I'll never use all of it:
Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:
SPEC_CPU2006_NO_RUNDIR_DEL
In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.
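For example, on a Unix-like shell the variable might be set like this before invoking runspec (Windows cmd.exe users would use "set" instead; the value "1" is illustrative, since the documentation above only requires that the variable be set):

```shell
# Ask the tools never to touch a used run directory.
SPEC_CPU2006_NO_RUNDIR_DEL=1
export SPEC_CPU2006_NO_RUNDIR_DEL
```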
Most users of runspec will want to become familiar with the following options.
runspec --check_version --http_proxy http://webcache.tom.spokewrenchdad.com:8080
or, equivalently, for those who prefer to abbreviate to the shortest possible amount of typing:
runspec --ch --http_p http://webcache.tom.spokewrenchdad.com:8080
The command downloads a small file (~15 bytes) from www.spec.org which contains information about the most recent release, and compares that to your release. If your version is out of date, a warning will be printed. (History: the ability to check your version vs. www.spec.org was added in SPEC CPU2006 V1.0.)
Meaning: Use number copies for a SPECrate run. See also --rate.
Note 1: Specifying just --copies N is not, by itself, enough to cause the run to be in SPECrate mode. You must also specify the command line option --rate or the config file option rate.
Note 2: specifying the number of copies on the command line will override a config file setting of copies for base; but it will not override any per-benchmark peak settings for copies.
(History: in SPEC CPU2000, "--copies" was called "--users".)
Meaning: A "flags file" provides information about how to interpret and report on the flags (e.g. -O5, -fast, etc.) that are used in a config file. (History: Flag reporting was added in SPEC CPU2006 V1.0, and has continued to evolve in subsequent releases.)
The --flagsurl switch says that a flags file may be found at the specified URL (such as http://myflags.com/flags.xml). URL schemes supported are http, ftp, and file. A URL without a scheme is assumed to be a file or path name. If you need to specify an http proxy, you can do so in your config file, with the --http_proxy command line switch, or via the http_proxy environment variable.
This example formats a result with two flags files on Windows:
rawformat --flagsurl %SPEC%\config\flags\tmp1.xml,%SPEC%\config\flags\tmp2.xml CINT2006.059.ref.rsf
The special value noflags may be used to cause rawformat to remove a stored flags file when re-formatting a previously run result.
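For instance, a command along these lines would re-format a result with its stored flags file removed (the result number is illustrative):

```shell
# Hypothetical invocation: the special value "noflags" tells rawformat
# to remove the stored flags file while re-formatting the raw file.
rawformat --flagsurl noflags CINT2006.059.ref.rsf
```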
Flags files are required by run rule 4.2.5. If a run is marked "invalid" because some flags are "unknown", you may be able to resolve the invalid marking by finding, or creating, a flags file with proper descriptions and entering commands such as:
cp CINT2006.559.ref.rsf retry
rawformat --flagsurl myfixedflags.xml --output_format pdf,raw retry
The first command preserves the original raw file, which is always recommended before doing any operations that create a new raw file. The second command creates retry.rsf and retry.pdf, both of which will include descriptions of flags from myfixedflags.xml. If you are submitting a result to SPEC, the newly-generated rawfile is the one to submit.
Note that saying rawformat is equivalent to saying runspec --rawformat, as described below.
On Windows systems, the first command above would use copy instead of cp. Also, if Windows refuses to accept the syntax with a comma in it, you might have to generate just the rawfile as a first step, then generate other format(s).
You can find out more about how to write flag description files in flag-description.html. You will find there a complete example of a flags file update using rawformat --flagsurl.
You can format a single result using multiple flags files. This feature is intended to make it easier for multiple results to share what should be shared, while separating what should be separated. Common elements (such as a certain version of a compiler) can be placed into one flags file, while the elements that differ from one system to another (such as platform notes) can be maintained separately.
New with CPU2006 V1.2, multiple flags files will typically be needed, because flags files are now separated into two types. See the overview in changes-in-v1.2.html.
[/usr/mwong/cpu2006]$ runspec --config golden --iterations 1 483.xalancbmk
as the SPEC tools will inform you that you cannot change the number of iterations on a reportable run. But either of the following commands will override the config file and just run 483.xalancbmk once:
[/usr/mwong/cpu2006]$ runspec --config golden --iterations 1 --loose 483.xalancbmk
[/usr/mwong/cpu2006]$ runspec --config golden --iterations 1 --noreportable 483.xalancbmk
all | implies all of the following except screen, check, and mail |
---|---|
cfg|config|conffile|configfile|cfgfile |
config file used for this run (e.g. CINT2006.030.ref.cfg) (History: as of the release of SPEC CPU2006 V1.0, the config file is adjusted to include any changes you may have made to fields for readers, as described in utility.html.) |
check|chk|sub|subcheck|subtest|test |
Submission syntax check (automatically enabled for reportable runs). Causes many fields to be checked for acceptable formatting - e.g. hardware available "Nov-2007", not "11/07"; memory size "4 GB", not "4Gb"; and so forth. SPEC enforces consistent syntax for results submitted to its website as an aid to readers of results, and to increase the likelihood that queries find the intended results. If you select --output_format subcheck on your local system, you can find out about most formatting problems before you submit your results to SPEC. Even if you don't plan to submit your results to SPEC, the Submission Check format can help you create reports that are more complete and readable. (History: the ability to check syntax locally, prior to submitting a result to SPEC, was added in SPEC CPU2006 V1.0. As of SPEC CPU2006 V1.1, Submission Check is automatically enabled when doing rawformat.) |
csv|spreadsheet | Comma-separated values (e.g. CINT2006.030.ref.csv). If you populate spreadsheets from your runs, you probably shouldn't be doing cut/paste of text files; you'll get more accurate data by using --output_format csv. (History: the CSV format was added in SPEC CPU2006 V1.0. The format was updated with SPEC CPU2006 V1.1 to include more information: CSV output includes much of the information in the other reports. All run times are included, and the selected run times are listed separately. The flags used are also included.) |
default | implies HTML and text |
flag|flags | Flag report (e.g. CINT2006.030.flags.ref.html). Will also be produced when formats that use it are requested (PDF, HTML). (History: flag reporting was added in SPEC CPU2006 V1.0.) |
html|xhtml|www|web | web page (e.g. CINT2006.030.ref.html) |
mail|mailto|email | All generated reports will be sent to an address specified in the config file. (History: the ability to email reports was added in SPEC CPU2006 V1.0.) |
pdf|adobe | Portable Document Format (e.g. CINT2006.030.pdf). This format is the design center for SPEC CPU2006 reporting. Other formats contain less information: text lacks graphs, postscript lacks hyperlinks, and HTML is less structured. (It does not appear as part of "default" only because some systems may lack the ability to read PDF.) |
postscript|ps|printer|print |
PostScript (e.g. CINT2006.030.ref.ps) |
raw|rsf | raw results, e.g. CINT2006.030.ref.rsf. Note: you will automatically get an rsf file for commands that run a test or that update a result (such as rawformat --flagsurl). (History: for SPEC CPU2000, raw results were written to ".raw" files; for SPEC CPU2006, they are written to ".rsf" files.) |
screen|scr|disp|display|terminal|term |
ASCII text output to stdout. (History: report generation to stdout was added in SPEC CPU2006 V1.0.) |
text|txt|ASCII|asc | ASCII text, e.g. CINT2006.030.ref.txt. (History: for SPEC CPU2000, ascii output went to files of type .asc; for SPEC CPU2006, the type is .txt.) |
Many of the synonyms above are newly accepted in CPU2006. | Now, you don't have to scratch your head and try to remember whether to spell your desired output format as "ps" or as "postscript". SPEC CPU2006 causes less dandruff than SPEC CPU2000. Your scalp may vary. |
Meaning: Select a SPECrate run instead of SPECspeed. If a parameter is supplied, it specifies the number of copies to run. (This is identical to specifying the number of copies via the --copies command-line switch; see that switch for some important additional detail.) For example, the following commands both would do a 4-copy SPECint_rate2006 run:
/bigdisk/cpu2006$ runspec --config tony_may12a --rate 4 int
/bigdisk/cpu2006$ runspec --config tony_may12a --rate --copies 4 int
(History: in SPEC CPU2000, you could specify that you wanted, say, 4 copies by entering "--rate --users 4"; for SPEC CPU2006 the syntax was simplified to just "--rate 4".)
If you have also entered --rawformat, then the effect of --rate is to format the rawfile for SPECrate metrics even if it was originally a SPECspeed run. That is, it is valid to report a single copy run as both SPECspeed and SPECrate metrics. See the example in utility.html.
Meaning: Do not attempt to do a run; instead, take an existing result file and just generate the reports. Using this option will cause any specified --actions to be ignored, and instead the result formatter will be invoked. This option is useful if (for example) you are just doing ASCII output during most of your runs, but now you would like to create additional reports for one or more especially interesting runs. To create the html and postscript files for experiment number 324, you could say:
runspec --rawformat --output_format html,ps $SPEC/result/CPU2006.324.ref.rsf
You can achieve the same effect by invoking rawformat directly:
rawformat --output_format html,ps $SPEC/result/CPU2006.324.ref.rsf
These two commands achieve the same effect because, in fact, saying runspec --rawformat just causes runspec to exit, invoking rawformat in its stead, and passing it whatever was on the command line - in this case, the --output_format html,ps string.
Note that when running rawformat, you will always get format "Submission Check", which encourages consistent formatting for various result fields when preparing final (submittable) reports. In addition, you will get the formats that you mention on the command line, or, if none are mentioned there, then you will get the defaults documented under output_format. (History: the automatic addition of subcheck to the list of outputs was added in SPEC CPU2006 V1.1)
For more information about rawformat, please see utility.html.
Meaning: Do not build binaries, even if they don't exist or MD5 sums don't match. This feature can be very handy if, for example, you have a long script with multiple invocations of runspec, and you would like to ensure that the build is only attempted once. (Perhaps your thought process might be, "If it fails the first time, fine, just forget about it until I come in Monday and look things over.") By adding --nobuild --ignore_errors to all runs after the first one, no attempt will be made to build the failed benchmarks after the first attempt.
The --nobuild feature also comes in handy when testing whether proposed config file options would potentially force an automatic rebuild.
(History: --nobuild was added in SPEC CPU2006 V1.0.)
Meaning: Defines a preprocessor macro named SYMBOL (for use in your config file) and optionally gives it the value VALUE. If no value is specified, the macro is defined with no value. SYMBOL may not contain equals signs ("=") or colons (":"). This option may be used multiple times. For example if a config file says:
%ifdef %{use_sparc_v9}
  ext = darryl.native64
  mach = native64
  ARCH_SELECT = -xtarget=native64
%else
  ext = darryl.native32
  mach = default
  ARCH_SELECT = -xtarget=native
%endif
default=base:
  OPTIMIZE = -O ${ARCH_SELECT}
Then saying runspec --define use_sparc_v9=1 will cause base optimization to be -O -xtarget=native64
(History: the ability to define symbols for use in the config file was added in SPEC CPU2006 V1.0.)
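Because --define may be used multiple times, several preprocessor macros can be set in a single invocation. A hypothetical example (the second macro name is made up for illustration; it would only matter if the config file tested it):

```shell
# Both macros become visible to the config file preprocessor, where
# %ifdef tests can select the matching sections.
runspec --config darryl --define use_sparc_v9 --define opt_level=3 int
```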
Meaning: In some cases, such as when doing version checks and loading flag description files, runspec will attempt to fetch a file, using http. If your web browser needs a proxy server in order to access the outside world, then runspec will probably want to use the same proxy server. The proxy server can be set in your config file, via the --http_proxy command line switch, or via the http_proxy environment variable.
For example, a failure of this form:
$ runspec --rawformat --output_format txt \
    --flagsurl http://portlandcyclers.net/evan.xml CFP2006.007.ref.rsf
...
Retrieving flags file (http://portlandcyclers.net/evan.xml)...
ERROR: Specified flags URL (http://portlandcyclers.net/evan.xml) could not
be retrieved. The error returned was:
500 Can't connect to portlandcyclers.net:80 (Bad hostname 'portlandcyclers.net')
improves when a proxy is provided:
$ runspec --rawformat --output_format txt \
    --flagsurl http://portlandcyclers.net/evan.xml \
    --http_proxy=http://webcache.tom.spokewrenchdad.com:8080 CFP2006.007.ref.rsf
Note that this setting will override the value of the http_proxy environment variable, as well as any setting in the config file.
By default, no proxy is used. The special value none may be used to unset any proxies set in the environment or via config file. (History: support for proxies was added in SPEC CPU2006 V1.0.)
Meaning: The machines to build for or to run. Normally used only if the config file has been written to handle more than one machine type. The config file author should tell you what machines are supported by the config file.
The machine name may only consist of alphanumerics, underscores, hyphens, and periods.
If you specify multiple machine types, multiple runs will be performed, on most systems. On Microsoft Windows systems, because of the command-line preprocessing performed by cmd.exe, it is not possible to run more than one machine type.
Warning: The "machine" feature is relatively rarely used, and is only lightly documented. The key limitation is that benchmark binary names contain only the extension. Therefore, it is quite possible, even easy, to cause binaries built in one run to be overwritten by subsequent runs. A workaround for this limitation is described in the description of "section specifiers", in config.html.
(History: the ability to run more than one machine type in a single invocation of runspec was added in SPEC CPU2006 V1.0.)
Meaning: Do not delete existing object files before attempting to build. This option should only be used for troubleshooting a problematic compile. It is against the run rules to use it when building binaries for an actual submission.
For a better way of troubleshooting a problematic compile, see the information about specmake in utility.html
Meaning: Package up the currently selected set of binaries, config files and other support files into a bundle that can be used to re-create the current run on a different system or installation.
When --make_bundle is present on the command line, most other switches have no immediate effect. The tools do not actually do the run at bundle creation time. Instead, a control file is written to the bundle to allow the run to occur on the destination system. The runspec command on the destination system will include all of your options other than those related to bundling.
Optional: additional files or directories may be specified on the command line for inclusion in the bundle.
Any such additional files must be underneath the $SPEC/ directory (or %SPEC%\ on Windows), but may not reside under any of the top-level subdirectories that ship with the suite (such as benchspec, bin, config, or result). Create a new subdirectory, such as %SPEC%\extras\ (on Windows) or $SPEC/extras/ (Unix). If your compiler license allows redistribution of run time libraries, you could place copies of them in that subdirectory, and use preenv variables to point $LD_LIBRARY_PATH at them.
In the following example, we begin by building a binary, checking its runtime requirements, and populating a directory with the needed run time libraries:
$ cat config/jul21a.cfg
ext = jul21a
int=default:
CXX = g++
OPTIMIZE = -O
$ runspec --config jul21a --action build 473.astar
runspec v6624 - Copyright 1999-2011 Standard Performance Evaluation Corporation
...
Compiling Binaries
  Building 473.astar base jul21a default: (build_base_jul21a.0000)
Build successes: 473.astar(base)
Build Complete
The log for this run is in /Volumes/CPU2006/cpu2006/result/CPU2006.011.log
runspec finished at Thu Jul 21 17:31:04 2011; 5 total seconds elapsed
$ go astar exe
/Volumes/CPU2006/cpu2006/benchspec/CPU2006/473.astar/exe
$ otool -L astar_base.jul21a
astar_base.jul21a:
        /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.10)
$ mkdir $SPEC/extras
$ cp /usr/lib/libstdc++.6.dylib $SPEC/extras
$ cp /usr/lib/libSystem.B.dylib $SPEC/extras
Next, add the needed preENV line for the run time library to the config file. Then, bundle it all up:
$ go config
/Volumes/CPU2006/cpu2006/config
$ cat > tmp
preENV_LD_LIBRARY_PATH = $[top]/extras/:\$LD_LIBRARY_PATH
$ cat jul21a.cfg >> tmp
$ mv tmp jul21a.cfg
$ runspec --config jul21a --size test --iterations 1 \
    473.astar --make_bundle mumble $SPEC/extras
runspec v6624 - Copyright 1999-2011 Standard Performance Evaluation Corporation
...
Bundling finished. The completed bundle is in
/Volumes/CPU2006/cpu2006/mumble.cpu2006bundle.xz
...
The above command causes the entire contents of $SPEC/extras/ to be added to the bundle, along with the binary for 473.astar and the config file jul21a.cfg.
Bundle verification: If you would like to verify the contents of a bundle, you can do so with "specxz -dc" and "spectar -tf -", like so:
$ specxz -dc mumble.cpu2006bundle.xz | spectar -tf -
config/mumble.control
extras/libSystem.B.dylib
extras/libstdc++.6.dylib
config/jul21a.cfg
benchspec/CPU2006/473.astar/exe/astar_base.jul21a
config/MD5.mumble.control.d557435032b173a0c3caf3dd72ad2fff
extras/MD5.libSystem.B.dylib.46d5f08f7785db37ba6df082f8a33a9e
extras/MD5.libstdc++.6.dylib.a4d07340212e0cb8cd89b5d0c12347cc
config/MD5.jul21a.cfg.9de0e2511df53692105ae1e2f7e9f6b6
benchspec/CPU2006/473.astar/exe/MD5.astar_base.jul21a.e54097975ed1ffd4a96d54a76fd8d673
$
Don't worry about the odd looking extra files in the bundle; these are md5 checksums, which are used to help ensure bundle integrity.
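Judging from the listing above, each MD5.&lt;name&gt;.&lt;digest&gt; entry embeds the md5 digest of its companion file in its own name (this is an inference from the listing, not a documented interface). Under that assumption, a file could be spot-checked by hand:

```shell
# Recreate the apparent convention with a scratch file, then verify it.
cd "$(mktemp -d)"
printf 'hello' > mylib.dylib
digest=$(md5sum mylib.dylib | awk '{print $1}')
touch "MD5.mylib.dylib.$digest"           # companion checksum entry
# Verification: recompute the digest and look for the matching entry.
check=$(md5sum mylib.dylib | awk '{print $1}')
[ -e "MD5.mylib.dylib.$check" ] && echo "md5 OK"   # prints "md5 OK"
```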
Using a bundle: See the descriptions of --use_bundle or --unpack_bundle for information on what to do with a bundle when you've got one.
Note: --make_bundle can't bundle files up that aren't underneath the top-level $SPEC directory. If you use the output_root config file option with --make_bundle, please make sure that it points to somewhere under $SPEC.
WARNING: Although the features to create and use bundles are intended to make it easier to run SPEC CPU2006, the tester remains responsible for compliance with the run rules. And, of course, both the creators and the users of bundles are responsible for compliance with any applicable software licenses.
(History: the --make_bundle feature was added in SPEC CPU2006 V1.1)
Meaning: The number of run directories to set up in parallel. Parallelism is only applied per-benchmark; the maximum number of setups that will be done in parallel is equal to the number of copies being run. This has no effect for SPECspeed runs or 1-copy SPECrate runs. A setting of "1" (the default) effectively disables this feature.
Notes:
The parallel setup / test features control parallelism during the preparation phase for running the benchmarks, not the actual runs. Therefore, they have no effect on the setting of the report field "Auto Parallel: Yes/No", discussed in config.html.
Prior to experimenting with the command line features for parallel setup/test, you might find it easier to try out the corresponding config file features, at least while you are debugging your methods and ensuring that you have your quote levels correct. Once you have a config file that works, then you can experiment with command line overrides.
The parallel setup / test features cannot be used on Windows.
(History: The parallel setup / test features were added in SPEC CPU2006 V1.1.)
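For example, a hypothetical 16-copy SPECrate run that sets up all 16 run directories in parallel might look like this (the config file name and copy count are illustrative):

```shell
# --parallel_setup has no effect unless multiple copies are being run,
# so it is paired here with a multi-copy --rate run.
runspec --config may12a --rate 16 --parallel_setup 16 int
```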
Meaning: Do a SPECspeed run, even if the config file calls for a SPECrate run. The config file settings for rate and copies will be silently ignored. (SPECspeed runs always, by definition, use only a single copy.)
Support for --speed with runspec is new in V1.2; formerly, it affected only rawformat. The change is intended to make the switch more consistent with other switches because, in general, the command line takes precedence over the config file.
When formatting results with --rawformat, causes the rawfile to be formatted with SPECspeed metrics, even if it was from a 1-copy SPECrate run. That is, it is valid to report a single copy run with both SPECspeed and SPECrate metrics. Attempting to format a multi-copy SPECrate run as a SPECspeed result is an error. See the example in utility.html. (History: the ability to format a 1-copy SPECrate result as a SPECspeed result was added in SPEC CPU2006 V1.1.)
$ runspec --newflags --verbose 7
runspec v5603 - Copyright 1999-2007 Standard Performance Evaluation Corporation
Using 'solaris-sparc' tools
Reading MANIFEST... 18342 files
Loading runspec modules................
Locating benchmarks...found 31 benchmarks in 13 benchsets.
Checking for flag updates for 400.perlbench
Checking for flag updates for 401.bzip2
Checking for flag updates for 403.gcc
.
.
.
Checking for updates to Docs/flags/flags-advanced.xml
Checking for updates to Docs/flags/flags-simple.xml
Flag and config file update successful!
There is no log file for this run.
runspec finished at Thu Jan 10 06:22:13 2008; 9 total seconds elapsed
$
(History: flag reporting, and the ability to update flag description files, are features that were added in SPEC CPU2006 V1.0.)
Meaning: Unpack a previously-created bundle of binaries and config file, but do not attempt to start a run using the settings in the bundle. For your reference, the command that would have been used is printed out. See --make_bundle for more information about bundles. (History: The --unpack_bundle option was added in SPEC CPU2006 V1.1.)
Meaning: Use a previously-created bundle of binaries and config file for the current run. Unless overridden, the run will use the set of extension, machine name, tuning levels, and benchmarks that were in effect when the bundle was created. If you specify a run that would use binaries that the bundle doesn't contain, the tools will attempt to build them as usual before the run. See --make_bundle for more information about bundles.
The following is an excerpt from the output that is printed when we use the bundle that was created in the example at --make_bundle:
$ runspec --use_bundle /Volumes/CPU2006/cpu2006/mumble.cpu2006bundle.xz
runspec v6674 - Copyright 1999-2011 Standard Performance Evaluation Corporation
Using 'macosx' tools
Reading MANIFEST... 19145 files
Loading runspec modules................
Locating benchmarks...found 31 benchmarks in 6 benchsets.
Use Bundle: /Volumes/CPU2006/cpu2006/mumble.cpu2006bundle.xz
Uncompressing bundle file "/Volumes/CPU2006/cpu2006/mumble.cpu2006bundle.xz"...done!
Reading bundle table of contents...5 files
Unpacking bundle file...done
Bundle unpacking complete.
About to run: /Volumes/CPU2006/cpu2006/bin/specperl /Volumes/CPU2006/cpu2006/bin/runspec --config=jul21a.cfg --ext=jul21a --mach=default --size=test --iterations=1 --tune=base 473.astar
runspec v6674 - Copyright 1999-2011 Standard Performance Evaluation Corporation
Using 'macosx' tools
Reading MANIFEST... 19145 files
Loading runspec modules................
Locating benchmarks...found 31 benchmarks in 6 benchsets.
Reading config file '/Volumes/CPU2006/cpu2006/config/jul21a.cfg'
Setting up environment for runspec...
About to re-exec runspec...
------------------------------------------------------------------------------
------------------------------------------------------------------------------
runspec v6674 - Copyright 1999-2011 Standard Performance Evaluation Corporation
Using 'macosx' tools
Reading MANIFEST... 19145 files
Loading runspec modules................
Locating benchmarks...found 31 benchmarks in 6 benchsets.
Reading config file '/Volumes/CPU2006/cpu2006/config/jul21a.cfg'
Benchmarks selected: 473.astar
Compiling Binaries
  Up to date 473.astar base jul21a default
Setting Up Run Directories
  Setting up 473.astar test base jul21a default: existing (run_base_test_jul21a.0000)
Running Benchmarks
  Running 473.astar test base jul21a default
Success: 1x473.astar
Note in the example that runspec restarted itself twice. It unpacked the bundle, then restarted itself to run the command that had been entered at the time that the bundle was created. Upon doing so, it discovered the preENV line for $LD_LIBRARY_PATH in the config file. It applied the environment setting, then began all over again.
WARNING: Although the features to create and use bundles are intended to make it easier to run SPEC CPU2006, the tester remains responsible for compliance with the run rules. And, of course, both the creators and the users of bundles are responsible for compliance with any applicable software licenses.
(History: the --use_bundle option was added in SPEC CPU2006 V1.1.)
(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").
-a | Same as --action |
---|---|
--action action | Do: build|buildsetup|clean|clobber|configpp|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate |
--basepeak | Copy base results to peak (use with --rawformat) |
--nobuild | Do not attempt to build binaries |
-c | Same as --config |
-C | Same as --copies |
--check_version | Check whether an updated version of CPU2006 is available |
--comment "text" | Add a comment to the log and the stored configfile. |
--config file | Set config file for runspec to use |
--copies | Set the number of copies for a SPECrate run |
-D | Same as --rebuild |
-d | Same as --deletework |
--debug | Same as --verbose |
--define SYMBOL[=VALUE] | Define a config preprocessor macro |
--delay secs | Add delay before and after benchmark invocation |
--deletework | Force work directories to be rebuilt |
--dryrun | Same as --fake |
--dry-run | Same as --fake |
-e | Same as --extension |
--ext | Same as --extension |
--extension ext[,ext...] | Set the extensions |
-F | Same as --flagsurl |
--fake | Show what commands would be executed. |
--fakereport | Generate a report without compiling codes or doing a run. |
--fakereportable | Generate a fake report as if "--reportable" were set. |
--[no]feedback | Control whether builds use feedback directed optimization |
--flagupdate | Same as --update |
--flagsupdate | Same as --update |
--flagsurl url | Location (url or filespec) where to find your flags file |
--getflags | Same as --update |
--graph_auto | Let the tools pick minimum and maximum for the graph |
--graph_min N | Set the minimum for the graph |
--graph_max N | Set the maximum for the graph |
-h | Same as --help |
--help | Print usage message |
--http_proxy | Specify the proxy for internet access |
--http_timeout | Timeout when attempting http access |
-I | Same as --ignore_errors |
-i | Same as --size |
--ignore_errors | Continue with benchmark runs even if some fail |
--ignoreerror | Same as --ignore_errors |
--info_wrap_column N | Set wrap width for non-notes informational items |
--infowrap | Same as --info_wrap_column |
--input | Same as --size |
--iterations N | Run each benchmark N times |
--keeptmp | Keep temporary files |
-l | Same as --loose |
--loose | Do not produce a reportable result |
--noloose | Same as --reportable |
-m | Same as --machine |
-M | Same as --make_no_clobber |
--mach | Same as --machine |
--machine name[,name...] | Set the machine types |
--make_bundle | Create a package of binaries and config file |
--make_no_clobber | Do not delete existing object files before building. |
--max_active_compares | Same as --maxcompares |
--maxcompares N | Set the number of concurrent compares to N |
--mockup | Same as --fakereportable |
-n | Same as --iterations |
-N | Same as --nobuild |
--newflags | Same as --update |
--notes_wrap_column N | Set wrap width for notes lines |
--noteswrap | Same as --notes_wrap_column |
-o | Same as --output_format |
--output_format format[,format...] | Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text |
--parallel_setup | Number of run directories to set up in parallel |
--parallel_setup_prefork | Command to run before setup |
--parallel_setup_type | Method to use for parallel setup |
--parallel_test | Number of test/train workloads to run in parallel |
--preenv | Allow environment settings in config file to be applied |
-R | Same as --rawformat |
-r | Same as --rate |
--rate [N] | Do throughput (SPECrate) run, or rawformat with SPECrate metrics |
--rawformat | Format raw file |
--rebuild | Force a rebuild of binaries |
--reportable | Produce a reportable result |
--noreportable | Same as --loose |
--reportonly | Same as --fakereport |
--[no]review | Format results for review |
-s | Same as --reportable |
-S SYMBOL[=VALUE] | Same as --define |
-S SYMBOL:VALUE | Same as --define |
--[no]setprocgroup | [Don't] try to create all processes in one group. |
--size size[,size...] | Select data set(s): test|train|ref |
--speed | Convert SPECrate run to SPECspeed, or do a SPECspeed run |
--strict | Same as --reportable |
--nostrict | Same as --loose |
-T | Same as --tune |
--[no]table | Do [not] include a detailed table of results |
--test | Run various perl validation tests on specperl |
--train_with | Change the training workload |
--tune | Set the tuning levels to one of: base|peak|all |
--tuning | Same as --tune |
--undef SYMBOL | Remove any definition of this config preprocessor macro |
-U | Same as --username |
--unpack_bundle | Unpack a package of binaries and config file |
--use_bundle | Use a package of binaries and config file |
--update | Check www.spec.org for updates to benchmark and example flag files, and config files |
--update_flags | Same as --update |
--username | Name of user to tag as owner for run directories |
-v | Same as --verbose |
--verbose | Set verbosity level for messages to N |
-V | Same as --version |
--version | Output lots of version information |
-? | Same as --help |
Copyright 1999-2011 Standard Performance Evaluation Corporation
All Rights Reserved