Latest: www.spec.org/cpu2026/Docs/
1.1 Defaults
1.2 Syntax
1.3 Benchmarks and suites
1.4 Run order
1.5 Storage Usage
1.5.1 Directory tree
1.5.2 Hey! Where did all my disk space go?
1.6 Multi-user support and limitations
--action --check_version --config --copies --flagsurl --help --ignore_errors --iterations --loose --output_format --rawformat --rebuild --reportable --threads --tune
--baseonly --basepeak --nobuild --comment --define --delay --deletework --[no]enable_monitor --expid --fake --fakereport --fakereportable --[no]feedback --[no]force_monitor --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --keeptmp --label --log_timestamp --make_no_clobber --notes_wrap_column --output_root --parallel_test --parallel_test_workloads --[no]power --preenv --reportonly --review --rrrrate New --rrrrate_inc[=N] New --[no]setprocgroup --size --[no]table --test --undef --update --use_submit_for_compare --use_submit_for_speed --username --verbose --version
What is runcpu? runcpu is the primary tool for the SPEC CPU®2026 Benchmark Suite, a product of the SPEC® non-profit corporation (about SPEC). You use it from a Linux or Unix shell or the Microsoft Windows command line to build and run benchmarks, with commands such as these:
runcpu --config=eniac.cfg --action=build 735.gem5_r
runcpu --config=colossus.cfg --threads=16 872.marian_s
runcpu --config=z3.cfg --copies=64 fprate
The first command compiles the benchmark named 735.gem5_r. The second runs the OpenMP benchmark 872.marian_s using 16 threads. The third runs 64 copies of all the SPECrate Floating Point benchmarks.
Before reading this document: If you have not already done so, please install and test your SPEC CPU 2026 distribution (ISO image). This document assumes that you have already installed the suite and verified that it works.
If you have not done the above, please see the brief instructions in the Quick Start guide, or the more detailed section "Testing Your Installation" (Unix, Windows).
The SPEC CPU default settings described in this document may be adjusted by config files.
The order of precedence for settings is:
| Highest precedence: | runcpu command |
| Middle: | config file |
| Lowest: | the tools as shipped by SPEC |
Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so.
The syntax for the runcpu command is:
runcpu [options] [list of benchmarks to run]
Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:
runcpu --config=dianne_july25a --debug=99 fprate
runcpu --config dianne_july25a --debug 99 fprate
runcpu --conf dianne_july25a --deb 99 fprate
runcpu -c dianne_july25a -v 99 fprate
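The unique-prefix rule can be sketched in a few lines. This is only an illustration of how such abbreviation matching generally works, not runcpu's actual implementation, and the option list here is deliberately abbreviated:

```python
# Sketch (not runcpu's code) of unique-prefix option resolution:
# an abbreviation is accepted only if exactly one long option starts with it.
OPTIONS = ["--config", "--copies", "--debug", "--define", "--delay"]

def resolve(abbrev: str) -> str:
    matches = [o for o in OPTIONS if o.startswith(abbrev)]
    if len(matches) == 1:
        return matches[0]          # unambiguous abbreviation
    raise ValueError(f"{abbrev!r} is ambiguous or unknown: {matches}")

print(resolve("--conf"))   # --config
print(resolve("--deb"))    # --debug
```

An ambiguous prefix such as `--co` (which could be `--config` or `--copies`) is rejected rather than guessed.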
In the list of benchmarks to run, you can use one or more individual benchmarks, such as 723.llvm_r, or you can run entire suites, using one of the Short Tags below.
| Short Tag | Suite | Contents | Metrics | How many copies? What do higher scores mean? |
|---|---|---|---|---|
| intspeed | SPECspeed®2026 Integer | 13 integer benchmarks | SPECspeed2026_int_base, SPECspeed2026_int_peak | SPECspeed suites always run one copy of each benchmark. Higher scores indicate that less time is needed. |
| fpspeed | SPECspeed®2026 Floating Point | 13 floating point benchmarks | SPECspeed2026_fp_base, SPECspeed2026_fp_peak | SPECspeed suites always run one copy of each benchmark. Higher scores indicate that less time is needed. |
| intrate | SPECrate®2026 Integer | 14 integer benchmarks | SPECrate2026_int_base, SPECrate2026_int_peak | SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many. Higher scores indicate more throughput (work per unit of time). |
| fprate | SPECrate®2026 Floating Point | 12 floating point benchmarks | SPECrate2026_fp_base, SPECrate2026_fp_peak | SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many. Higher scores indicate more throughput (work per unit of time). |

The "Short Tag" is the canonical abbreviation for use with runcpu, where context is defined by the tools. In a published document, context may not be clear. To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above.
Supersets: There are several supersets that run more than one of the above.
Synonyms: Suite selection is done with the short tags: intrate, fprate, intspeed, fpspeed.
You can also use full metric names. You can say: runcpu SPECspeed2026_int_base
Some alternates (such as int_rate or CPU2026) may provoke runcpu to say that it is trying to DWIM (wikipedia), but these are not recommended.
Benchmark names: Individual benchmarks can be named, numbered, or both.
Separate them with a space.
Names can be abbreviated, as long as you enter enough characters for uniqueness.
Each of the following commands does the same thing:
runcpu -c jason_july09d 750.sealcrypto_r 723.llvm_r 772.marian_r
runcpu -c jason_july09d 750 723 772
runcpu -c jason_july09d sealcrypto_r llvm_r marian_r
runcpu -c jason_july09d seal llvm marian_r
To exclude a benchmark: Use a hat (^, also known as caret, typically found at shift-6). Note that if the hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude.
bash-n.n.n$ runcpu -c cathy_apr21c fprate ^748 ^nest_r
pickyShell% runcpu -c cathy_apr21c fprate '^748' '^nest_r'
E:\cpu2026> runcpu -c cathy_apr21c fprate "^748" "^nest_r"
Turning off reportable: If your config file sets reportable=yes, then you cannot run a subset unless you turn that option off.
[/usr/cathy/cpu2026]$ runcpu --config cathy_apr21b --noreportable fprate ^parest
A reportable run does these steps:
Test: Set up all of the benchmarks using the test workload. Run them. Verify that they get correct answers. The test workloads are run merely as an additional verification of correct operation of the generated executables; their times are not reported and do not contribute to overall metrics. Therefore multiple benchmarks can be run simultaneously, as in the example below, where the tester has set --parallel_test to allow up to 4 simultaneous tests.
Train: Do the same steps for the train workload, for the same reasons, with the same verification, non-reporting, and parallelism.
Ref: Run the refrate (7xx benchmarks) or the refspeed (8xx benchmarks) workload.
If running refspeed, multiple --threads are optionally allowed.
If running refrate, multiple --copies are optionally allowed, as in the example below, which uses 4 copies in base.
(*) For reportable runs, --iterations must be 2 or 3.
Report: Generate reports from the run data.
Summarizing reportable run order: The order can be summarized as:
setup for test
test (*)
setup for train
train (*)
setup for ref
ref1, ref2 [, ref3] (**)
(*) Multiple benchmarks may overlap if --parallel_test > 1
(**) One benchmark at a time. Third run only if --iterations=3.
Reportable order when more than one tuning is present: If you run both base and peak tuning, base is always run first.
setup for test
test base and peak (*)
setup for train
train base and peak (*)
setup for ref
base ref1, base ref2 [, base ref3] (**)
peak ref1, peak ref2 [, peak ref3] (**)
(*) Multiple benchmarks may overlap if --parallel_test > 1
Peak and base may also overlap.
(**) One benchmark at a time. Third run only if --iterations=3.
Reportable order when more than one suite is present: If you start a reportable using more than one suite, all the work is done for one suite before proceeding to the next.
For example runcpu --iterations=3 --reportable intspeed fprate would cause:
intspeed setup test
intspeed test
intspeed setup train
intspeed train
intspeed setup refspeed
intspeed refspeed #1
intspeed refspeed #2
intspeed refspeed #3
fprate setup test
fprate test
fprate setup train
fprate train
fprate setup refrate
fprate refrate #1
fprate refrate #2
fprate refrate #3
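The multi-suite ordering above can be generated mechanically. The following is only an illustration of the described order, not runcpu's code:

```python
# Reproduce the reportable run order for:
#   runcpu --iterations=3 --reportable intspeed fprate
# All work for one suite completes before the next suite begins.
suites = {"intspeed": "refspeed", "fprate": "refrate"}
iterations = 3

order = []
for suite, ref in suites.items():
    for workload in ("test", "train"):
        order += [f"{suite} setup {workload}", f"{suite} {workload}"]
    order.append(f"{suite} setup {ref}")
    order += [f"{suite} {ref} #{i}" for i in range(1, iterations + 1)]

print("\n".join(order))
```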
If you request more than one suite (for example, by using all) then a table is printed to show you the run order:
Action Run Mode Workload Report Type       Benchmarks
------ -------- -------- ----------------- ----------
report rate     refrate  SPECrate2026_fp   fprate
report speed    refspeed SPECspeed2026_fp  fpspeed
report rate     refrate  SPECrate2026_int  intrate
report speed    refspeed SPECspeed2026_int intspeed
Reportable example: A log from a reportable run with copies=4 is excerpted below. The Unix grep command picks out lines that match one of the quoted strings; Microsoft Windows users could try findstr instead.
$ grep -e 'Running B' -e 'Starting' -e '(#' CPU2026.007.log
Starting runcpu for intrate...
Running Benchmarks (up to 4 concurrent processes)
Starting runcpu for 706.stockfish_r test base branden_oct15b
Starting runcpu for 707.ntest_r test base branden_oct15b
Starting runcpu for 708.sqlite_r test base branden_oct15b
Starting runcpu for 710.omnetpp_r test base branden_oct15b
Starting runcpu for 714.cpython_r test base branden_oct15b
Starting runcpu for 721.gcc_r test base branden_oct15b
Starting runcpu for 723.llvm_r test base branden_oct15b
Starting runcpu for 727.cppcheck_r test base branden_oct15b
Starting runcpu for 729.abc_r test base branden_oct15b
Starting runcpu for 734.vpr_r test base branden_oct15b
Starting runcpu for 735.gem5_r test base branden_oct15b
Starting runcpu for 750.sealcrypto_r test base branden_oct15b
Starting runcpu for 753.ns3_r test base branden_oct15b
Starting runcpu for 777.zstd_r test base branden_oct15b
Running Benchmarks (up to 4 concurrent processes)
Starting runcpu for 706.stockfish_r train base branden_oct15b
Starting runcpu for 707.ntest_r train base branden_oct15b
Starting runcpu for 708.sqlite_r train base branden_oct15b
Starting runcpu for 710.omnetpp_r train base branden_oct15b
Starting runcpu for 714.cpython_r train base branden_oct15b
Starting runcpu for 721.gcc_r train base branden_oct15b
Starting runcpu for 723.llvm_r train base branden_oct15b
Starting runcpu for 727.cppcheck_r train base branden_oct15b
Starting runcpu for 729.abc_r train base branden_oct15b
Starting runcpu for 734.vpr_r train base branden_oct15b
Starting runcpu for 735.gem5_r train base branden_oct15b
Starting runcpu for 750.sealcrypto_r train base branden_oct15b
Starting runcpu for 753.ns3_r train base branden_oct15b
Starting runcpu for 777.zstd_r train base branden_oct15b
Running Benchmarks
Running (#1) 706.stockfish_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:42:58]
Running (#1) 707.ntest_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:45:49]
Running (#1) 708.sqlite_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:49:05]
Running (#1) 710.omnetpp_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:53:00]
Running (#1) 714.cpython_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:55:40]
Running (#1) 721.gcc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:57:55]
Running (#1) 723.llvm_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:02:16]
Running (#1) 727.cppcheck_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:05:45]
Running (#1) 729.abc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:08:17]
Running (#1) 734.vpr_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:11:01]
Running (#1) 735.gem5_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:13:55]
Running (#1) 750.sealcrypto_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:16:19]
Running (#1) 753.ns3_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:19:40]
Running (#1) 777.zstd_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:26:19]
Running (#2) 706.stockfish_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:30:54]
Running (#2) 707.ntest_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:33:41]
Running (#2) 708.sqlite_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:36:58]
Running (#2) 710.omnetpp_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:40:49]
Running (#2) 714.cpython_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:43:30]
Running (#2) 721.gcc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:45:43]
Running (#2) 723.llvm_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:49:57]
Running (#2) 727.cppcheck_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:53:25]
Running (#2) 729.abc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:55:56]
Running (#2) 734.vpr_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:58:43]
Running (#2) 735.gem5_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:01:41]
Running (#2) 750.sealcrypto_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:04:06]
Running (#2) 753.ns3_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:07:27]
Running (#2) 777.zstd_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:14:07]
$ (white space adjusted for readability)
The structure of the CPU 2026 directory tree is:
$SPEC or %SPEC% - the root directory
    benchspec - Some suite-wide files
        CPU - The benchmarks
    bin - Tools to run and report on the suite
    config - Config files
    Docs - HTML and plaintext documentation
    result - Log files and reports
    tmp - Temporary files
    tools - Sources for the CPU 2026 tools
Within each of the individual benchmarks, the structure is:
nnn.benchmark - root for this benchmark
    build - Benchmark binaries are built here
    data
        all - Data used by all runs (if needed by the benchmark)
        ref - The timed data set
        test - Data for a simple test that an executable is functional
        train - Data for feedback-directed optimization
    Docs - Documentation for this benchmark
    exe - Compiled versions of the benchmark
    run - Benchmarks are run here
    Spec - SPEC metadata about the benchmark
    src - The sources for the benchmark
Many SPECspeed benchmarks (8nn.benchmark_s) share content that is located under a corresponding SPECrate benchmark (7nn.benchmark_r). Shared source files may be compiled differently for SPECspeed vs. SPECrate. For example, the sources for 849.fotonik3d_s can be found at 749.fotonik3d_r/src/, and only 849.fotonik3d_s can be compiled with OpenMP.
Look for the output of your runcpu command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in the Config Files document.
The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.
When you find yourself wondering "Where did all my disk space go?", the answer is usually "The run directories." Most activity takes place in automatically created subdirectories of $SPEC/benchspec/CPU/*/run/ (Unix) or %SPEC%\benchspec\CPU\*\run\ (Windows). Other consumers of disk space underneath individual nnn.benchmark directories include the build/ and exe/ directories.
At the top of the directory tree, space is used by your config/ and result/ directories, and by the temporary directories $SPEC/tmp and output_root/tmp.
Usually, the largest amount of space is in the run directories. For example, the tester who generated the result excerpted above is lazy about cleaning, and at the moment this paragraph is written, there are many SPECrate run directories on the system:
---------------------------------------
One lazy user's space. Yours will vary.
---------------------------------------
Directories                        GB
----------------------------     -----
Top-level (config,result,tmp)      0.1
Benchmarks
  $SPEC/benchspec/CPU/*/exe        2
  $SPEC/benchspec/CPU/*/build      9
  $SPEC/benchspec/CPU/*/run      198
---------------------------------------
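To see where your own space went, a small sketch along these lines may help. It uses only the standard library; $SPEC is read from the environment, defaulting to the current directory:

```python
# Total the disk usage of the exe/, build/, and run/ directories
# across all benchmarks, mirroring the table above.
import os
from glob import glob

def dir_bytes(path: str) -> int:
    """Sum the sizes of all files under path (0 if it does not exist)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished mid-walk; ignore it
    return total

spec = os.environ.get("SPEC", ".")
for kind in ("exe", "build", "run"):
    dirs = glob(os.path.join(spec, "benchspec", "CPU", "*", kind))
    gb = sum(dir_bytes(d) for d in dirs) / 1e9
    print(f"{kind:6s} {gb:8.1f} GB")
```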
If you use the config file label feature, then directories are named to try to make it easy for you to hunt them down. For example, suppose Andres has a config file that he is using to test some new memory optimizations using SPECrate (multi-copy) mode. He has set label=AndresMemoryOpt in his config file. In that case, the tools would create directories such as these:
$ pwd
/Users/andres/cpu2026/benchspec/CPU/750.sealcrypto_r
$ ls -d */*Andres*
build/build_base_AndresMemoryOpt.0000
exe/sealcrypto_r_base.AndresMemoryOpt
run/run_base_refrate_AndresMemoryOpt.0000
run/run_base_refrate_AndresMemoryOpt.0001
run/run_base_refrate_AndresMemoryOpt.0002
run/run_base_refrate_AndresMemoryOpt.0003
run/run_base_refrate_AndresMemoryOpt.0004
run/run_base_refrate_AndresMemoryOpt.0005
run/run_base_refrate_AndresMemoryOpt.0006
run/run_base_refrate_AndresMemoryOpt.0007
run/run_base_refrate_AndresMemoryOpt.0008
run/run_base_refrate_AndresMemoryOpt.0009
run/run_base_refrate_AndresMemoryOpt.0010
run/run_base_refrate_AndresMemoryOpt.0011
run/run_base_refrate_AndresMemoryOpt.0012
run/run_base_test_AndresMemoryOpt.0000
run/run_base_train_AndresMemoryOpt.0000
$
To get your disk space back, see the documentation of the various cleaning options, below.
SPEC CPU 2026 benchmark suites support multiple users sharing an installation; however, you must choose carefully regarding file protections. This section describes the multi-user features and protection options.
Features that are always enabled:
Limitations: The default methods impose two key limitations, which will not be safe in some environments:
Partial solution(?) expid+conventions:
You can deal with limitation #2 if users adopt certain habits. For example, Sunil could name all his config files Sunil-something.cfg. He could use runcpu --expid=Sunil, or the corresponding config file could set expid=Sunil, to cause his results to be placed under $SPEC/result/Sunil (or %SPEC%\result\Sunil\) and binaries under nnn.benchmark/exe/Sunil/. Unfortunately, this alleged solution still requires that the tree be writeable by all users, and will not help Sunil if Mat comes along and blithely does one of the alternate cleaning methods.
Solution(?) Give up:
You could just choose to spend the disk space to give each person their own tree. For SPEC CPU 2026 v1.0, this may increase the disk space requirement by about 10 GB per user.
Recommended Solution: output_root. The recommended method uses 4 steps:
| Step | Example (Unix) |
| (1) Protect most of the SPEC tree read-only | chmod -R ugo-w $SPEC |
| (2) Allow shared access to the config directory | chmod 1777 $SPEC/config; chmod u+w $SPEC/config/*cfg |
| (3) Keep your own config files | cp config/assignment1.cfg config/kevin1.cfg |
| (4) Use the --output_root switch or add an output_root to your config file. | runcpu --output_root=~/cpu2026 or output_root = /home/${username}/cpu2026 |
More detail:
Most of the CPU 2026 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:
chmod -R ugo-w $SPEC
The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users, and config files must be writeable. On most Unix systems, chmod 1777 is very useful: it lets anyone create files, which they own, control, and protect. (1777 is commonly used for /tmp for this very reason.)
chmod 1777 $SPEC/config
chmod u+w $SPEC/config/*cfg
Config files usually would not be shared between users. For example, students might create their own copies of a config file:
Kevin enters:
    cd /cs403/cpu2026
    . ./shrc
    cd config
    cp assignment1.cfg kevin1.cfg
    chmod u+w kevin1.cfg
    runcpu --config=kevin1 --action=build 782.lbm_r

Christoph enters:
    cd /cs403/cpu2026
    . ./shrc
    cd config
    cp assignment1.cfg christoph1.cfg
    chmod u+w christoph1.cfg
    runcpu --config=christoph1 --action=build 782.lbm_r
Set output_root in the config files to change the destinations of the outputs. For example, if config files include (near the top):
output_root=/home/${username}/spec
label=feb27a
then these directories will be used for the above runcpu command:
Kevin's directories:
    build: /home/kevin/spec/benchspec/CPU/782.lbm_r/build/build_base_feb27a.0001
    Logs:  /home/kevin/spec/result

Christoph's directories:
    build: /home/christoph/spec/benchspec/CPU/782.lbm_r/build/build_base_feb27a.0000
    Logs:  /home/christoph/spec/result
Navigation: Unix users can easily navigate an output_root tree using ogo.
Most runcpu commands perform an action on a set of benchmarks.
The default action is validate.
The actions are described in two tables below: first, actions that relate to building and running; and then actions
regarding cleanup.
| --action build | Compile the benchmarks, using the config file specmake options. |
| --action buildsetup | Set up build directories for the benchmarks.
This option may be useful when debugging a build: you can set up a directory and play with it as a private sandbox. |
| --action onlyrun | Run the benchmarks but do not verify that they got the correct answers.
This option may be useful while applying CPU 2026 for some other purpose, such as tracing instructions for a hardware simulator, or generating a system load while debugging an operating system feature. |
| --action report | Synonym for --fakereport; see also --fakereportable. |
| --action run | Synonym for --action validate. |
| --action runsetup | Set up the run directory (or directories). This option may be useful when debugging a run. |
| --action setup | Synonym for --action runsetup |
| --action validate | Build (if needed), set up directories, run, check for correct answers, generate reports.
This is the default action. |
Cleaning actions are listed in order from least thorough to most:
| --action clean | Empty run and build directories for the specified benchmark set for the current user. For example, if the current OS username is jeff and this command is entered:
D:\cpu2026\> runcpu --action clean --config may12a fprate
then the tools will remove build and run directories with username jeff for fprate benchmarks generated by config file may12a.cfg. |
| --action clobber | Clean + remove the corresponding executables. |
| --action trash | Remove run and build directories for all users and all labels for the specified benchmarks. |
| --action realclean | A synonym for --action trash |
| --action scrub | Trash + remove the corresponding executables. |
| Caution | Fake mode is not implemented for the cleaning actions.
For example, if you say runcpu --fake --action=clean the cleaning really happens. |
Clean by hand:
If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):
rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/build
rm -Rf $SPEC/benchspec/C*/*/exe
The above commands not only empty the contents of the run, build, and exe directories; they also delete the directories themselves. That's fine; the tools will re-create them if they are needed again later on.
Result directories can be cleaned or renamed. Don't worry about creating a new directory; runcpu will do so automatically. Take care not to surprise any currently-running users. If you move result directories, it is a good idea to clean temporary directories at the same time.
Example:
cd $SPEC
mv result old-result
rm -Rf tmp/
cd output_root # (If you use an output_root)
rm -Rf tmp/
Windows users: Windows users can achieve similar effects using the rename command to move directories, and the rd command to remove directories.
I have so much disk space, I'll never use all of it:
Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:
SPEC_CPU2026_NO_RUNDIR_DEL
In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.
New with CPU 2026: The runcpu utility includes the ability to run heterogeneous workloads, in addition to the traditional Homogeneous Capacity Method. As explained below, the heterogeneous method is known as "rolling round-robin rate" (rrr-rate); it produces a table of results that includes benchmark-by-benchmark data. The table does not include SPECratios, geometric means, or official comparable SPEC metrics. The rrr-rate method may be of interest in academic and research contexts. In such cases, please bear in mind the rules about research/academic publication: (SPEC CPU 2026) (SPEC Fair Use rule).
Modern server systems, especially multi-tenant environments, do not operate under homogeneous loads: different virtual machines or containers may run different workloads simultaneously. The rrr-rate method characterizes performance under such heterogeneous conditions and reveals how load on adjacent cores affects individual benchmarks — the “noisy neighbor” effect.
Rolling round-robin rate (rrr-rate) mode is similar to normal rate mode with the following key differences:
The key properties of the rrr-rate mode are:
Comparison of rate and rrr-rate mode
|   | rate | rrr-rate |
|---|---|---|
| Parallel benchmark characteristics | All benchmarks running in parallel are identical. | All benchmarks running in parallel are as different as possible (when --copies ≤ number of selected benchmarks; see key properties above for the wrap-around case). |
| copies | Number of parallel processes running the same benchmark. | Number of independent benchmark queues, each running a different benchmark. |
| iterations | Number of times each benchmark is repeated. All copies of a benchmark finish before the next benchmark starts. | Number of times each process repeats its full benchmark queue. A process runs all benchmarks in its queue before starting the next iteration. |
| Benchmark queue | One global queue shared by all processes. All copies of one benchmark finish before the next benchmark begins. | Each process has its own queue containing all benchmarks. Queues differ only in their starting benchmark and step size (--rrrrate_inc). |
| Synchronization | All copies synchronize after every benchmark. | All copies start at the same time. No further synchronization during the run. |
| Benchmark verification | After each individual benchmark run. | After all benchmarks in the queue have completed (see Benchmark verification below). |
| Result metric | Elapsed time from start of the first copy to end of the last copy to finish. | Mean run time and coefficient of variation (CV) per benchmark. |
Run order based on an example
runcpu --rrrrate -c oct29a --iterations=2 --copies=4 727.cppcheck_r 714.cpython_r 750.sealcrypto_r 734.vpr_r
This command will do the following:
The execution order of benchmarks across both iterations will be (the | marks the boundary between iteration 1 and iteration 2):
Process 0: 714.cpython_r 727.cppcheck_r 734.vpr_r 750.sealcrypto_r | 714.cpython_r 727.cppcheck_r 734.vpr_r 750.sealcrypto_r
Process 1: 727.cppcheck_r 734.vpr_r 750.sealcrypto_r 714.cpython_r | 727.cppcheck_r 734.vpr_r 750.sealcrypto_r 714.cpython_r
Process 2: 734.vpr_r 750.sealcrypto_r 714.cpython_r 727.cppcheck_r | 734.vpr_r 750.sealcrypto_r 714.cpython_r 727.cppcheck_r
Process 3: 750.sealcrypto_r 714.cpython_r 727.cppcheck_r 734.vpr_r | 750.sealcrypto_r 714.cpython_r 727.cppcheck_r 734.vpr_r
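Each copy's queue can be derived mechanically; a sketch that reproduces the order shown above, under two assumptions (this is an illustration, not runcpu's actual code): benchmarks are sorted numerically, and copy N starts N steps into the list with a step size (--rrrrate_inc) of 1.

```python
# Derive per-copy benchmark queues for rrr-rate mode (illustration only).
benchmarks = sorted(["727.cppcheck_r", "714.cpython_r",
                     "750.sealcrypto_r", "734.vpr_r"])
copies, iterations, inc = 4, 2, 1

def queue(copy):
    """Benchmark order for one copy within a single iteration."""
    n = len(benchmarks)
    return [benchmarks[(copy * inc + k) % n] for k in range(n)]

for c in range(copies):
    one_iter = " ".join(queue(c))
    print(f"Process {c}: " + " | ".join([one_iter] * iterations))
```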
When running the example above, the output to the screen includes:
Running Benchmarks
Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:50:39]
Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:50:39]
Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:50:39]
Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:50:39]
Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:53:36]
Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:54:16]
Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:54:30]
Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:54:55]
Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:57:15]
Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:57:27]
Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:58:30]
Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:58:44]
Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:01:00]
Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:01:29]
Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:01:43]
Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:02:28]
Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:05:30]
Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:05:31]
Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:05:32]
Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:05:38]
Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:08:39]
Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:09:18]
Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:09:37]
Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:09:55]
Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:12:11]
Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:12:22]
Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:13:29]
Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:13:49]
Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:15:56]
Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:16:25]
Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:16:37]
Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:17:23]
Success: 8x714.cpython_r 8x727.cppcheck_r 8x734.vpr_r 8x750.sealcrypto_r
Quick validation
The special case --rrrrate_inc=0 --iterations=1 --copies=N (where N equals the number of selected benchmarks) runs all benchmarks in parallel, one per copy, without rolling. This is sometimes called quick-validation-mode: it exercises the full configuration (toolchain, compiler flags, libraries) in parallel to check for compile or validation errors, without the overhead of a full rrr-rate run. See --rrrrate_inc for details.
Console output
In normal rate mode, one line is printed when all copies of a benchmark start together, e.g.:
Running (#1) 714.cpython_r refrate (ref) base oct29a (4 copies)
In rrr-rate mode, each copy prints its own line (as shown in the log above), because copies run different benchmarks and start them independently. The success counter also counts each copy individually, as shown above: 8x714.cpython_r means 4 copies × 2 iterations = 8 successful runs.
Benchmark verification
To avoid tail effects, benchmark verification in rrr-rate mode is separated from the benchmark run. After all copies finish running every benchmark in their queue, a separate parallel verification phase runs — one process per copy. This ensures that a slow verification step does not stall other active cores during the main run.
Because verification is deferred, all run results must be preserved until verification completes. For this reason, rrr-rate creates one run directory per benchmark × copy × iteration, compared to one per copy in normal rate mode. Plan additional disk space accordingly for runs with many benchmarks, copies, or iterations.
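The growth in run directories can be seen with the example's numbers (4 benchmarks, 4 copies, 2 iterations); a sketch of the arithmetic, not runcpu output:

```python
# rrr-rate keeps one run directory per benchmark x copy x iteration,
# whereas normal rate mode reuses one directory per copy.
benchmarks, copies, iterations = 4, 4, 2
rrr_dirs = benchmarks * copies * iterations   # kept until deferred verification
rate_dirs = copies                            # reused across benchmarks
print(rrr_dirs, rate_dirs)                    # 32 4
```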
Note: --minimize_rundirs is incompatible with rrr-rate mode. If set, it is silently overridden (a notice is printed at runtime), because rrr-rate requires all run directories to remain intact until deferred verification completes.
Monitor hooks
The monitor_pre_bench and monitor_post_bench hooks support $SPECCOPYNUM and $BIND variable expansion in rrr-rate mode, which allows monitoring commands to be restricted to specific copy numbers. See monitors.html for general information about hooks.
Rolling round-robin rate results
The key result metrics reported for each benchmark are the average run time (time_avg) and its coefficient of variation (time_cv), computed from the individual copy run times.
Example calculation with 4 copies and 1 iteration, where the four copies of a benchmark take 220, 225, 228, and 231 seconds:
mean = (220 + 225 + 228 + 231) / 4 = 226 s
variance = ((220-226)^2 + (225-226)^2 + (228-226)^2 + (231-226)^2) / 4
= (36 + 1 + 4 + 25) / 4 = 16.5
std dev = sqrt(16.5) = 4.06 s
CV = 4.06 / 226 = 0.018
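The arithmetic above can be checked with Python's statistics module (using the population variance, as in the worked example):

```python
import statistics

times = [220, 225, 228, 231]            # run times of the four copies, seconds
mean = statistics.fmean(times)          # (220+225+228+231)/4 = 226.0
variance = statistics.pvariance(times)  # population variance = 16.5
sigma = variance ** 0.5                 # standard deviation, ~4.06
cv = sigma / mean                       # coefficient of variation, ~0.018
```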
Energy metrics are not reported in rrr-rate mode; energy fields in the output appear as --.
The .rsf raw output file contains the following rrr-rate-specific fields for each benchmark and tuning level. Each field is written under the path spec.cpu2026.results.benchmark.tune.field, where dots in benchmark names are replaced with underscores (e.g. spec.cpu2026.results.714_cpython_r.base.time_avg). The individual per-copy, per-iteration run records are one level deeper: spec.cpu2026.results.benchmark.tune.NNN.field, where NNN is a zero-padded index ordered iteration-first, then copy: NNN = iteration × copies + copynum. For example, with --copies=2 --iterations=3, record 000 is (copy 0, iter 0), 001 is (copy 1, iter 0), 002 is (copy 0, iter 1), and so on. Each record includes copynum and iteration fields that identify which copy and iteration it belongs to. The copy 0 record for each iteration additionally carries the following run-time statistics, computed over the run times of all copies for that iteration:
| Field | Description |
|---|---|
| min, max | Minimum and maximum copy run time |
| mean | Mean copy run time |
| median | Median copy run time |
| variance, sigma | Variance and standard deviation of copy run times |
| cv | Coefficient of variation of copy run times (sigma / mean) |
| quartile_low, quartile_high, iqr | First quartile (Q1), third quartile (Q3), and interquartile range (Q3−Q1) |
| whisker_low, whisker_high | Tukey fences: Q1−1.5×IQR and Q3+1.5×IQR |
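For illustration, the quartile-based fields can be computed as follows. Note that quartile conventions differ between tools; this sketch uses Python's default "exclusive" method, which may not match the SPEC tools' exact convention:

```python
import statistics

def tukey_fences(times):
    """Return (whisker_low, whisker_high) as defined in the table above:
    Q1 - 1.5*IQR and Q3 + 1.5*IQR."""
    # statistics.quantiles with n=4 yields the three quartile cut points.
    q1, _, q3 = statistics.quantiles(times, n=4)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr
```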
The per-benchmark summary fields are:
| Field | Description |
|---|---|
| valid | Overall validity: S (success), CE, RE, or VE |
| copies, iterations | Values of the corresponding runcpu parameters |
| cN_time | Median execution time across iterations for copy N |
| time_avg | Mean of per-copy median execution times (the value shown in the result table) |
| time_cv | Coefficient of variation of per-copy median execution times (the CV shown in the result table) |
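The record-index arithmetic and the summary metrics described above can be sketched as follows (a minimal illustration of the documented formulas, not the tools' implementation):

```python
import statistics

def record_index(copynum, iteration, copies):
    """NNN is iteration-first: all copies of iteration 0, then iteration 1, ..."""
    return f"{iteration * copies + copynum:03d}"

def summary_metrics(times_by_copy):
    """times_by_copy: a list of per-iteration run times for each copy.
    cN_time is the median across iterations for copy N; time_avg and time_cv
    are the mean and coefficient of variation of those per-copy medians."""
    medians = [statistics.median(t) for t in times_by_copy]
    avg = statistics.fmean(medians)
    cv = statistics.pstdev(medians) / avg
    return medians, avg, cv
```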
An example result output (from the runcpu command above) contains two tables. The first shows each copy × iteration run time individually:
Estimated
Base
Benchmarks Copy Iter Run Time
---------------- ---- ---- --------
714.cpython_r 0 0 224 S
714.cpython_r 1 0 228 S
714.cpython_r 2 0 223 S
714.cpython_r 3 0 226 S
714.cpython_r 0 1 225 S
714.cpython_r 1 1 227 S
...
The trailing S is the per-run validity status (S = success). Unlike regular rate mode, no run is marked with * as “selected”; all runs contribute to the per-copy median.
The second table shows one summary row per benchmark with the avg time and CV:
Estimated
Base avg Base CV
Benchmarks Copies Iters Time Time
---------------- ------ ------ -------- --------
706.stockfish_r NR
707.ntest_r NR
708.sqlite_r NR
710.omnetpp_r NR
714.cpython_r 4 2 224 0.0232
721.gcc_r NR
723.llvm_r NR
727.cppcheck_r 4 2 183 0.0274
729.abc_r NR
734.vpr_r 4 2 234 0.0295
735.gem5_r NR
750.sealcrypto_r 4 2 261 0.0249
753.ns3_r NR
777.zstd_r NR
NR (Not Run) indicates a benchmark that belongs to the full suite but was not selected for this run. In the CSV output format, the status column for summary rows shows -- rather than a validity code, because rrr-rate has no single “selected” run at the per-benchmark level.
Most users of runcpu will want to become familiar with the following options.
This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.
runcpu --check_version --http_proxy http://webcache.tom.spokewrenchdad.com:8080

or, equivalently, for those who prefer to abbreviate to the shortest possible amount of typing:

runcpu --ch --http_p http://webcache.tom.spokewrenchdad.com:8080

The command downloads a small file (~15 bytes) from www.spec.org which contains information about the most recent release, and compares that to your release. If your version is out of date, a warning will be printed.
Meaning: Use the specified number of copies for a SPECrate run.
Note that specifying the number of copies on the command line will override any config file setting of copies.
Meaning: A "flags file" tells runcpu -- and the reader -- how to interpret tuning options, for
example -O3 or -Ofast.
If you want more than one, separate them with commas, or repeat the --flagsurl
switch.
These are equivalent:
runcpu --flagsurl=$SPEC/compiler.xml,$SPEC/platform.xml
runcpu --flagsurl=$SPEC/compiler.xml --flagsurl=$SPEC/platform.xml
You can use either a file path or an http:// address. If needed, add an --http_proxy (or use the corresponding config file option).
The special value noflags may be used to cause rawformat to remove a stored flags file when re-formatting a previously run result.
Help, I got an error message about INVALID RUN:
##############################################################################
# INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN    #
#                                                                            #
# Your run was marked invalid because it has one or more flags in the        #
# "unknown" category.  You might be able to resolve this problem without     #
# re-running your test; see                                                  #
#     https://www.spec.org/cpu2026/Docs/runcpu.html#flagsurl                 #
# for more information.                                                      #
#                                                                            #
# INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN    #
##############################################################################
Flags files are required by rule 4.6. If you don't
have one, or if your flags file is obsolete, you will see the above error.
To fix it:
Find the sections of the report marked "Unknown".
You can ask your compiler vendor for help, or you can adapt a flags file from other results, or you can fix it yourself.
Once your new flags files are available, make a copy of your rawfile.
Then, insert the flags files, using either of these two equivalent commands:
rawformat --flagsurl=...
runcpu --rawformat --flagsurl=...
On Unix:
cp CPU2026.138.intrate.rsf retry.rsf
rawformat --flagsurl $SPEC/new.compiler.xml,$SPEC/new.platform.xml retry.rsf
On Windows:
copy CPU2026.138.intrate.rsf retry.rsf
rawformat --flagsurl %SPEC%\new.compiler.xml,%SPEC%\new.platform.xml retry.rsf
Meaning: How many times to run each benchmark.
Reportable runs must use 2 or 3 iterations. Here is how the settings for iterations and reportable affect each other:
If you use a command such as:

[/usr/mwong/cpu2026]$ runcpu --config golden --iterations 1 723.llvm_r

the SPEC tools will inform you that you cannot change the number of iterations on a reportable run. But either of the following commands will override the config file and just run 723.llvm_r once:

[/usr/mwong/cpu2026]$ runcpu --config golden --iterations 1 --loose 723.llvm_r
[/usr/mwong/cpu2026]$ runcpu --config golden --iterations 1 --noreportable 723.llvm_r
| Name (synonyms) | Meaning |
|---|---|
| all | Implies all of the following except screen, check, and mail |
| config (cfg, cfgfile, configfile, conffile) | Config file used for this run, written as a numbered file in the result directory, for example $SPEC/result/CPU2026.030.fprate.refrate.cfg |
| check (subcheck, reportcheck, reportable, reportablecheck, chk, sub, subtest, test) | Reportable syntax check (automatically enabled when using --rawformat) |
| csv (spreadsheet) | Comma-separated values. If you populate spreadsheets from your runs, you probably should not cut/paste data from text files; you'll get more accurate data by using --output_format csv. The csv report includes all runs, more decimal places, system information, and even the compiler flags. |
| default | Implies html and text |
| flag (flags) | Flag report. Also produced when formats that use it (PDF, HTML) are requested. |
| html (xhtml, www, web) | Web page |
| mail (mailto, email) | All generated reports are sent to an address specified in the config file |
| pdf (adobe) | Portable Document Format. This format is the design center for SPEC CPU 2026 reporting. Other formats contain less information: text lacks graphs, PostScript lacks hyperlinks, and HTML is less structured. (PDF is not part of "default" only because some systems may lack the ability to read it.) |
| postscript (ps, printer, print) | PostScript |
| raw (rsf) | The unformatted raw results, written to a numbered file in the result directory that ends with .rsf (e.g. /spec/cpu2026/rc4/result/CPU2026.042.fpspeed.rsf). Your raw result files are your most important files, because the other formats are generated from them. |
| screen (scr, disp, display, terminal, term) | ASCII text output to stdout |
| text (txt, ASCII, asc) | Plain ASCII text file |
Meaning: Do not attempt to do a run; instead, just generate reports from an existing rawfile.
Output will always include the results of the check format unless you add nocheck to your list of output_formats.
Using this option will cause any specified --actions to be ignored. The runcpu program is actually exited and rawformat is executed instead. These commands do the same thing:
runcpu --rawformat something
rawformat something
The rawformat utility or the --rawformat switch can be useful if (for example) you are just doing ASCII output during most of your runs, but now you would like to create additional reports for one or more especially interesting runs. To create the HTML and PostScript files for experiment number 77, you could say either of these:
runcpu --rawformat --output_format html,ps $SPEC/result/CPU2026.077.fpspeed.rsf
rawformat --output_format html,ps $SPEC/result/CPU2026.077.fpspeed.rsf
For more information about rawformat, please see utility.html.
Meaning: Wherever it is practical to do so in an automated fashion, enforce the CPU 2026 run rules, so as to produce a result which is suitable for public reporting and/or submission to SPEC. This option forces various other options, for example sysinfo is required. When you do a reportable run, the list of benchmarks to run must be an entire suite. The order of events for reportable runs is described above.
Reportable runs must use 2 or 3 iterations. Here is how the settings for iterations and reportable affect each other:
Meaning: When the benchmarks are run, set the environment variable OMP_NUM_THREADS=N
Notes
This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.
Meaning: Do not build binaries, even if they don't exist or checksums don't match.
The --nobuild feature can be very handy if, for example, you have a script with multiple invocations of runcpu, and you would like to ensure that the build is only attempted once. (Perhaps your thought process might be, "If it fails the first time, fine, just forget about it until I come in Monday and look things over.") By adding --nobuild --ignore_errors to all runs after the first one, no attempt will be made to build the failed benchmarks after the first attempt.
The --nobuild feature also comes in handy when testing whether proposed config file options would potentially force an automatic rebuild.
Example: It can be useful to keep notes as you try different experiments. In the example shown here, notes are recorded in the runcpu comment; and these are saved in the .csv report:
$ runcpu -c newSys.cfg -i test -n 1 --comment="Ronen's tweak to Rohit's version of Prasad's suggested tuning" 782.lbm
 .
 .
 .
$ grep 'runcpu command:' CPU2026.007.*csv
"runcpu command:","runcpu ... --comment Ronen's tweak to Rohit's version of Prasad's suggested tuning
$
Meaning: Define a preprocessor macro named SYMBOL
and optionally give it the value VALUE.
If no value is specified, the macro is defined with no value.
SYMBOL
may not contain equals signs ("=") or colons (":").
This option may be used multiple times.
Many of the Example config files in your config/ directory have sections similar to this:
%ifndef %{build_ncpus}
% define build_ncpus 8
%endif
.
.
.
makeflags = --jobs=%{build_ncpus}
If you have a large server and want compiles to complete more quickly, you could say runcpu --define build_ncpus=99 and specmake will create up to 99 compile jobs at a time.
Meaning: Use monitor_* hooks. The monitoring hooks are disabled for reportable runs.
Meaning: Enable or disable FDO options in the config file.
Normally, when Feedback-Directed Optimization (FDO) options are set in the
config file, multiple-pass compilation is done, along with training
runs. Using --nofeedback will cause the config file FDO settings to be ignored and a single-pass
compilation will occur.
Explicitly specifying --feedback will have an effect only if
there are appropriate FDO options in the configuration file.
The command line wins unconditionally over the config file.
Meaning: Same effect as the force_monitor config file setting. The monitoring hooks are disabled for reportable runs.
Meaning: In some cases, such as when doing version checks and loading flag description files, runcpu will attempt to fetch a file, using http. If your web browser needs a proxy server in order to access the outside world, then runcpu will probably want to use the same proxy server. The proxy server can be set by:
For example, a failure of this form:
$ runcpu --rawformat --output_format txt \
--flagsurl http://portlandcyclers.net/evan.xml CPU2026.007.fprate.rsf
...
Retrieving flags file (http://portlandcyclers.net/evan.xml)...
ERROR: Specified flags URL (http://portlandcyclers.net/evan.xml) could not be retrieved.
The error returned was: 500 Can't connect to portlandcyclers.net:80
(Bad hostname 'portlandcyclers.net')
improves when a proxy is provided:
$ runcpu --rawformat --output_format txt \
    --flagsurl http://portlandcyclers.net/evan.xml \
    --http_proxy=http://webcache.tom.spokewrenchdad.com:8080 CPU2026.007.fprate.rsf
Note that this setting will override the value of the http_proxy environment variable, as well as any setting in the config file.
By default, no proxy is used. The special value none may be used to unset any proxies set in the environment or via config file.
Meaning: Do not delete existing object files before attempting to build. This option should only be used for troubleshooting a problematic compile. It cannot be used for a reportable run.
Rather than using this option, it would probably be easier to just go to the build directory and use specmake.
Meaning: If set to a non-empty value, all output files will be rooted under the named
directory, instead of under $SPEC (or %SPEC%).
If directory is not an absolute path (one that begins with "/" on Unix, or a device name on Windows), the path
will be created under $SPEC.
This option can be useful for sharing an installation.
It can also be useful if you want to optimize your I/O, as discussed in the corresponding SPEC CPU 2026 Config
Files section on output_root.
Meaning: For reportable runs, the number of (untimed, but mandatory) test and train
workloads to run in parallel.
Because these are untimed, it is often convenient to run many of them at once.
See also the discussion in the corresponding SPEC CPU 2026 Config Files section on
parallel_test
SPECspeed note: If you set both --parallel_test=N and --threads=M (which is only meaningful for SPECspeed testing), the thread request is silently ignored during the test and train runs. This is done in order to prevent accidental system overload with N x M threads.
Meaning: Enable/disable the optional power measurement mode of the benchmark.
If you wish to measure power, you will need:
In your config file, you specify the network location for the controller system; set the expected voltage and current ranges; and describe your measurement setup for readers of your results. See the config file documentation on Power Measurement.
Once your hardware and config file are set up, then:
Usage with rawformat: It is permitted to reformat a power+performance run as performance-only, using the rawformat utility with the --nopower option. You may wish to do so if a run is marked invalid due to sampling or other problems detected during power measurement.
Meaning: Format results for review, meaning that additional detail will be printed that normally would not be present.
Meaning: Enable rolling round-robin rate mode. The rolling round-robin rate mode is new with CPU 2026.
Meaning: Adjust the increment value of the rolling round-robin rate mode. For example, if you have a list of 6 benchmarks to run (say, a b c d e f), and you set --rrrrate_inc=2 then the order of execution will be:
Process 0   a c e b d f
Process 1   b d f a c e
Process 2   c e b d f a
Process 3   d f a c e b
Process 4   e b d f a c
Process 5   f a c e b d
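The printed schedule can be reproduced with the following sketch. The exact algorithm runcpu uses is not specified here; this is one construction that matches the table above (helper names are hypothetical):

```python
from math import gcd

def stride_order(benchmarks, inc):
    """Visit the list with stride `inc`, wrapping around; when a stride
    cycle closes, continue from the next not-yet-visited entry."""
    n = len(benchmarks)
    if inc == 0:
        return list(benchmarks)  # degenerate case: order unchanged
    order = []
    for start in range(gcd(inc, n)):
        i = start
        for _ in range(n // gcd(inc, n)):
            order.append(benchmarks[i])
            i = (i + inc) % n
    return order

def rrr_schedule(benchmarks, inc):
    """Copy p starts at its own benchmark, then follows the rolling order."""
    q = stride_order(benchmarks, inc)
    n = len(q)
    return [[q[(q.index(b) + k) % n] for k in range(n)] for b in benchmarks]

for p, row in enumerate(rrr_schedule(list("abcdef"), 2)):
    print(f"Process {p}", *row)
```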
Setting this value is not sufficient to enable rolling round-robin rate mode (--rrrrate is needed for that).
Setting this value to 0 has the effect that copies will only run their initial benchmark.
Negative values trigger an error.
The special case of rrr-rate mode with --rrrrate_inc=0 --iterations=1 --copies=N, where N is the number of selected benchmarks, is sometimes called quick-validation-mode: it runs all selected benchmarks in parallel, one per copy, to speed up benchmarking and validation. This can be useful for testing whether a given configuration file (including all referenced dependencies such as toolchains, compiler flags, and external libraries) can produce a result without compile or validation errors.
The rolling round-robin rate mode is new with CPU 2026.
Meaning: Selects the size of input data to run: test, train, or ref.
The reference workload ("ref") is the only size whose time appears in reports.
You might choose to use runcpu --size=test while debugging a new set of compilation options.
Reportable runs automatically invoke all three sizes: they ensure that your binaries can produce correct results with the test and train workloads and then run the ref workload either 2 or 3 times for the actual measurements.
Caution: When requesting workloads, it is best to stick with the above three: test, train, and ref. Other options (or synonyms) may be useful to benchmark developers or with other suites that use this toolset; they are not documented here because it is not possible to generate SPEC CPU 2026 metrics using workloads other than the ones that correspond to these three.
Meaning: Run the Perl test suite to verify correct operation of specperl, the SPEC CPU
pre-compiled version of Perl.
When this option is used, runcpu will not perform any other actions.
specperl is added when you run install.sh or install.bat.
If something goes wrong while installing and you want support, the output
of runcpu --test may be needed.
Meaning: Check www.spec.org for updates.
Apply them if there are any.
When this option is used, runcpu will not perform any other actions.
If you set --verbose to 7 or higher, you will get a list of files that are checked.
Example:
$ cd $SPEC
$ cat version.txt
1.0.1
$ time runcpu --update
SPEC CPU(r) 2026 Benchmark Suites
Copyright 1995-2026 Standard Performance Evaluation Corporation (SPEC)
runcpu v5749
Using 'linux-x86_64' tools
Reading file manifests... read 32325 entries from 2 files in 0.18s (175442 files/s)
Loading runcpu modules.................
Locating benchmarks...found 47 benchmarks in 53 benchsets.
CPU 2026 update mode selected
Downloading update information...
Selected update:
From v1.0.1 to v1.0.2
Update size: 3190 KB
Downloading update metadata: 1312 B (5989 B/s)
Update metadata parsed successfully.
Update summary:
Files to remove: 6
Directories to remove: 0
Files to change: 134
Files to add: 6
Update metadata downloaded and verified.
Checking files that will be changed or removed by the update...
Checks completed.
Update: Downloading update: 3192 KB/3190 KB (100%; 932 KB/s)
Update downloaded and verified.
Uncompressing update file
Proceed with update? (y/n)
y
Suite update successful!
There is no log file for this run.
runcpu finished at 2018-05-18 11:27:29; 8 total seconds elapsed
real 0m9.038s
user 0m1.598s
sys 0m0.680s
$ cat version.txt
1.0.2
$
Meaning: Use submit commands during the comparison phase of the run, if submit was used for the measurement phase of the run.
Meaning: Use submit commands for SPECspeed runs. The submit facility is by default only used for SPECrate runs.
Example: Shayantika, Duane, and Kristen are sharing a system. The default behavior is that run directories are tagged with the username found in the environment, but values can also be explicitly entered, as in:
runcpu -c newSys.cfg -i test -n 1 --username=Shayantika 782.lbm
runcpu -c newSys.cfg -i test -n 1 --username=Duane 782.lbm
runcpu -c newSys.cfg -i test -n 1 --username=Kristen 782.lbm
After the above commands, there are three different run directories tagged to match the above user name:
$ cd benchspec/CPU/782.lbm_r/run
$ cat list
run_base_test_newSys.0001 dir=/spec/benchspec/CPU/782.lbm_r/run/run_base_test_newSys.0001 ... username=Shayantika
run_base_test_newSys.0002 dir=/spec/benchspec/CPU/782.lbm_r/run/run_base_test_newSys.0002 ... username=Duane
run_base_test_newSys.0003 dir=/spec/benchspec/CPU/782.lbm_r/run/run_base_test_newSys.0003 ... username=Kristen
Meaning: Print detailed version information, including versions of:
specdiff
specinvoke
specmake
specperl
specpp
specrxp
specxz
When this option is used, runcpu will not perform any other actions.
If something goes wrong and you want support, the
output of runcpu --version may be needed.
(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").
| -a | Same as --action |
|---|---|
| --action action | Do one of: build|buildsetup|clean|clobber|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate |
| --basepeak | Copy base results to peak (use with --rawformat) |
| --nobuild | Do not attempt to build binaries |
| -c | Same as --config |
| -C | Same as --copies |
| --check_version | Check whether an updated version of CPU 2026 is available |
| --comment "text" | Add a comment to the log and the stored configfile. |
| --config file | Set config file for runcpu to use |
| --copies | Set the number of copies for a SPECrate run |
| -D | Same as --rebuild |
| -d | Same as --deletework |
| --debug | Same as --verbose |
| --define SYMBOL[=VALUE] | Define a config preprocessor macro |
| --delay secs | Add delay before and after benchmark invocation |
| --deletework | Force work directories to be rebuilt |
| --dryrun | Same as --fake |
| --dry-run | Same as --fake |
| --expid=dir | Experiment id, a subdirectory to use for results/runs/exe |
| -F | Same as --flagsurl |
| --fake | Show what commands would be executed. |
| --fakereport | Generate a report without compiling codes or doing a run. |
| --fakereportable | Generate a fake report as if "--reportable" were set. |
| --[no]feedback | Control whether builds use feedback directed optimization |
| --flagsurl url | Location (url or filespec) where to find your flags file |
| --graph_auto | Let the tools pick minimum and maximum for the graph |
| --graph_min N | Set the minimum for the graph |
| --graph_max N | Set the maximum for the graph |
| -h | Same as --help |
| --help | Print usage message |
| --http_proxy | Specify the proxy for internet access |
| --http_timeout | Timeout when attempting http access |
| -I | Same as --ignore_errors |
| -i | Same as --size |
| --ignore_errors | Continue with benchmark runs even if some fail |
| --ignoreerror | Same as --ignore_errors |
| --info_wrap_column N | Set wrap width for non-notes informational items |
| --infowrap | Same as --info_wrap_column |
| --input | Same as --size |
| --iterations N | Run each benchmark N times |
| --keeptmp | Keep temporary files |
| -L | Same as --label |
| -l | Same as --loose |
| --label label | Set the label for executables, build directories, and run directories |
| --loose | Do not produce a reportable result |
| --noloose | Same as --reportable |
| -M | Same as --make_no_clobber |
| --make_no_clobber | Do not delete existing object files before building. |
| --mockup | Same as --fakereportable |
| -n | Same as --iterations |
| -N | Same as --nobuild |
| --notes_wrap_column N | Set wrap width for notes lines |
| --noteswrap | Same as --notes_wrap_column |
| -o | Same as --output_format |
| --output_format format[,format...] | Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text |
| --output_root=dir | Write all files here instead of under $SPEC |
| --parallel_test | Number of test/train workloads to run in parallel |
| --[no]power | Control power measurement during run |
| --preenv | Allow environment settings in config file to be applied |
| -R | Same as --rawformat |
| --rawformat | Format raw file |
| --rebuild | Force a rebuild of binaries |
| --reportable | Produce a reportable result |
| --noreportable | Same as --loose |
| --reportonly | Same as --fakereport |
| --[no]review | Format results for review |
| -s | Same as --reportable |
| -S SYMBOL[=VALUE] | Same as --define |
| -S SYMBOL:VALUE | Same as --define |
| --[no]setprocgroup | [Don't] try to create all processes in one group. |
| --size size[,size...] | Select data set(s): test|train|ref |
| --strict | Same as --reportable |
| --nostrict | Same as --loose |
| -T | Same as --tune |
| --[no]table | Do [not] include a detailed table of results |
| --threads=N | Set number of OpenMP threads for a SPECspeed run |
| --test | Run various perl validation tests on specperl |
| --train_with | Change the training workload |
| --tune | Set the tuning levels to one of: base|peak|all |
| --tuning | Same as --tune |
| --undef SYMBOL | Remove any definition of this config preprocessor macro |
| -U | Same as --username |
| --update | Check www.spec.org for updates to benchmark and example flag files, and config files |
| --username | Name of user to tag as owner for run directories |
| --use_submit_for_compare | If submit was used for the run, use it for comparisons too. |
| --use_submit_for_speed | Use submit commands for SPECspeed (default is only for SPECrate). |
| -v | Same as --verbose |
| --verbose | Set verbosity level for messages to N |
| -V | Same as --version |
| --version | Output lots of version information |
| -? | Same as --help |
Using SPEC CPU®2026: the 'runcpu' Command: Copyright © 2017-2026 Standard Performance Evaluation Corporation (SPEC®)