Using SPEC CPU®2026: the 'runcpu' Command

Latest: www.spec.org/cpu2026/Docs/

1. Basics

1.1 Defaults

1.2 Syntax

1.3 Benchmarks and suites

1.4 Run order

1.5 Storage Usage

1.5.1 Directory tree

1.5.2 Hey! Where did all my disk space go?

1.6 Multi-user support and limitations
expid (partial solution) output_root (recommended)

1.7 Actions: build buildsetup report run runsetup setup validate
cleaning: clean clobber onlyrun realclean scrub trash
(alternative: Clean by hand)

1.8 Rolling round-robin rate mode New

2. Commonly used options

--action --check_version --config --copies --flagsurl --help --ignore_errors --iterations --loose --output_format --rawformat --rebuild --reportable --threads --tune

3. Less common options

--baseonly --basepeak --nobuild --comment --define --delay --deletework --[no]enable_monitor --expid --fake --fakereport --fakereportable --[no]feedback --[no]force_monitor --[no]graph_auto --graph_max --graph_min --http_proxy --http_timeout --info_wrap_column --keeptmp --label --log_timestamp --make_no_clobber --notes_wrap_column --output_root --parallel_test --parallel_test_workloads --[no]power --preenv --reportonly --review --rrrrate New --rrrrate_inc[=N] New --[no]setprocgroup --size --[no]table --test --undef --update --use_submit_for_compare --use_submit_for_speed --username --verbose --version

4. Quick reference

1. Basics

What is runcpu?   runcpu is the primary tool for the SPEC CPU®2026 Benchmark Suite, a product of the SPEC® non-profit corporation (about SPEC). You use it from a Linux or Unix shell or the Microsoft Windows command line to build and run benchmarks, with commands such as these:

runcpu --config=eniac.cfg    --action=build 735.gem5_r
runcpu --config=colossus.cfg --threads=16   872.marian_s
runcpu --config=z3.cfg       --copies=64    fprate 

The first command compiles the benchmark named 735.gem5_r. The second runs the OpenMP benchmark 872.marian_s using 16 threads. The third runs 64 copies of all the SPECrate Floating Point benchmarks.

Before reading this document: If you have not already done so, please install and test your SPEC CPU 2026 distribution (ISO image). This document assumes that you have already installed the suite and verified that your installation works.

If you have not done so, please see the brief instructions in the Quick Start guide, or the more detailed section "Testing Your Installation" (Unix / Windows).

1.1 Defaults

The SPEC CPU default settings described in this document may be adjusted by config files.

The order of precedence for settings is:

Highest precedence: runcpu command
Middle: config file
Lowest: the tools as shipped by SPEC

Therefore, when this document tells you that something is the default, bear in mind that your config file may have changed that setting. With luck, the author of the config file will tell you so.
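The precedence chain behaves like nested default-expansion in the shell. A toy sketch, not runcpu internals, with invented setting values:

```shell
# Toy illustration of the precedence order; not runcpu code, and the
# values are made up. A setting from the command line wins over the
# config file, which wins over the tools' shipped default.
shipped_default="validate"       # lowest precedence: tools as shipped
config_file_action="build"       # middle: set in the config file
command_line_action=""           # highest: empty means "not given"
action=${command_line_action:-${config_file_action:-$shipped_default}}
echo "effective action: $action"
```

Here no --action was given on the command line, so the config file's choice (build) takes effect.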

1.2 Syntax

The syntax for the runcpu command is:

runcpu [options] [list of benchmarks to run]

Options are described in the following sections. There, you will notice that many options have both long and short names. The long names are invoked using two dashes, and the short names use only a single dash. For long names that take a parameter, you can optionally use an equals sign. Long names can also be abbreviated, provided that you still enter enough letters for uniqueness. For example, the following commands all do the same thing:

runcpu --config=dianne_july25a --debug=99 fprate
runcpu --config dianne_july25a --debug 99 fprate
runcpu --conf dianne_july25a   --deb 99   fprate
runcpu -c dianne_july25a       -v 99      fprate

1.3 Benchmarks and Suites

In the list of benchmarks to run, you can use one or more individual benchmarks, such as 723.llvm_r, or you can run entire suites, using one of the Short Tags below.

Short Tag  Suite                          Contents                      Metrics
---------  -----------------------------  ----------------------------  ----------------------
intspeed   SPECspeed®2026 Integer         13 integer benchmarks         SPECspeed2026_int_base
                                                                        SPECspeed2026_int_peak
fpspeed    SPECspeed®2026 Floating Point  13 floating point benchmarks  SPECspeed2026_fp_base
                                                                        SPECspeed2026_fp_peak
intrate    SPECrate®2026 Integer          14 integer benchmarks         SPECrate2026_int_base
                                                                        SPECrate2026_int_peak
fprate     SPECrate®2026 Floating Point   12 floating point benchmarks  SPECrate2026_fp_base
                                                                        SPECrate2026_fp_peak

How many copies? SPECspeed suites always run one copy of each benchmark. SPECrate suites run multiple concurrent copies of each benchmark; the tester selects how many.

What do higher scores mean? For SPECspeed, higher scores indicate that less time is needed. For SPECrate, higher scores indicate more throughput (work per unit of time).
The "Short Tag" is the canonical abbreviation for use with runcpu, where context is defined by the tools. In a published document, context may not be clear.
To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above.

Supersets: There are several supersets which run more than one of the above.

Synonyms - Suite selection is done with the short tags:    intrate fprate intspeed fpspeed
You can also use full metric names. You can say:   runcpu SPECspeed2026_int_base
Some alternates (such as int_rate or CPU2026) may provoke runcpu to say that it is trying to DWIM (wikipedia) but these are not recommended.

Benchmark names: Individual benchmarks can be named, numbered, or both.
Separate them with a space.
Names can be abbreviated, as long as you enter enough characters for uniqueness.
Each of the following commands does the same thing:

runcpu -c jason_july09d 750.sealcrypto_r 723.llvm_r 772.marian_r
runcpu -c jason_july09d 750 723 772
runcpu -c jason_july09d sealcrypto_r llvm_r marian_r
runcpu -c jason_july09d seal llvm marian_r

To exclude a benchmark: Use a hat (^, also known as caret, typically found as shift-6). Note that if the hat has significance to your shell, you may need to protect it from interpretation by the shell, for example by putting it in single quotes. On Windows, you will need to use both a hat and double quotes for each benchmark you want to exclude.

bash-n.n.n$ runcpu -c cathy_apr21c fprate ^748 ^nest_r  
pickyShell% runcpu -c cathy_apr21c fprate '^748' '^nest_r'
E:\cpu2026> runcpu -c cathy_apr21c fprate "^748" "^nest_r"

Turning off reportable: If your config file sets reportable=yes then you cannot run a subset unless you turn that option off.

[/usr/cathy/cpu2026]$ runcpu --config cathy_apr21b --noreportable fprate ^parest 

1.4 Run order

A reportable run does these steps:

  1. Test: Set up all of the benchmarks using the test workload. Run them. Verify that they get correct answers. The test workloads are run merely as an additional verification that the generated executables operate correctly; their times are not reported and do not contribute to overall metrics. Therefore multiple benchmarks can be run simultaneously, as in the example below, where the tester has set --parallel_test to allow up to 4 simultaneous tests.

  2. Train: Do the same steps for the train workload, for the same reasons, with the same verification, non-reporting, and parallelism.

  3. Ref: Run the refrate (7xx benchmarks) or the refspeed (8xx benchmarks) workload.

    If running refspeed, multiple --threads are optionally allowed.
    If running refrate multiple --copies are optionally allowed, as in the example below which uses 256 copies in base.
    For reportable runs, --iterations must be 2 or 3.

  4. Report: Generate reports from the run data.

Summarizing reportable run order: The order can be summarized as:

          setup for test
          test (*)
          setup for train
          train (*)
          setup for ref
          ref1, ref2 [, ref3] (**)

  (*) Multiple benchmarks may overlap if --parallel_test > 1
 (**) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one tuning is present: If you run both base and peak tuning, base is always run first.

          setup for test
          test base and peak (*)
          setup for train
          train base and peak (*)
          setup for ref
          base ref1, base ref2 [, base ref3] (**)
          peak ref1, peak ref2 [, peak ref3] (**)

 (*)  Multiple benchmarks may overlap if --parallel_test > 1
      Peak and base may also overlap.
 (**) One benchmark at a time.  Third run only if --iterations=3.

Reportable order when more than one suite is present: If you start a reportable using more than one suite, all the work is done for one suite before proceeding to the next.

For example runcpu --iterations=3 --reportable intspeed fprate would cause:

          intspeed setup test
          intspeed test
          intspeed setup train
          intspeed train
          intspeed setup refspeed
          intspeed refspeed #1
          intspeed refspeed #2
          intspeed refspeed #3
          fprate   setup test
          fprate   test
          fprate   setup train
          fprate   train
          fprate   setup refrate
          fprate   refrate #1
          fprate   refrate #2
          fprate   refrate #3

If you request more than one suite (for example, by using all) then a table is printed to show you the run order:

Action   Run Mode   Workload      Report Type      Benchmarks
------   --------   --------   -----------------   ----------------------------
report   rate       refrate    SPECrate2026_fp     fprate
report   speed      refspeed   SPECspeed2026_fp    fpspeed
report   rate       refrate    SPECrate2026_int    intrate
report   speed      refspeed   SPECspeed2026_int   intspeed
   

Reportable example: A log from a reportable run with copies=4 is excerpted below. The Unix grep command picks out lines that match one of the quoted strings; Microsoft Windows users could try findstr instead.

$ grep -e 'Running B' -e 'Starting' -e '(#' CPU2026.007.log 
Starting runcpu for intrate...
Running Benchmarks (up to 4 concurrent processes)
  Starting runcpu for 706.stockfish_r test base branden_oct15b
  Starting runcpu for 707.ntest_r test base branden_oct15b
  Starting runcpu for 708.sqlite_r test base branden_oct15b
  Starting runcpu for 710.omnetpp_r test base branden_oct15b
  Starting runcpu for 714.cpython_r test base branden_oct15b
  Starting runcpu for 721.gcc_r test base branden_oct15b
  Starting runcpu for 723.llvm_r test base branden_oct15b
  Starting runcpu for 727.cppcheck_r test base branden_oct15b
  Starting runcpu for 729.abc_r test base branden_oct15b
  Starting runcpu for 734.vpr_r test base branden_oct15b
  Starting runcpu for 735.gem5_r test base branden_oct15b
  Starting runcpu for 750.sealcrypto_r test base branden_oct15b
  Starting runcpu for 753.ns3_r test base branden_oct15b
  Starting runcpu for 777.zstd_r test base branden_oct15b
Running Benchmarks (up to 4 concurrent processes)
  Starting runcpu for 706.stockfish_r train base branden_oct15b
  Starting runcpu for 707.ntest_r train base branden_oct15b
  Starting runcpu for 708.sqlite_r train base branden_oct15b
  Starting runcpu for 710.omnetpp_r train base branden_oct15b
  Starting runcpu for 714.cpython_r train base branden_oct15b
  Starting runcpu for 721.gcc_r train base branden_oct15b
  Starting runcpu for 723.llvm_r train base branden_oct15b
  Starting runcpu for 727.cppcheck_r train base branden_oct15b
  Starting runcpu for 729.abc_r train base branden_oct15b
  Starting runcpu for 734.vpr_r train base branden_oct15b
  Starting runcpu for 735.gem5_r train base branden_oct15b
  Starting runcpu for 750.sealcrypto_r train base branden_oct15b
  Starting runcpu for 753.ns3_r train base branden_oct15b
  Starting runcpu for 777.zstd_r train base branden_oct15b
Running Benchmarks
  Running (#1) 706.stockfish_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:42:58]
  Running (#1) 707.ntest_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:45:49]
  Running (#1) 708.sqlite_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:49:05]
  Running (#1) 710.omnetpp_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:53:00]
  Running (#1) 714.cpython_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:55:40]
  Running (#1) 721.gcc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-15 23:57:55]
  Running (#1) 723.llvm_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:02:16]
  Running (#1) 727.cppcheck_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:05:45]
  Running (#1) 729.abc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:08:17]
  Running (#1) 734.vpr_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:11:01]
  Running (#1) 735.gem5_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:13:55]
  Running (#1) 750.sealcrypto_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:16:19]
  Running (#1) 753.ns3_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:19:40]
  Running (#1) 777.zstd_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:26:19]
  Running (#2) 706.stockfish_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:30:54]
  Running (#2) 707.ntest_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:33:41]
  Running (#2) 708.sqlite_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:36:58]
  Running (#2) 710.omnetpp_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:40:49]
  Running (#2) 714.cpython_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:43:30]
  Running (#2) 721.gcc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:45:43]
  Running (#2) 723.llvm_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:49:57]
  Running (#2) 727.cppcheck_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:53:25]
  Running (#2) 729.abc_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:55:56]
  Running (#2) 734.vpr_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 00:58:43]
  Running (#2) 735.gem5_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:01:41]
  Running (#2) 750.sealcrypto_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:04:06]
  Running (#2) 753.ns3_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:07:27]
  Running (#2) 777.zstd_r refrate (ref) base branden_oct15b (4 copies) [2025-10-16 01:14:07]

(white space adjusted for readability)

1.5 Storage Usage

1.5.1 Directory tree

The structure of the CPU 2026 directory tree is:

$SPEC or %SPEC% - the root directory
   benchspec    - Some suite-wide files
      CPU         - The benchmarks
   bin          - Tools to run and report on the suite
   config       - Config files
   Docs         - HTML and plaintext documentation
   result       - Log files and reports
   tmp          - Temporary files
   tools        - Sources for the CPU 2026 tools

Within each of the individual benchmarks, the structure is:

nnn.benchmark - root for this benchmark
   build      - Benchmark binaries are built here
   data
      all     - Data used by all runs (if needed by the benchmark)
      ref     - The timed data set
      test    - Data for a simple test that an executable is functional
      train   - Data for feedback-directed optimization
   Docs       - Documentation for this benchmark
   exe        - Compiled versions of the benchmark
   run        - Benchmarks are run here
   Spec       - SPEC metadata about the benchmark
   src        - The sources for the benchmark

Many SPECspeed benchmarks (8nn.benchmark_s) share content that is located under a corresponding SPECrate benchmark (7nn.benchmark_r). Shared source files may be compiled differently for SPECspeed vs. SPECrate. For example, the sources for 849.fotonik3d_s can be found at 749.fotonik3d_r/src/, and only 849.fotonik3d_s can be compiled with OpenMP.

Look for the output of your runcpu command in the directory $SPEC/result (Unix) or %SPEC%\result (Windows). There, you will find log files and result files. More information about log files can be found in the Config Files document.

The format of the result files depends on what was selected in your config file, but will typically include at least .txt for ASCII text, and will always include .rsf, for raw (unformatted) run data. More information about result formats can be found below, under --output_format. Note that you can always re-generate the output, using the --rawformat option, also documented below.

1.5.2 Hey! Where did all my disk space go?

When you find yourself wondering "Where did all my disk space go?", the answer is usually "The run directories." Most activity takes place in automatically created subdirectories of $SPEC/benchspec/CPU/*/run/ (Unix) or %SPEC%\benchspec\CPU\*\run\ (Windows). Other consumers of disk space underneath individual nnn.benchmark directories include the build/ and exe/ directories.
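To see where the space went, Unix users can measure those trees directly. A sketch, assuming $SPEC is set (as it is after sourcing shrc); your paths and sizes will vary:

```shell
# Sketch: list the largest run/ and build/ trees, biggest first, in KB.
# Assumes $SPEC points at your installed tree.
du -sk "$SPEC"/benchspec/CPU/*/run "$SPEC"/benchspec/CPU/*/build 2>/dev/null |
  sort -rn | head -20
```

On systems with GNU coreutils, du -sh with sort -rh gives human-readable sizes instead.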

At the top of the directory tree, space is used by your config/ and result/ directories, and by these temporary directories:

$SPEC/tmp
output_root/tmp

Usually, the largest amount of space is in the run directories. For example, the tester who generated the result excerpted above is lazy about cleaning, and at the moment this paragraph is written, there are many SPECrate run directories on the system:

---------------------------------------
One lazy user's space. Yours will vary.
---------------------------------------
Directories                       GB
----------------------------     -----
Top-level (config,result,tmp)      0.1
Benchmarks
  $SPEC/benchspec/CPU/*/exe        2
  $SPEC/benchspec/CPU/*/build      9
  $SPEC/benchspec/CPU/*/run      198
---------------------------------------

If you use the config file label feature, then directories are named to make it easy for you to hunt them down. For example, suppose Andres has a config file that he is using to test some new memory optimizations using SPECrate (multi-copy) mode. He has set

label=AndresMemoryOpt

in his config file. In that case, the tools would create directories such as these:

$ pwd
/Users/andres/cpu2026/benchspec/CPU/750.sealcrypto_r
$ ls -d */*Andres*
build/build_base_AndresMemoryOpt.0000
exe/sealcrypto_r_base.AndresMemoryOpt
run/run_base_refrate_AndresMemoryOpt.0000
run/run_base_refrate_AndresMemoryOpt.0001
run/run_base_refrate_AndresMemoryOpt.0002
run/run_base_refrate_AndresMemoryOpt.0003
run/run_base_refrate_AndresMemoryOpt.0004
run/run_base_refrate_AndresMemoryOpt.0005
run/run_base_refrate_AndresMemoryOpt.0006
run/run_base_refrate_AndresMemoryOpt.0007
run/run_base_refrate_AndresMemoryOpt.0008
run/run_base_refrate_AndresMemoryOpt.0009
run/run_base_refrate_AndresMemoryOpt.0010
run/run_base_refrate_AndresMemoryOpt.0011
run/run_base_refrate_AndresMemoryOpt.0012
run/run_base_test_AndresMemoryOpt.0000
run/run_base_train_AndresMemoryOpt.0000
$  
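The numbering in the listing can be sketched as follows. The pattern is inferred from the directory names above, not from a documented API:

```shell
# Sketch: run directories appear to follow run_<tuning>_<workload>_<label>.NNNN,
# where NNNN is a zero-padded sequence number (pattern inferred from the
# listing above, not an API).
label=AndresMemoryOpt
for i in 0 1 2; do
  printf 'run/run_base_refrate_%s.%04d\n' "$label" "$i"
done
```

The sequence number increments for each additional directory, which is why a multi-copy SPECrate run leaves many run directories per benchmark.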

To get your disk space back, see the documentation of the various cleaning options, below.

1.6 Multi-user support

SPEC CPU 2026 benchmark suites support multiple users sharing an installation; however, you must choose carefully regarding file protections. This section describes the multi-user features and the protection options.

Features that are always enabled:

Limitations: The default methods impose two key limitations, which will not be safe in some environments:

  1. The directory tree must be writable by each of the users, which means that they have to trust each other not to modify or delete each other's files.
  2. Directories such as result/ and nnn.benchmark/exe/ and nnn.benchmark/run/ are not segregated by user. Therefore you can have only one version of (for example) 709.cactus_r/exe/cactus_base.somelabel and different users will have their result logs intermixed in the result/ directory.

Partial solution(?) expid+conventions:
You can deal with limitation #2 if users adopt certain habits. For example, Sunil could name all his config files Sunil-something.cfg. He could use runcpu --expid=Sunil or the corresponding config file could set expid=Sunil to cause his results to be placed under $SPEC/result/Sunil (or %SPEC%\result\Sunil\) and binaries under nnn.benchmark/exe/Sunil/. Unfortunately, this alleged solution still requires that the tree be writeable by all users, and will not help Sunil if Mat comes along and blithely does one of the alternate cleaning methods.

Solution(?) Give up:
You could simply spend the disk space to give each person their own tree. For SPEC CPU 2026 v1.0, this may increase the disk space requirement by about 10 GB per user.

Recommended Solution: output_root. The recommended method uses 4 steps:

Step                                                 Example (Unix)
(1) Protect most of the SPEC tree read-only          chmod -R ugo-w $SPEC
(2) Allow shared access to the config directory      chmod 1777 $SPEC/config
                                                     chmod u+w $SPEC/config/*cfg
(3) Keep your own config files                       cp config/assignment1.cfg config/kevin1.cfg
(4) Use the --output_root switch, or add an          runcpu --output_root=~/cpu2026
    output_root to your config file                  output_root = /home/${username}/cpu2026

More detail:

  1. Most of the CPU 2026 tree is shared, and can be protected read-only. For example, on a Unix system, you might set protections with:

    chmod -R ugo-w $SPEC
    
  2. The one exception is the config directory, $SPEC/config/ (Unix) or %SPEC%\config\ (Windows), which needs to be a read/write directory shared by all the users, and config files must be writeable. On most Unix systems, chmod 1777 is very useful: it lets anyone create files, which they own, control, and protect. (1777 is commonly used for /tmp for this very reason.)

    chmod 1777 $SPEC/config
    chmod u+w $SPEC/config/*cfg
    
  3. Config files usually would not be shared between users. For example, students might create their own copies of a config file:

    Kevin enters:

    cd /cs403/cpu2026
    . ./shrc
    cd config
    cp assignment1.cfg kevin1.cfg
    chmod u+w kevin1.cfg
    runcpu --config=kevin1 --action=build 782.lbm_r 

    Christoph enters:

    cd /cs403/cpu2026
    . ./shrc
    cd config
    cp assignment1.cfg christoph1.cfg
    chmod u+w christoph1.cfg
    runcpu --config=christoph1 --action=build 782.lbm_r  
  4. Set output_root in the config files to change the destinations of the outputs. For example, if config files include (near the top):

    output_root=/home/${username}/spec
    label=feb27a
    

    then these directories will be used for the above runcpu command:

    Kevin's directories
    build: /home/kevin/spec/benchspec/CPU/782.lbm_r/build/build_base_feb27a.0001
    Logs:  /home/kevin/spec/result
    Christoph's
    build: /home/christoph/spec/benchspec/CPU/782.lbm_r/build/build_base_feb27a.0000
    Logs:  /home/christoph/spec/result

Navigation: Unix users can easily navigate an output_root tree using ogo.

1.7 Actions

Most runcpu commands perform an action on a set of benchmarks.

(Exceptions: runcpu --rawformat or update.)

The default action is validate.
The actions are described in two tables below: first, actions that relate to building and running; and then actions regarding cleanup.

--action build Compile the benchmarks, using the config file specmake options.
--action buildsetup

Set up build directories for the benchmarks.
Copy the source files to the directory, and create the needed Makefiles.
Do not attempt to actually do the build.

This option may be useful when debugging a build: you can set up a directory and play with it as a private sandbox.

--action onlyrun

Run the benchmarks but do not verify that they got the correct answers.
You cannot use this option to report performance.

This option may be useful while applying CPU 2026 for some other purpose, such as tracing instructions for a hardware simulator, or generating a system load while debugging an operating system feature.

--action report Synonym for --fakereport; see also --fakereportable.
--action run Synonym for --action validate.
--action runsetup

Set up the run directory (or directories).
If executables do not exist, build them.
Copy executables and data to the directory(ies).
Create the control file speccmds.cmd, but do not actually run any benchmarks.

This option may be useful when debugging a run.
See the runsetup sandbox example in the Utilities documentation.

--action setup Synonym for --action runsetup
--action validate Build (if needed), set up directories, run, check for correct answers, generate reports.
This is the default action.

Cleaning actions are listed in order from least thorough to most:

--action clean

Empty run and build directories for the specified benchmark set for the current user. For example, if the current OS username is set to jeff and this command is entered:

D:\cpu2026\> runcpu --action clean --config may12a fprate

then the tools will remove build and run directories with username jeff for fprate benchmarks generated by config file may12a.cfg.

--action clobber Clean + remove the corresponding executables.
--action trash Remove run and build directories for all users and all labels for the specified benchmarks.
--action realclean A synonym for --action trash
--action scrub Trash + remove the corresponding executables.
Caution: Fake mode is not implemented for the cleaning actions.
For example, if you say runcpu --fake --action=clean, the cleaning really happens.

Clean by hand:
If you prefer, you can clean disk space by entering commands such as the following (on Unix systems):

rm -Rf $SPEC/benchspec/C*/*/run
rm -Rf $SPEC/benchspec/C*/*/build
rm -Rf $SPEC/benchspec/C*/*/exe 

The above commands not only empty the contents of the run, build, and exe directories; they also delete the directories themselves. That's fine; the tools will re-create them if they are needed again later on.

Result directories can be cleaned or renamed. Don't worry about creating a new directory; runcpu will do so automatically. Be careful to ensure no surprises for any currently-running users. If you move result directories, it is a good idea to clean temporary directories at the same time.
Example:

cd $SPEC
mv result old-result
rm -Rf tmp/
cd output_root     # (If you use an output_root)
rm -Rf tmp/

Windows users: Windows users can achieve similar effects using the rename command to move directories, and the rd command to remove directories.

I have so much disk space, I'll never use all of it:

Run directories are automatically re-used for subsequent runs. If you prefer, you can ask the tools to never touch a used run directory. Do this by setting the environment variable:

     SPEC_CPU2026_NO_RUNDIR_DEL

In this case, you should be prepared to do frequent cleaning, perhaps after reviewing the results of each run.

1.8 Rolling round-robin rate

New with CPU 2026: The runcpu utility can run heterogeneous workloads, in addition to the traditional Homogeneous Capacity Method. As explained below, the heterogeneous method is known as "rolling round-robin rate" (rrr-rate); it produces a table of results with benchmark-by-benchmark data. The table does not include SPECratios, geometric means, or official comparable SPEC metrics. The rrr-rate method may be of interest in academic and research contexts; in such cases, please bear in mind the rules about research/academic publication (SPEC CPU 2026) (SPEC Fair Use rule).

Modern server systems, especially multi-tenant environments, do not operate under homogeneous loads: different virtual machines or containers may run different workloads simultaneously. The rrr-rate method characterizes performance under such heterogeneous conditions and reveals how load on adjacent cores affects individual benchmarks — the “noisy neighbor” effect.

Rolling round-robin rate (rrr-rate) mode is similar to normal rate mode; the key differences are summarized in the comparison below.

Comparison of rate and rrr-rate mode

Parallel benchmark characteristics
    rate:     All benchmarks running in parallel are identical.
    rrr-rate: All benchmarks running in parallel are as different as possible
              (when --copies ≤ number of selected benchmarks; with more copies
              than benchmarks, the starting points wrap around and some
              benchmarks run in more than one queue at once).
copies
    rate:     Number of parallel processes running the same benchmark.
    rrr-rate: Number of independent benchmark queues, each running a
              different benchmark.
iterations
    rate:     Number of times each benchmark is repeated. All copies of a
              benchmark finish before the next benchmark starts.
    rrr-rate: Number of times each process repeats its full benchmark queue.
              A process runs all benchmarks in its queue before starting the
              next iteration.
Benchmark queue
    rate:     One global queue shared by all processes. All copies of one
              benchmark finish before the next benchmark begins.
    rrr-rate: Each process has its own queue containing all benchmarks.
              Queues differ only in their starting benchmark and step size
              (--rrrrate_inc).
Synchronization
    rate:     All copies synchronize after every benchmark.
    rrr-rate: All copies start at the same time. No further synchronization
              during the run.
Benchmark verification
    rate:     After each individual benchmark run.
    rrr-rate: After all benchmarks in the queue have completed (see Benchmark
              verification below).
Result metric
    rate:     Elapsed time from start of the first copy to end of the last
              copy to finish.
    rrr-rate: Mean run time and coefficient of variation (CV) per benchmark.
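The per-benchmark metric can be computed from the per-copy run times. A sketch with invented times; the use of the population standard deviation here is an assumption, and runcpu's exact formula may differ:

```shell
# Sketch: mean and coefficient of variation (CV) for one benchmark,
# from made-up per-copy run times in seconds. Uses the population
# standard deviation; runcpu's exact formula may differ.
printf '%s\n' 171.2 169.8 173.5 170.1 |
  awk '{ s += $1; ss += $1 * $1; n++ }
       END { mean = s / n
             sd = sqrt(ss / n - mean * mean)
             printf "mean=%.2f CV=%.2f%%\n", mean, 100 * sd / mean }'
```

A small CV indicates that the benchmark's run time was stable across copies despite the heterogeneous load on neighboring cores.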

Run order based on an example

runcpu --rrrrate -c oct29a --iterations=2 --copies=4 727.cppcheck_r 714.cpython_r 750.sealcrypto_r 734.vpr_r

This command starts 4 processes (copies), each with its own queue containing all 4 benchmarks, and runs each queue for 2 iterations. The execution order of benchmarks across both iterations will be (the | marks the boundary between iteration 1 and iteration 2):

Process 0: 714.cpython_r    727.cppcheck_r   734.vpr_r        750.sealcrypto_r | 714.cpython_r    727.cppcheck_r   734.vpr_r        750.sealcrypto_r
Process 1: 727.cppcheck_r   734.vpr_r        750.sealcrypto_r 714.cpython_r    | 727.cppcheck_r   734.vpr_r        750.sealcrypto_r 714.cpython_r
Process 2: 734.vpr_r        750.sealcrypto_r 714.cpython_r    727.cppcheck_r   | 734.vpr_r        750.sealcrypto_r 714.cpython_r    727.cppcheck_r
Process 3: 750.sealcrypto_r 714.cpython_r    727.cppcheck_r   734.vpr_r        | 750.sealcrypto_r 714.cpython_r    727.cppcheck_r   734.vpr_r
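The queue assignment above can be sketched as follows. This is a toy reconstruction, not runcpu internals; it assumes the selected benchmarks are sorted by name and that the step size is 1 (the apparent default for --rrrrate_inc):

```shell
# Toy sketch of rrr-rate queue assignment (not runcpu internals):
# sort the selected benchmarks, then give each copy the same list
# rotated by its copy index (assumed step size 1).
sorted=$(printf '%s\n' 727.cppcheck_r 714.cpython_r 750.sealcrypto_r 734.vpr_r | sort)
set -- $sorted
n=$#
copy=0
while [ "$copy" -lt "$n" ]; do
  echo "Process $copy: $*"
  # rotate: move the first benchmark to the end of the list
  first=$1; shift; set -- "$@" "$first"
  copy=$((copy + 1))
done
```

Each rotation step yields one process's queue, which reproduces the per-process ordering shown above.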

When running the example above, the output to the screen includes:

Running Benchmarks
  Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:50:39]
  Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:50:39]
  Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:50:39]
  Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:50:39]
  Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:53:36]
  Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:54:16]
  Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:54:30]
  Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:54:55]
  Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 0 [2025-11-07 07:57:15]
  Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 1 [2025-11-07 07:57:27]
  Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 3 [2025-11-07 07:58:30]
  Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 2 [2025-11-07 07:58:44]
  Running (#1) 750.sealcrypto_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:01:00]
  Running (#1) 734.vpr_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:01:29]
  Running (#1) 714.cpython_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:01:43]
  Running (#1) 727.cppcheck_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:02:28]
  Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:05:30]
  Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:05:31]
  Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:05:32]
  Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:05:38]
  Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:08:39]
  Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:09:18]
  Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:09:37]
  Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:09:55]
  Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:12:11]
  Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:12:22]
  Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:13:29]
  Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:13:49]
  Running (#2) 750.sealcrypto_r refrate (ref) base oct29a for copy 0 [2025-11-07 08:15:56]
  Running (#2) 734.vpr_r refrate (ref) base oct29a for copy 3 [2025-11-07 08:16:25]
  Running (#2) 714.cpython_r refrate (ref) base oct29a for copy 1 [2025-11-07 08:16:37]
  Running (#2) 727.cppcheck_r refrate (ref) base oct29a for copy 2 [2025-11-07 08:17:23]
Success: 8x714.cpython_r 8x727.cppcheck_r 8x734.vpr_r 8x750.sealcrypto_r

Quick validation

The special case --rrrrate_inc=0 --iterations=1 --copies=N (where N equals the number of selected benchmarks) runs all benchmarks in parallel, one per copy, without rolling. This is sometimes called quick validation mode: it exercises the full configuration (toolchain, compiler flags, libraries) in parallel to check for compile or validation errors, without the overhead of a full rrr-rate run. See --rrrrate_inc for details.

Console output

In normal rate mode, one line is printed when all copies of a benchmark start together, e.g.:

  Running (#1) 714.cpython_r refrate (ref) base oct29a (4 copies)

In rrr-rate mode, each copy prints its own line (as shown in the log above), because copies run different benchmarks and start them independently. The success counter also counts each copy individually, as shown above: 8x714.cpython_r means 4 copies × 2 iterations = 8 successful runs.

Benchmark verification

To avoid tail effects, benchmark verification in rrr-rate mode is separated from the benchmark run. After all copies finish running every benchmark in their queue, a separate parallel verification phase runs — one process per copy. This ensures that a slow verification step does not stall other active cores during the main run.

Because verification is deferred, all run results must be preserved until verification completes. For this reason, rrr-rate creates one run directory per benchmark × copy × iteration, compared to one per copy in normal rate mode. Plan additional disk space accordingly for runs with many benchmarks, copies, or iterations.

Note: --minimize_rundirs is incompatible with rrr-rate mode. If set, it is silently overridden (a notice is printed at runtime), because rrr-rate requires all run directories to remain intact until deferred verification completes.

Monitor hooks

The monitor_pre_bench and monitor_post_bench hooks support $SPECCOPYNUM and $BIND variable expansion in rrr-rate mode, which allows monitoring commands to be restricted to specific copy numbers. See monitors.html for general information about hooks.

Rolling round-robin rate results

The key result metrics reported for each benchmark are the average run time (time_avg, the mean of the per-copy median execution times) and its coefficient of variation (time_cv); both are defined in the field table below.

Example calculation with 4 copies and 1 iteration, where the four copies of a benchmark take 220, 225, 228, and 231 seconds:

mean     = (220 + 225 + 228 + 231) / 4 = 226 s
variance = ((220-226)^2 + (225-226)^2 + (228-226)^2 + (231-226)^2) / 4
         = (36 + 1 + 4 + 25) / 4 = 16.5
std dev  = sqrt(16.5) = 4.06 s
CV       = 4.06 / 226 = 0.018
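This arithmetic can be reproduced in a few lines of Python; note that the worked example uses the population variance (dividing by the number of copies, not N−1):

```python
import math

# Per-copy run times for one iteration, from the example above.
times = [220, 225, 228, 231]

mean = sum(times) / len(times)                               # 226.0
variance = sum((t - mean) ** 2 for t in times) / len(times)  # 16.5
sigma = math.sqrt(variance)                                  # ~4.06
cv = sigma / mean                                            # ~0.018

print(f"mean={mean} variance={variance} sigma={sigma:.2f} cv={cv:.3f}")
```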

Energy metrics are not reported in rrr-rate mode; energy fields in the output appear as --.

The .rsf raw output file contains the following rrr-rate-specific fields for each benchmark and tuning level. Each field is written under the path spec.cpu2026.results.benchmark.tune.field, where dots in benchmark names are replaced with underscores (e.g. spec.cpu2026.results.714_cpython_r.base.time_avg).

The individual per-copy, per-iteration run records are one level deeper: spec.cpu2026.results.benchmark.tune.NNN.field, where NNN is a zero-padded index ordered iteration-first, then copy: NNN = iteration × copies + copynum. For example, with --copies=2 --iterations=3, record 000 is (copy 0, iter 0), 001 is (copy 1, iter 0), 002 is (copy 0, iter 1), and so on. Each record includes copynum and iteration fields that identify which copy and iteration it belongs to.

The copy 0 record for each iteration additionally carries the following run-time statistics, computed over the run times of all copies for that iteration:

Field Description
min, max Minimum and maximum copy run time
mean Mean copy run time
median Median copy run time
variance, sigma Variance and standard deviation of copy run times
cv Coefficient of variation of copy run times (sigma / mean)
quartile_low, quartile_high, iqr First quartile (Q1), third quartile (Q3), and interquartile range (Q3−Q1)
whisker_low, whisker_high Tukey fences: Q1−1.5×IQR and Q3+1.5×IQR
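A sketch of how these box-plot statistics could be computed, using Python's statistics module; the exact quartile convention used by the SPEC tools is not specified here, so the linear-interpolation ("inclusive") method below is an assumption:

```python
import statistics

# Hypothetical copy run times for one iteration.
times = [220, 225, 228, 231]

# Quartiles by linear interpolation ("inclusive"); the SPEC tools'
# exact convention may differ (assumption).
q1, _median, q3 = statistics.quantiles(times, n=4, method="inclusive")
iqr = q3 - q1
whisker_low = q1 - 1.5 * iqr    # lower Tukey fence
whisker_high = q3 + 1.5 * iqr   # upper Tukey fence

print(q1, q3, iqr, whisker_low, whisker_high)
```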
The per-benchmark, per-tuning-level summary fields are:

Field Description
valid Overall validity: S (success), CE, RE, or VE
copies, iterations Values of the corresponding runcpu parameters
cN_time Median execution time across iterations for copy N
time_avg Mean of per-copy median execution times (the value shown in the result table)
time_cv Coefficient of variation of per-copy median execution times (the CV shown in the result table)
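The relationship between cN_time, time_avg, and time_cv can be sketched as follows (hypothetical run times; using the population standard deviation for the CV, to match the worked example earlier, is an assumption):

```python
import statistics

# Hypothetical per-copy run times: times[copy][iteration].
times = [
    [224, 225],  # copy 0
    [228, 227],  # copy 1
    [223, 226],  # copy 2
    [226, 224],  # copy 3
]

# cN_time: median execution time across iterations for copy N.
c_time = [statistics.median(t) for t in times]

# time_avg: mean of the per-copy medians (the result-table value).
time_avg = statistics.mean(c_time)

# time_cv: coefficient of variation of the per-copy medians.
# Population standard deviation (assumption).
time_cv = statistics.pstdev(c_time) / time_avg

print(c_time, time_avg, round(time_cv, 4))
```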

An example result output (from the runcpu command above) contains two tables. The first shows each copy × iteration run time individually:

                           Estimated
                            Base
Benchmarks       Copy Iter  Run Time
---------------- ---- ----  --------
714.cpython_r       0    0      224  S
714.cpython_r       1    0      228  S
714.cpython_r       2    0      223  S
714.cpython_r       3    0      226  S
714.cpython_r       0    1      225  S
714.cpython_r       1    1      227  S
...

The trailing S is the per-run validity status (S = success). Unlike regular rate mode, no run is marked with * as “selected”; all runs contribute to the per-copy median.
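Each row in the table above corresponds to one .rsf run record. The index layout (NNN = iteration × copies + copynum) can be sketched with a hypothetical helper:

```python
def record_index(copynum: int, iteration: int, copies: int) -> str:
    """Zero-padded .rsf record index: iteration-first, then copy.

    record_index is a hypothetical helper, not part of the SPEC tools.
    """
    return f"{iteration * copies + copynum:03d}"

# With --copies=2 --iterations=3, as in the example:
print(record_index(copynum=0, iteration=0, copies=2))  # 000
print(record_index(copynum=1, iteration=0, copies=2))  # 001
print(record_index(copynum=0, iteration=1, copies=2))  # 002
```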

The second table shows one summary row per benchmark, with the average time and its coefficient of variation (CV):

                               Estimated
                               Base avg Base CV
Benchmarks       Copies Iters  Time     Time
---------------- ------ ------ -------- --------
706.stockfish_r      NR
707.ntest_r          NR
708.sqlite_r         NR
710.omnetpp_r        NR
714.cpython_r         4      2  224       0.0232
721.gcc_r            NR
723.llvm_r           NR
727.cppcheck_r        4      2  183       0.0274
729.abc_r            NR
734.vpr_r             4      2  234       0.0295
735.gem5_r           NR
750.sealcrypto_r      4      2  261       0.0249
753.ns3_r            NR
777.zstd_r           NR

NR (Not Run) indicates a benchmark that belongs to the full suite but was not selected for this run. In the CSV output format, the status column for summary rows shows -- rather than a validity code, because rrr-rate has no single “selected” run at the per-benchmark level.

2. Commonly used options

Most users of runcpu will want to become familiar with the following options.

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.

--action action

--check_version

--config name

--copies number

--flagsurl URL[,URL...]

--help

--ignore_errors

--iterations number

--loose

--output_format format

Name|synonyms... Meaning
all implies all of the following except screen, check, and mail
config
   cfg|cfgfile
   configfile
   conffile

config file used for this run, written as a numbered file in the result directory, for example, $SPEC/result/CPU2026.030.fprate.refrate.cfg

  1. The config file is saved on every run, as a compressed portion of the rawfile. Therefore, you can regenerate it later, if desired, using rawformat.
  2. Results published by SPEC include your config file. Anyone can download it and try to reproduce your result.
  3. The config file printed by --output_format=config is not identical to the original:

    • The file name matches the other files for this result, not the name you had in your config/ directory.
    • It does not include protected comments.
    • It includes a copy of the runcpu line that invoked it.
    • It tells you whether output_root was defined.
    • It includes any result edits you make after the run (see utility.html).
    • It does not include the HASH section.
check
   subcheck
   reportcheck
   reportable
   reportablecheck
   chk|sub|subtest|test
Reportable syntax check (automatically enabled when using --rawformat).
  • Causes the format of many fields to be checked, e.g. "Nov-2018", not "11/18" for hw_avail.
  • Consistent formats help readers, especially when searching.
  • check is included by default when using --rawformat.
  • It can be disabled by adding nocheck to your list of formats.
csv
   spreadsheet

Comma-separated values. If you populate spreadsheets from your runs, you probably should not cut/paste data from text files; you'll get more accurate data by using --output_format csv. The csv report includes all runs, more decimal places, system information, and even the compiler flags.

default
implies HTML and text
flag|flags
Flag report. Will also be produced when formats that use it are requested (PDF, HTML).
html
   xhtml|www|web
web page
mail
   mailto|email
All generated reports will be sent to an address specified in the config file.
pdf
   adobe
Portable Document Format. This format is the design center for SPEC CPU 2026 reporting. Other formats contain less information: text lacks graphs, postscript lacks hyperlinks, and HTML is less structured. (PDF does not appear as part of "default" only because some systems may lack the ability to read it.)
postscript
   ps|printer|print
PostScript
raw
   rsf
The unformatted raw results, written to a numbered file in the result directory that ends with .rsf (e.g. /spec/cpu2026/rc4/result/CPU2026.042.fpspeed.rsf). Your raw result files are your most important files, because the other formats are generated from them.
screen|scr|disp
   display|terminal|term
ASCII text output to stdout.
text
   txt|ASCII|asc
Plain ASCII text file

--rawformat rawfiles

--rebuild

--reportable

--threads N

--tune tuning

3. Less common options

This section is organized alphabetically without regard to upper/lower case and without regard to presence or absence of no at the start of the switch.

--baseonly

--basepeak [bench,bench,...]

--nobuild

--comment "comment text"

--define SYMBOL[=VALUE]
--define SYMBOL:VALUE

--delay secs

--deletework

--[no]enable_monitor

--expid subdir

--fake

--fakereport

--fakereportable

--[no]feedback

--[no]force_monitor

--[no]graph_auto

--graph_max N

--graph_min N

--http_proxy proxy[:port]

--http_timeout N

--info_wrap_column N

--[no]keeptmp

--label name

--[no]log_timestamp

--make_no_clobber

--notes_wrap_column N

--output_root directory

--parallel_test processes

--parallel_test_workloads workload,...

--power, --nopower

--preenv, --nopreenv

--reportonly

--review, --noreview

--rrrrate

--rrrrate_inc[=N]

--setprocgroup, --nosetprocgroup

--size size[,size...]

--table, --notable

--test

--train_with WORKLOAD

--undef SYMBOL

--update

--use_submit_for_compare

--use_submit_for_speed

--username name

--verbose n

--version


4 Quick reference

(This table is organized alphabetically, without regard to upper/lower case, and without regard to the presence of a leading "no").

-a Same as --action
--action action Do: build|buildsetup|clean|clobber|onlyrun|realclean|report|run|runsetup|scrub|setup|trash|validate
--basepeak Copy base results to peak (use with --rawformat)
--nobuild Do not attempt to build binaries
-c Same as --config
-C Same as --copies
--check_version Check whether an updated version of CPU 2026 is available
--comment "text" Add a comment to the log and the stored config file.
--config file Set config file for runcpu to use
--copies Set the number of copies for a SPECrate run
-D Same as --rebuild
-d Same as --deletework
--debug Same as --verbose
--define SYMBOL[=VALUE] Define a config preprocessor macro
--delay secs Add delay before and after benchmark invocation
--deletework Force work directories to be rebuilt
--dryrun Same as --fake
--dry-run Same as --fake
--expid=dir Experiment id, a subdirectory to use for results/runs/exe
-F Same as --flagsurl
--fake Show what commands would be executed.
--fakereport Generate a report without compiling codes or doing a run.
--fakereportable Generate a fake report as if "--reportable" were set.
--[no]feedback Control whether builds use feedback directed optimization
--flagsurl url Location (url or filespec) where to find your flags file
--graph_auto Let the tools pick minimum and maximum for the graph
--graph_min N Set the minimum for the graph
--graph_max N Set the maximum for the graph
-h Same as --help
--help Print usage message
--http_proxy Specify the proxy for internet access
--http_timeout Timeout when attempting http access
-I Same as --ignore_errors
-i Same as --size
--ignore_errors Continue with benchmark runs even if some fail
--ignoreerror Same as --ignore_errors
--info_wrap_column N Set wrap width for non-notes informational items
--infowrap Same as --info_wrap_column
--input Same as --size
--iterations N Run each benchmark N times
--keeptmp Keep temporary files
-L Same as --label
-l Same as --loose
--label label Set the label for executables, build directories, and run directories
--loose Do not produce a reportable result
--noloose Same as --reportable
-M Same as --make_no_clobber
--make_no_clobber Do not delete existing object files before building.
--mockup Same as --fakereportable
-n Same as --iterations
-N Same as --nobuild
--notes_wrap_column N Set wrap width for notes lines
--noteswrap Same as --notes_wrap_column
-o Same as --output_format
--output_format format[,format...] Generate: all|cfg|check|csv|flags|html|mail|pdf|ps|raw|screen|text
--output_root=dir Write all files here instead of under $SPEC
--parallel_test Number of test/train workloads to run in parallel
--[no]power Control power measurement during run
--preenv Allow environment settings in config file to be applied
-R Same as --rawformat
--rawformat Format raw file
--rebuild Force a rebuild of binaries
--reportable Produce a reportable result
--noreportable Same as --loose
--reportonly Same as --fakereport
--[no]review Format results for review
-s Same as --reportable
-S SYMBOL[=VALUE] Same as --define
-S SYMBOL:VALUE Same as --define
--[no]setprocgroup [Don't] try to create all processes in one group.
--size size[,size...] Select data set(s): test|train|ref
--strict Same as --reportable
--nostrict Same as --loose
-T Same as --tune
--[no]table Do [not] include a detailed table of results
--threads=N Set number of OpenMP threads for a SPECspeed run
--test Run various perl validation tests on specperl
--train_with Change the training workload
--tune Set the tuning levels to one of: base|peak|all
--tuning Same as --tune
--undef SYMBOL Remove any definition of this config preprocessor macro
-U Same as --username
--update Check www.spec.org for updates to benchmark and example flag files, and config files
--username Name of user to tag as owner for run directories
--use_submit_for_compare If submit was used for the run, use it for comparisons too.
--use_submit_for_speed Use submit commands for SPECspeed (default is only for SPECrate).
-v Same as --verbose
--verbose Set verbosity level for messages to N
-V Same as --version
--version Output lots of version information
-? Same as --help

Using SPEC CPU®2026: the 'runcpu' Command: Copyright © 2017-2026 Standard Performance Evaluation Corporation (SPEC®)