<?xml version="1.0"?>
<!DOCTYPE flagsdescription
   SYSTEM "http://www.spec.org/dtd/cpuflags2.dtd"
>

<flagsdescription>

<filename>aocc-flags</filename>
<title>AMD Optimizing C/C++ Compiler Suite Version Staging Flag Descriptions</title>

<style>
    <![CDATA[
    body { background: white; }
    ]]>
</style>

<!-- Lines will be up to this wide ============================================================================================ -->

<!-- Submit command documentation ============================================================================================= -->

<submit_command>
    <![CDATA[
    <p><b>Using <code>numactl</code> to bind processes and memory to cores</b></p>

    <p>For multi-copy runs or single copy runs on systems with multiple sockets, it is advantageous to bind a process to a
        particular core.  Otherwise, the OS may arbitrarily move your process from one core to another.  This can affect
        performance.  To help, SPEC allows the use of a "submit" command where users can specify a utility to use to bind
        processes.  We have found the utility '<code>numactl</code>' to be the best choice.</p>

    <p><code>numactl</code> runs processes with a specific NUMA scheduling or memory placement policy.  The policy is set for a
        command and inherited by all of its children.  The <code>numactl</code> flag "<code>--physcpubind</code>" specifies
        the core(s) to which the process is bound.  "<code>-l</code>" instructs <code>numactl</code> to keep a process's memory on the
        local node, while "<code>-m</code>" specifies the node(s) on which to place a process's memory.  For full details on using
        <code>numactl</code>, please refer to your Linux documentation ('<code>man numactl</code>').</p>

    <p>Note that some older versions of <code>numactl</code> incorrectly interpret application arguments as its own.  For
        example, with the command "<code>numactl --physcpubind=0 -l a.out -m a</code>", <code>numactl</code> will interpret
        <code>a.out</code>'s "<code>-m</code>" option as its own "<code>-m</code>" option.  To work around this problem, we put
        the command to be run in a shell script and then run the shell script using <code>numactl</code>.  For example:
        "<code>echo 'a.out -m a' > run.sh ; numactl --physcpubind=0 bash run.sh</code>"</p>
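    <p>As an illustration, a submit line in a SPEC CPU config file might look like the following sketch.  Here
        <code>$SPECCOPYNUM</code> is the copy number SPEC substitutes for each benchmark copy and <code>$command</code> is the
        benchmark command itself; the appropriate core numbering depends on your system's topology:</p>

```shell
# Bind each benchmark copy to its own core and keep its memory on the local node.
# $SPECCOPYNUM expands to 0, 1, 2, ... for successive copies; $command is the benchmark invocation.
submit = numactl --localalloc --physcpubind=$SPECCOPYNUM $command
```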

    ]]>
</submit_command>


<!-- Software environment description ========================================================================================= -->

<sw_environment>
    <![CDATA[
    <p><b>numactl --interleave=all runcpu</b></p>

    <p><code>numactl --interleave=all runcpu</code> executes the SPEC CPU command <code>runcpu</code> so that memory is consumed across NUMA nodes rather than consumed from a single node. This helps prevent local node out-of-memory conditions which can occur when <code>runcpu</code> is executed without interleaving.
    For full details on using <code>numactl</code>, please refer to your Linux documentation ('<code>man numactl</code>').</p>

    <p><b>Transparent Huge Pages (THP)</b></p>
    <p>
        THP is an abstraction layer that automates most aspects of creating, managing,
        and using huge pages. It is designed to hide much of the complexity in using
        huge pages from system administrators and developers.  Huge pages
        increase the memory page size from 4 kilobytes to 2 megabytes. This provides
        significant performance advantages on systems with highly contended resources
        and large memory workloads. If memory utilization is too high, or memory is too badly
        fragmented for huge pages to be allocated, the kernel falls back to smaller
        4 KB pages instead. Most recent Linux OS releases have THP enabled by default.
    </p>
    <p>
        THP usage is controlled by the sysfs setting <code>/sys/kernel/mm/transparent_hugepage/enabled</code>.
        Possible values:
    </p>
    <ul>
      <li>never: entirely disable THP usage.</li>
      <li>madvise: enable THP usage only inside regions marked MADV_HUGEPAGE using madvise(2).</li>
      <li>always: enable THP usage system-wide. This is the default.</li>
    </ul>
    <p>
        The SPEC CPU benchmark codes themselves never explicitly request huge pages, as the mechanism to do that is OS-specific
        and can change over time.  Libraries such as amdalloc which are used by the benchmarks may explicitly request huge pages,
        and use of such libraries can make the "madvise" setting relevant and useful.
    </p>
    <p>
        When no huge pages are immediately available and one is requested, how the system handles the request for THP creation is
        controlled by the sysfs setting <code>/sys/kernel/mm/transparent_hugepage/defrag</code>.
        Possible values:
    </p>
    <ul>
      <li>never: if no THP are available to satisfy a request, do not attempt to make any.</li>
      <li>defer: an allocation requesting THP when none are available gets normal pages while requesting THP creation in the
          background.</li>
      <li>defer+madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2); for all
          other regions it's like "defer".</li>
      <li>madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2).  This is the
          default.</li>
      <li>always: an allocation requesting THP when none are available will stall until some are made.</li>
    </ul>
    <p>
        An application that frequently requests THP can often benefit from the "always" setting, under which an allocation waits until the huge pages can be assembled.<br/>
        For more information see the <a href="https://www.kernel.org/doc/Documentation/vm/transhuge.txt">Linux transparent hugepage documentation</a>.
    </p>
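    <p>The current settings can be inspected and changed from a root shell, for example:</p>

```shell
# Show the current THP mode; the active value appears in brackets.
cat /sys/kernel/mm/transparent_hugepage/enabled

# Enable THP system-wide (run as root).
echo always > /sys/kernel/mm/transparent_hugepage/enabled

# Defragment on request only for regions marked MADV_HUGEPAGE.
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
```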

    <p><b> <code>ulimit -s &lt;n&gt;</code></b></p>
    <p>
        Sets the stack size to <b>n</b> kbytes, or <b>unlimited</b> to allow the stack size to grow without limit.
    </p>

    <p><b> <code>ulimit -l &lt;n&gt;</code></b></p>
    <p>
        Sets the maximum size of memory that may be locked into physical memory.
    </p>
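    <p>For example, the following commands, issued in the shell that will launch the benchmarks, remove the stack-size limit
        and allow unlimited locked memory:</p>

```shell
ulimit -s unlimited   # stack size
ulimit -l unlimited   # maximum locked memory
```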

    <p><b><code>powersave -f</code> (on SuSE)</b></p>
    <p>
        Makes the powersave daemon set the CPUs to the highest supported frequency.
    </p>

    <p><b><code>/etc/init.d/cpuspeed stop</code> (on Red Hat)</b></p>
    <p>
        Disables the cpu frequency scaling program in order to set the CPUs to the highest supported frequency.
    </p>

    <p><b><code>LD_LIBRARY_PATH</code></b></p>
    <p>
        An environment variable that indicates the location in the filesystem of bundled libraries to use when running the
        benchmark binaries.
    </p>

    <p><b> <code>sysctl -w vm.dirty_ratio=8</code></b></p>
    <p>
        Limits dirty cache to 8% of memory.
    </p>

    <p><b> <code>sysctl -w vm.swappiness=1</code></b></p>
    <p>
        Limits swap usage to minimum necessary.
    </p>

    <p><b> <code>sysctl -w vm.zone_reclaim_mode=1</code></b></p>
    <p>
        Frees local node memory first to avoid remote memory usage.
    </p>

    <p><b><code>kernel/numa_balancing</code></b></p>
    <p>
      This OS setting controls automatic NUMA balancing on memory mapping and process placement.
      NUMA balancing incurs overhead for no benefit on workloads that are already bound to NUMA nodes.
    </p>
    <p>
      Possible settings:
    </p>
    <ul>
      <li>0: disables this feature</li>
      <li>1: enables the feature (this is the default)</li>
    </ul>
    <p>
        For more information see the <code>numa_balancing</code> entry in the
        <a href="https://www.kernel.org/doc/Documentation/sysctl/kernel.txt">Linux sysctl documentation</a>.
    </p>
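    <p>For example, to disable automatic NUMA balancing for the duration of a run (run as root):</p>

```shell
sysctl -w kernel.numa_balancing=0
# or, equivalently:
echo 0 > /proc/sys/kernel/numa_balancing
```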

    <p><b><code>kernel/randomize_va_space</code> (ASLR)</b></p>
    <p>
        This setting can be used to select the type of process address space
        randomization. Defaults differ based on whether the architecture supports
        ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK
        option or not, or the kernel boot options used.
    </p>
    <p>
        Possible settings:
    </p>
    <ul>

        <li>0 - Turn the process address space randomization off.  This is the default for architectures that do not support
            this feature anyway, and kernels that are booted with the "<code>norandmaps</code>" parameter.</li>

        <li>1 - Randomize addresses of mmap base, stack, and VDSO pages.
            This is the default if the <code>CONFIG_COMPAT_BRK</code> option is enabled at kernel build time.</li>

        <li>2 - Additionally enable heap randomization.  This is the default if <code>CONFIG_COMPAT_BRK</code> is
            disabled.</li>
    </ul>
    <p>
        Disabling ASLR can make process execution more deterministic and runtimes more consistent.
        For more information see the <code>randomize_va_space</code> entry in the
        <a href="https://www.kernel.org/doc/Documentation/sysctl/kernel.txt">Linux sysctl documentation</a>.
    </p>
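    <p>For example, to disable ASLR (run as root):</p>

```shell
sysctl -w kernel.randomize_va_space=0
# or, equivalently:
echo 0 > /proc/sys/kernel/randomize_va_space
```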

    <p><b><code>vm/drop_caches</code></b></p>
    <p>
        The following two commands are equivalent:
    </p>
    <p>
        <code>echo 3 &gt; /proc/sys/vm/drop_caches</code>
    </p>
    <p>and</p>
    <p>
        <code>sysctl -w vm.drop_caches=3</code>
    </p>
    <p>
        Both must be run as root.  The commands are used to free up the filesystem page cache, dentries, and inodes.
    </p>
    <p>
        Possible settings:
    </p>
    <ul>

        <li>1 - Clear pagecache</li>

        <li>2 - Clear dentries and inodes</li>

        <li>3 - Clear pagecache, dentries, and inodes</li>
    </ul>

    <p><b><code>MALLOC_CONF</code></b></p>
    <p>
        The amdalloc library is a variant of the <a href="http://jemalloc.net/jemalloc.3.html">jemalloc</a> library. The amdalloc
        library has tunable parameters, many of which may be changed at run-time via several mechanisms, one of which
        is the <code>MALLOC_CONF</code> environment variable.  Other methods, as well as the order in which they're referenced,
        are detailed in the jemalloc documentation's <a href="http://jemalloc.net/jemalloc.3.html#tuning">TUNING section</a>.
    </p>
    <p>
        The options that can be tuned at run time are the entries in the jemalloc documentation's
        <a href="http://jemalloc.net/jemalloc.3.html#mallctl_namespace">MALLCTL NAMESPACE section</a> whose names begin with
        "<code>opt.</code>".
    </p>
    <p>
        The options that may be encountered in SPEC CPU 2017 results are detailed here:
    </p>
    <ul>
        <li><code><a href="http://jemalloc.net/jemalloc.3.html#opt.retain">retain</a>:true</code> - Causes unused virtual memory to
            be retained for later reuse rather than discarding it.  This is the default for 64-bit Linux.</li>
        <li><code><a href="http://jemalloc.net/jemalloc.3.html#opt.thp">thp</a>:never</code> - Attempts to never utilize huge pages
            by using <code>MADV_NOHUGEPAGE</code> on all mappings.  This option has no effect except when THP is set to
            "madvise".</li>
    </ul>
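    <p>As a sketch, the two options above could be set for a single run via the environment; here <code>./benchmark</code> is a
        placeholder for a binary linked against amdalloc:</p>

```shell
# jemalloc-style tunables are comma-separated "option:value" pairs.
MALLOC_CONF=retain:true,thp:never ./benchmark
```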

    <p><b><code>PGHPF_ZMEM</code></b></p>
    <p>
       An environment variable used to initialize the allocated memory. Setting PGHPF_ZMEM to "Yes" has the effect of
       initializing all allocated memory to zero.
    </p>

    <p><b><code>GOMP_CPU_AFFINITY</code></b></p>
    <p>
        This environment variable is used to set the thread affinity for threads spawned by OpenMP.
    </p>

    <p><b><code>OMP_DYNAMIC</code></b></p>
    <p>
        This environment variable is defined as part of the OpenMP standard.
        Setting it to "false" prevents the OpenMP runtime from dynamically adjusting the number of threads to use for parallel
        execution.
    </p>
    <p>
       For more information, see chapter 4 ("Environment Variables") in the
       <a href="https://www.openmp.org/wp-content/uploads/openmp-4.5.pdf">OpenMP 4.5 Specification</a>.
    </p>

    <p><b><code>OMP_SCHEDULE</code></b></p>
    <p>
        This environment variable is defined as part of the OpenMP standard.
        Setting it to "static" causes loop iterations to be assigned to threads in round-robin fashion in the order of the thread
        number.
    </p>
    <p>
       For more information, see chapter 4 ("Environment Variables") in the
       <a href="https://www.openmp.org/wp-content/uploads/openmp-4.5.pdf">OpenMP 4.5 Specification</a>.
    </p>

    <p><b><code>OMP_STACKSIZE</code></b></p>
    <p>
        This environment variable is defined as part of the OpenMP standard and controls the size of the stack for threads created
        by OpenMP.
    </p>
    <p>
       For more information, see chapter 4 ("Environment Variables") in the
       <a href="https://www.openmp.org/wp-content/uploads/openmp-4.5.pdf">OpenMP 4.5 Specification</a>.
    </p>

    <p><b><code>OMP_THREAD_LIMIT</code></b></p>
    <p>
        This environment variable is defined as part of the OpenMP standard and limits the maximum number of OpenMP threads that
        can be created.
    </p>
    <p>
       For more information, see chapter 4 ("Environment Variables") in the
       <a href="https://www.openmp.org/wp-content/uploads/openmp-4.5.pdf">OpenMP 4.5 Specification</a>.
    </p>
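    <p>As an illustration, the OpenMP-related variables described above might be set as follows before invoking
        <code>runcpu</code>; the values shown are placeholders, not recommendations:</p>

```shell
export GOMP_CPU_AFFINITY="0-63"   # pin OpenMP threads to cores 0..63
export OMP_DYNAMIC=false          # fixed thread count
export OMP_SCHEDULE=static        # round-robin iteration assignment
export OMP_STACKSIZE=128M         # per-thread stack size
export OMP_THREAD_LIMIT=64        # cap on the number of OpenMP threads
```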
    ]]>
</sw_environment>


<!-- Page headers ============================================================================================================= -->

<header>
    <![CDATA[
    <h2>Compilers: AMD Optimizing C/C++ Compiler Suite</h2>
    ]]>
</header>

<!-- Option splitters ========================================================================================================= -->

<!--
  In the regexp that follows,
   $1   (-\S+)         matches "-flag", which is assumed to contain one or more non-whitespace characters
   $2   ([^&quot;"]*)  matches the rest of the quoted string, which may be empty. "&quot;" is for the benefit of the
                       XML parser and expands to a double quote like you'd expect it to.
-->

<!-- Optimization flags ======================================================================================================= -->

<flag name="F-O"
    class="optimization"
    >
    <example>-O</example>
    <![CDATA[
    <p>Set the optimization level to <kbd>-O2</kbd>.</p>

    <p>If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.</p>
    ]]>
    <include flag="F-O2" />
</flag>

<flag name="F-O0"
    class="optimization"
    >
    <example>-O0</example>
    <![CDATA[
    <p>Means "no optimization". This level compiles the fastest and generates the most debuggable code.</p>

    <p>If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.</p>
    ]]>
</flag>

<flag name="F-O1"
    class="optimization"
    >
    <example>-O1</example>
    <![CDATA[
    <p>Somewhere between <kbd>-O0</kbd> and <kbd>-O2</kbd>.</p>

    <p>If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.</p>
    ]]>
</flag>

<flag name="F-O2"
    class="optimization"
    >
    <example>-O2</example>
    <![CDATA[
    <p>Moderate level of optimization which enables most optimizations.  This is the default when no "<kbd>-O</kbd>" option is
        specified, or if no value is specified (i.e. "<kbd>-O</kbd>").</p>

    <p>If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.</p>
    ]]>
    <include flag="F-O1" />
</flag>

<flag name="F-O3"
    class="optimization"
    >
    <example>-O3</example>
    <![CDATA[
    <p> Like <kbd>-O2</kbd>, except that it enables optimizations that take longer to perform or that may generate larger code (in
        an attempt to make the program run faster).</p>

    <p>If multiple "O" options are used, with or without level numbers, the last such option is the one that is effective.</p>
    ]]>
    <include flag="F-O2" />
</flag>

<flag name="aocc-Ofast"
    class="optimization"
    regexp="-Ofast(?=\s|$)"
    >
    <example>-Ofast</example>
    <![CDATA[
    <p>Enables all the optimizations from <kbd>-O3</kbd> along with other aggressive optimizations that may violate strict
        compliance with language standards. Refer to the AOCC options document for the language you're using for more detailed
        documentation of optimizations enabled under <kbd>-Ofast</kbd>.</p>
    ]]>
    <include flag="F-O3" />
</flag>

<flag name="aocc-zopt"
    class="optimization"
    regexp="-zopt"
    >
    <example>-zopt</example>
    <![CDATA[
    <p>This option enables a subset of scalar, vector, and loop transformations, including improved variants of loop-invariant code motion, SLP and loop vectorization, loop fusion, loop interchange, loop unswitching, loop tiling, and loop distribution.</p>
    ]]>
</flag>

<flag name="aocc-march"
    class="optimization"
    regexp="-march=(i486|x86-64|native|znver1|znver2|znver3|znver4|znver5|auto)(?=\s|$)"
    >
    <example>-march=znver5</example>
    <![CDATA[
    <p>Specify that Clang should generate code for a specific processor family member and later. For example, if you specify
        <kbd>-march=znver1</kbd>, the compiler is allowed to generate instructions that are valid on AMD Zen processors, but
        which may not exist on earlier products. <kbd>-march=znver4</kbd> enables the AVX-512 ISA for Genoa (znver4) processors.</p>
    ]]>
</flag>

<flag name="aocc-flto"
    class="optimization"
    regexp="-flto(?=\s|$)"
    >
    <example>-flto</example>
    <![CDATA[
    <p>Generate output files in LLVM formats suitable for link time optimization. When used with <kbd>-S</kbd> this generates
        LLVM intermediate language assembly files, otherwise this generates LLVM bitcode format object files (which may be
        passed to the linker depending on the stage selection options).</p>
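    <p>A minimal LTO build might look like the following sketch, where <code>clang</code> is AOCC's C driver and
        <code>a.c</code>/<code>b.c</code> are placeholder source files:</p>

```shell
clang -O3 -flto -c a.c b.c         # emit LLVM bitcode object files
clang -O3 -flto a.o b.o -o app     # link step, where link-time optimization occurs
```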
    ]]>
</flag>

<flag name="F-m32"
    class="optimization"
    >
    <example>-m32</example>
    <![CDATA[
    <p>Generates code for a 32-bit environment. The 32-bit environment sets <kbd>int</kbd>, <kbd>long</kbd> and
        <kbd>pointer</kbd> to 32 bits and generates code that runs on any i386 system.  The compiler targets the x86 (IA-32)
        32-bit ABI. On a 32-bit host, the default is the 32-bit ABI.  On a 64-bit host, the default is the 64-bit ABI if the
        specified target platform is 64-bit; otherwise the default is 32-bit.</p>
    ]]>
</flag>

<flag name="F-m64"
    class="optimization"
    >
    <example>-m64</example>
    <![CDATA[
    <p>Generates code for a 64-bit environment. The 64-bit environment sets <kbd>int</kbd> to 32 bits and <kbd>long</kbd> and
        <kbd>pointer</kbd> to 64 bits and generates code for AMD's x86-64 architecture. The compiler targets the x86-64
        (AMD64/Intel 64) 64-bit ABI. On a 32-bit host, the default is the 32-bit ABI. On a 64-bit host, the default is the
        64-bit ABI if the specified target platform is 64-bit; otherwise the default is 32-bit.</p>
    ]]>
</flag>

<flag name="aocc-ffast-math"
    class="optimization"
    regexp="-ffast-math(?=\s|$)"
    >
    <example>-ffast-math</example>
    <![CDATA[
    <p>Enables a range of optimizations that provide faster, though sometimes less precise, mathematical operations that may
        not conform to the IEEE-754 specifications. When this option is specified, the <kbd>__STDC_IEC_559__</kbd> macro is
        ignored even if set by the system headers.</p>
    ]]>
</flag>

<flag name="aocc-associative-math"
    class="optimization"
    regexp="-fassociative-math(?=\s|$)"
    >
    <example>-fassociative-math</example>
    <![CDATA[
    <p><kbd>-fassociative-math</kbd> allows the compiler to reassociate floating-point expressions.
        This means the compiler may change the grouping of operations (for example, transforming
        <code>(a + b) + c</code> into <code>a + (b + c)</code>) in order to enable better optimization.
        Such transformations may change numerical results due to rounding differences.</p>

    <p>This option is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>.
        Using <kbd>-fno-associative-math</kbd> preserves strict evaluation order and IEEE-compliant
        rounding behavior.</p>
    ]]>
</flag>

<flag name="aocc-ffinite-math-only"
    class="optimization"
    regexp="-ffinite-math-only(?=\s|$)"
    >
    <example>-ffinite-math-only</example>
    <![CDATA[<p><kbd>-ffinite-math-only</kbd>, which is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>, allows
        optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
        Setting <kbd>-fno-finite-math-only</kbd> does the opposite: the compiler must prepare for the possible presence of
        NaNs and infinities.
    </p>]]>
</flag>

<flag name="aocc-reciprocal-math"
    class="optimization"
    regexp="-freciprocal-math(?=\s|$)"
    >
    <example>-freciprocal-math</example>
    <![CDATA[
    <p><kbd>-freciprocal-math</kbd> allows the compiler to replace floating-point division operations
        with multiplication by a reciprocal (for example, replacing <code>x / y</code> with
        <code>x * (1.0 / y)</code>). This can significantly improve performance on some architectures,
        but may reduce numerical accuracy.</p>

    <p>This option is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>.
        Using <kbd>-fno-reciprocal-math</kbd> forces the compiler to preserve exact division semantics.</p>
    ]]>
</flag>

<flag name="aocc-fopenmp"
    class="optimization"
    parallel="yes"
    regexp="-fopenmp(?:[:=]\S+)?(?=\s|$)"
    >
    <example>-fopenmp</example>
    <![CDATA[
     <p>Enables handling of OpenMP directives and generates parallel code. The OpenMP library to link can be
     specified with the <kbd>-fopenmp=library</kbd> option.</p>
    ]]>
</flag>

<flag name="F-lm"
    class="optimization"
    >
    <example>-lm</example>
    <![CDATA[
    <p>Instructs the compiler to link with system math libraries.</p>
    ]]>
</flag>

<flag name="F-lamdlibm"
    class="optimization"
    >
    <example>-lamdlibm</example>
    <![CDATA[
    <p>Instructs the compiler to link with the AMD-optimized math library (AMD LibM).</p>
    ]]>
</flag>

<flag name="aocc-muldefs"
    class="optimization"
    regexp="-z\s+muldefs(?=\s|$)"
    >
    <example>-z muldefs</example>
    <![CDATA[
    <p>Instructs the linker to use the first definition encountered for a symbol, and ignore all others.</p>
    ]]>
</flag>

<flag name="amdalloc-lib"
    class="optimization"
    regexp="-lamdalloc(?=\s|$)"
    >
    <example>-lamdalloc</example>
    <![CDATA[
    <p>amdalloc is AMD's memory allocator, based on the jemalloc library, and is available as part of the AOCC binary package.</p>
    ]]>
</flag>

<flag name="std-c"
    class="optimization"
    regexp="-std=(?:c|gnu)(?:89|99|11|17|18)(?=\s|$)"
    >
    <example>-std=gnu89</example>
    <![CDATA[
    <p>Selects the C language dialect.</p>
    ]]>
</flag>

<flag name="std-cpp"
      class="optimization"
      regexp="-std=c\+\+(?:98|03|11|14|17|2a)(?=\s|$)"
      >
  <example>-std=c++98</example>
  <![CDATA[
	   <p>Selects the C++ language dialect.</p>
  ]]>
</flag>

<flag name="std-f"
   class="optimization"
   regexp="-Mstandard"
   >
   <example>-Mstandard</example>
   <![CDATA[
   <p>Enables warnings for nonstandard and nonportable Fortran constructs, helping enforce better standard conformance without rejecting code.</p>
   ]]>
</flag>

<flag name="F-lomp"
    class="optimization"
    >
    <example>-lomp</example>
    <![CDATA[
    <p>Instructs the compiler to link with the OpenMP runtime libraries.</p>
    ]]>
</flag>


<flag name="F-lflang"
    class="optimization"
    >
    <example>-lflang</example>
    <![CDATA[
    <p>Instructs the compiler to link with flang Fortran runtime libraries.</p>
    ]]>
</flag>

<flag name="F-fvector-transform"
    class="optimization"
    >
    <example>-fvector-transform</example>
    <![CDATA[
    <p>This option enables a subset of vector transformations including improved variants of SLP and loop vectorization.</p>
    ]]>
</flag>

<flag name="F-fscalar-transform"
    class="optimization"
    >
    <example>-fscalar-transform</example>
    <![CDATA[
    <p>This option enables a subset of scalar transformations, including improved variants of code-motion optimizations such as hoisting and loop-invariant code motion.</p>
    ]]>
</flag>

<flag name="F-floop-transform"
    class="optimization"
    >
    <example>-floop-transform</example>
    <![CDATA[
    <p>This option enables a subset of loop transformations, including improved variants of loop fusion, loop interchange, loop blocking, and loop distribution.</p>
    ]]>
</flag>

<flag name="F-faggressive-loop-transform"
    class="optimization"
    >
    <example>-faggressive-loop-transform</example>
    <![CDATA[
    <p>This option enables a subset of loop transformations, including improved variants of loop unswitching, loop tiling, and versioning of loop-invariant code motion.</p>
    ]]>
</flag>

<flag name="F-enable-iv-split"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-enable-iv-split(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-enable-iv-split</example>
    <![CDATA[
    <p>Enables splitting of long live ranges of loop induction variables which span loop boundaries.  This helps reduce
        register pressure and can help avoid needless spills to memory and reloads from memory.</p>
    ]]>
</flag>

<flag name="F-enable-X86-prefetching"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-enable-X86-prefetching(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-enable-x86-prefetching</example>
    <![CDATA[
    <p>This optimization enables generation of prefetch instructions for tightly coupled loops.</p>
    ]]>
</flag>

<flag name="F-fepilog-vectorization-of-inductions"
    class="optimization"
    >
    <![CDATA[
    <p>Enables epilog vectorization of loops that require loop induction variables also to be vectorized.</p>
    ]]>
</flag>

<flag name="F-optimize-strided-mem-cost"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-optimize-strided-mem-cost(?=\s|$)"
    >
    <example>-mllvm -optimize-strided-mem-cost</example>
    <![CDATA[
    <p>Optimizes the cost model for strided accesses to memory.</p>
    ]]>
</flag>

<flag name="F-fremap-arrays"
    class="optimization"
    regexp="-fremap-arrays(?=\s|$)"
    >
    <![CDATA[
    <p>This option enables an optimization that transforms the data layout of a single dimensional array to provide better
        cache locality by analysing the access patterns.</p>
    ]]>
</flag>

<flag name="F-fvirtual-function-elimination"
    class="optimization"
    regexp="-fvirtual-function-elimination(?=\s|$)"
    >
    <![CDATA[
    <p>Enables dead virtual function elimination optimization. Requires -flto=full.</p>
    ]]>
</flag>

<flag name="F-reduce-array-computations"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-reduce-array-computations[:=]\d+(?=\s|$)"
    >
    <example>-mllvm -reduce-array-computations=3</example>
    <![CDATA[
    <p>This option eliminates array computations based on their usage. Computations on unused array
       elements and computations on zero-valued array elements are eliminated by this optimization.
       <kbd>-flto</kbd> is required, as whole-program analysis is needed to perform this optimization.</p>

    <p>Possible values:</p>
    <ul>
        <li>1: Eliminates the computations on unused array elements </li>
        <li>2: Eliminates the computations on zero valued array elements </li>
        <li>3: Eliminates the computations on unused and zero valued array elements </li>
    </ul>
    ]]>
</flag>


<flag name="F-struct-layout"
    class="optimization"
    regexp="-fstruct-layout=\d+(?=\s|$)"
    >
    <example>-fstruct-layout=9</example>
    <![CDATA[
    <p> Analyzes the whole program to determine if the structures in the code can be peeled, if dead or redundant fields can be deleted, and if
        the pointer or integer fields in the structure can be compressed. If feasible, this optimization transforms the code to
        enable these improvements. This transformation is likely to improve cache utilization and memory bandwidth. It is expected
        to improve the scalability of programs executed on multiple cores.
        It is effective only with <kbd>-flto</kbd>, as whole-program analysis is required to perform this optimization. You can choose
        different levels of aggressiveness with which this optimization is applied to your application, with 1 being the least
        aggressive and 9 being the most aggressive level.</p>

    <p><b>Possible values:</b></p>
    <ul>
        <li><b>fstruct-layout=0:</b> disables structure peeling (default).</li>
        <li><b>fstruct-layout=1:</b> enables structure peeling.</li>
        <li><b>fstruct-layout=2:</b> enables structure peeling and selectively compresses self-referential pointers in these
           structures to 32-bit pointers wherever safe.</li>
        <li><b>fstruct-layout=3:</b> enables structure peeling and selectively compresses self-referential pointers in these
           structures to 16-bit pointers wherever safe.</li>
        <li><b>fstruct-layout=4:</b> enables structure peeling, pointer compression as in level 2 and further enables
            compression of structure fields which are of 64-bit integer type to 32-bit integer type. This is performed under a
            strict safety check.</li>
        <li><b>fstruct-layout=5:</b> enables structure peeling, pointer compression as in level 3 and further enables compression
            of structure fields which are of 64-bit integer type to 32-bit integer type. This is performed under a strict safety
            check. </li>
	<li><b>fstruct-layout=6:</b> enables structure peeling, pointer compression as in level 2 and further enables compression
            of structure fields which are of type 64-bit integer type to 16-bit integer type. This is performed under a strict
            safety check. </li>
        <li><b>fstruct-layout=7:</b> enables structure peeling, pointer compression as in level 3 and further enables compression
            of structure fields which are of type 64-bit integer type to 16-bit integer type. This is performed under a strict
            safety check. </li>
        <li><b>fstruct-layout=8:</b> enables structure peeling, pointer compression, 64 bit integer type compression
            as in level 6 and creates optimal ordering of peeled structure fields which could improve runtime performance. </li>
        <li><b>fstruct-layout=9:</b> enables structure peeling, pointer compression, 64 bit integer type compression
            as in level 7 and creates optimal ordering of peeled structure fields which could improve runtime performance. </li>

     </ul>

       <b>Note:</b>
	<p>fstruct-layout=4 and fstruct-layout=5 are derived from fstruct-layout=2 and fstruct-layout=3 respectively with the added
            feature of safe compression of 64-bit integer fields to 32-bit integer fields in structures. Going from
            fstruct-layout=4 to fstruct-layout=5 may result in higher performance if the pointer values are such that the pointers
            can be compressed to 16-bits.</p>

        <p>fstruct-layout=6 and fstruct-layout=7 are derived from fstruct-layout=2 and fstruct-layout=3 respectively, with the
           added feature of safe compression of 64 bit integer fields to 16 bit integer in structures. Going from fstruct-layout=6
           to fstruct-layout=7 may result in higher performance if the pointer values are such that the pointers can be
           compressed to 16-bits. </p>
    ]]>
</flag>

<flag name="F-fprofile-instr-generate"
    class="optimization"
    >
    <![CDATA[
    <p>Turns on LLVM's instrumentation-based profiling.</p>
    ]]>
</flag>

<flag name="F-fenable-aggressive-gather"
    class="optimization"
    >
    <![CDATA[
    <p>This option enables generation of gather instructions for cases where it is profitable.</p>
    ]]>
</flag>

<flag name="F-fstrip-mining"
    class="optimization"
    >
    <![CDATA[
    <p>Enables loop strip mining optimization. This optimization breaks a large loop into smaller segments or strips to improve temporal and spatial locality.</p>
    ]]>
</flag>

<flag name="F-fprofile-instr-use"
    class="optimization"
    >
    <![CDATA[
    <p>Uses the profiling files generated from a program compiled with <kbd>-fprofile-instr-generate</kbd> to guide
        optimization decisions.</p>
    ]]>
</flag>

<flag name="F-fgnu89-inline"
    class="optimization"
    >
    <example>-fgnu89-inline</example>
    <![CDATA[
    <p>In the <a href="https://www.spec.org/cpu2017/Docs/benchmarks/502.gcc_r.html">502/602.gcc</a> benchmark description,
        "multiple definitions of symbols" is listed under the "Known Portability Issues" section, and this option is one of the
        suggested workarounds.  This option causes Clang to revert to the same inlining behavior that GCC does when in pre-C99
        mode.</p>
    ]]>
</flag>

<flag name="F-inline-threshold"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-inline-threshold[:=]\d+(?=\s|$)"
    >
   <example>-Wl,-mllvm -Wl,-inline-threshold=100</example>
    <![CDATA[
    <p>Sets the compiler's inlining threshold level to the value passed as the argument.  The inline threshold is used in the
        inliner heuristics to decide which functions should be inlined.</p>
    ]]>
</flag>

<flag name="F-inline-recursion"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-inline-recursion[:=]\d+(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-inline-recursion=4</example>
    <![CDATA[
    <p>Enables inlining of recursive functions based on heuristics, with level 4 being the most aggressive. Higher levels may lead
        to code bloat due to the expansion of recursive functions at call sites.</p>

    <p>Levels:</p>
    <ul>
        <li>0 [DEFAULT]: Disables inlining for recursive functions.</li>
        <li>1: Recursive functions are inlined up to depth 1. These recursive functions are chosen based on cost heuristics.</li>
        <li>2: Same as level 1, but with more aggressive heuristics.</li>
        <li>3: All recursive functions are inlined up to depth 1.</li>
        <li>4: All recursive functions are inlined up to depth 10.</li>
    </ul>
    ]]>
</flag>

<flag name="F-lsr-in-nested-loop"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-lsr-in-nested-loop(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-lsr-in-nested-loop</example>
    <![CDATA[
    <p>Enables loop strength reduction for nested loop structures.  By default, the compiler performs loop strength reduction
        only for the innermost loop.</p>
    ]]>
</flag>

<flag name="F-ldist-scalar-expand"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-ldist-scalar-expand(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-ldist-scalar-expand</example>
    <![CDATA[
    <p>Enables loop distribution with scalar expansion for better vectorization.</p>
    ]]>
</flag>

<flag name="F-mrecursive"
    class="optimization"
    regexp="-Mrecursive(?=\s|$)"
    >
    <example>-Mrecursive</example>
    <![CDATA[
    <p>Allocates local variables on the stack, thus allowing recursion.
        SAVEd, data-initialized, or namelist members are always allocated
        statically, regardless of the setting of this switch.</p>
    ]]>
</flag>

<flag name="F-suppress-fmas"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-suppress-fmas(?=\s|$)"
    >
    <example>-mllvm -suppress-fmas</example>
    <![CDATA[
    <p>Disables generation of FMA instructions in chains where the output of one FMA
        instruction is used as an input to another FMA instruction.</p>
    ]]>
</flag>

<flag name="F-unroll-threshold"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-unroll-threshold[:=]\d+(?=\s|$)"
    >
    <example>-mllvm -unroll-threshold=100</example>
    <![CDATA[
    <p>Sets the limit at which loops will be unrolled.  For example, if the unroll threshold is set to 100, then only loops with 100
        or fewer instructions will be unrolled.</p>
    ]]>
</flag>

<flag name="F-loop-unswitch-threshold"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-loop-unswitch-threshold[:=]\d+(?=\s|$)"
    >
    <example>-mllvm -loop-unswitch-threshold=100</example>
    <![CDATA[
    <p>Sets the limit at which loops will be unswitched.  For example, if the unswitch threshold is set to 100, then only loops with 100
        or fewer instructions will be unswitched.</p>
    ]]>
</flag>

<flag name="F-unroll-aggressive"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-unroll-aggressive(?=\s|$)"
    >
    <example>-mllvm -unroll-aggressive</example>
    <![CDATA[
    <p>Enables aggressive heuristics for loop unrolling.</p>
    ]]>
</flag>

<flag name="aocc-unroll-loops"
    class="optimization"
    regexp="-funroll-loops(?=\s|$)"
    >
    <example>-funroll-loops</example>
    <![CDATA[
    <p>This option instructs the compiler to unroll loops wherever possible.</p>
    ]]>
</flag>

<flag name="F-fveclib"
    class="optimization"
    regexp="-fveclib(?:[:=]\S+)(?=\s|$)"
    >
    <example>-fveclib=AMDLIBM</example>
    <![CDATA[
    <p>Use the given vector functions library.</p>
    ]]>
</flag>


<flag name="F-use-vzeroupper"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-x86-use-vzeroupper(?:[:=](?:true|false))(?=\s|$)"
    >
    <example>-mllvm -x86-use-vzeroupper=false</example>
    <![CDATA[
    <p>Controls generation of the vzeroupper instruction before a transfer of
        control flow.  Not emitting the vzeroupper instruction can help minimize the AVX-to-SSE transition penalty.</p>
    ]]>
</flag>

<flag name="F-align-all-nofallthru-blocks"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-align-all-nofallthru-blocks(?:[:=]\d+)?(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-align-all-nofallthru-blocks=6</example>
    <![CDATA[
    <p>Forces the alignment of all blocks that have no fall-through predecessors (i.e., it does not add nops that would be executed). The value is in log2 format (e.g., 4 means align on 16-byte boundaries).</p>
    ]]>
</flag>

<flag name="F-allow-multiple-definition"
    class="optimization"
    regexp="-Wl,-allow-multiple-definition(?=\s|$)"
    >
    <example>-Wl,-allow-multiple-definition</example>
    <![CDATA[
    <p>Suppresses the error normally issued when multiple symbols with the same name are linked.</p>
    ]]>
</flag>

<flag name="F-Waocc-no-return-type"
   class="other"
   regexp="-Wno-return-type"
   >
   <example>-Wno-return-type</example>
   <![CDATA[<p>
      Do not warn about functions defined with a return type that defaults to "int", or that return something other than
      their declared type.
   </p>]]>
</flag>

<flag name="F-Wno-unused-command-line-argument"
   class="other"
   regexp="-Wno-unused-command-line-argument"
   >
   <example>-Wno-unused-command-line-argument</example>
   <![CDATA[<p>
      Do not warn about unused command line arguments.
   </p>]]>
</flag>
<flag name="F-Wno-implicit-int"
   class="other"
   regexp="-Wno-implicit-int"
   >
   <example>-Wno-implicit-int</example>
   <![CDATA[<p>
      Disables the warning issued when a declaration does not specify a type.
   </p>]]>
</flag>

<flag name="F-fvisibility"
    class="optimization"
    regexp="-fvisibility[:=]\S+?(?=\s|$)"
    >
    <example>-fvisibility=hidden</example>
    <![CDATA[
    <p>Set the default symbol visibility for all global declarations.</p>
    ]]>
</flag>

<flag name="F-enable-aggressive-gather"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-enable-aggressive-gather(?:[:=](?:true|false))(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-enable-aggressive-gather=true</example>
    <![CDATA[
    <p>Enables generation of gather instructions for cases where it is profitable.</p>
    ]]>
</flag>

<flag name="F-enable-masked-gather-sequence"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-enable-masked-gather-sequence(?:[:=](?:true|false))(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-enable-masked-gather-sequence=false</example>
    <![CDATA[
    <p>Toggle option for masked gather sequence generation.</p>
    ]]>
</flag>

<flag name="aocc-fno-PIE"
    class="optimization"
    regexp="-fno-PIE(?=\s|$)"
    >
    <example>-fno-PIE</example>
    <![CDATA[
    <p>Do not generate position-independent code for the executable. Usually this option is used to compile code that will be linked using the -no-pie linker option.</p>
    ]]>
</flag>

<flag name="aocc-no-pie"
    class="optimization"
    regexp="-no-pie(?=\s|$)"
    >
    <example>-no-pie</example>
    <![CDATA[
    <p>Do not produce a dynamically linked, position-independent executable (linker option).</p>
    ]]>
</flag>

<flag name="F-extra-inliner"
    class="optimization"
    regexp="(?:-mllvm\s+|-Wl,-mllvm\s+-Wl,)-extra-inliner(?=\s|$)"
    >
    <example>-Wl,-mllvm -Wl,-extra-inliner</example>
    <![CDATA[
    <p>Schedules an additional inlining pass after a few other optimizations.</p>
    ]]>
</flag>

<flag name="F-mrecip"
    class="optimization"
    regexp="-mrecip[:=]\S+(?=\s|$)"
    >
    <example>-mrecip=none</example>
    <![CDATA[
    <p>This option enables use of the RCPSS and RSQRTSS instructions, with an additional Newton-Raphson step to increase precision, instead of DIVSS and SQRTSS. The argument <kbd>none</kbd> disables this replacement.</p>
    ]]>
</flag>

<flag name="F-fprofile-generate"
    class="optimization"
    >
    <![CDATA[
    <p>Generate instrumented code to collect execution counts.</p>
    ]]>
</flag>

<flag name="F-fprofile-use"
    class="optimization"
    >
    <![CDATA[
    <p>Use instrumentation data for profile-guided optimization.</p>
    ]]>
</flag>

<!-- Portability flags ======================================================================================================== -->

<flag name="aocc-no-fast-math"
    class="portability"
    regexp="-fno-fast-math(?=\s|$)"
    >
    <example>-fno-fast-math</example>
    <![CDATA[
    <p><kbd>-fno-fast-math</kbd> disables the optimizations enabled by <kbd>-ffast-math</kbd>.</p>
    ]]>
</flag>

<flag name="aocc-no-ffinite-math-only"
    class="portability"
    regexp="-fno-finite-math-only(?=\s|$)"
    >
    <example>-fno-finite-math-only</example>
    <![CDATA[<p><kbd>-ffinite-math-only</kbd>, which is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>, allows
        optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
        Setting <kbd>-fno-finite-math-only</kbd> does the opposite: the compiler must prepare for the possible presence of
        NaNs and infinities.
    </p>]]>
</flag>

<flag name="aocc-no-associative-math"
    class="portability"
    regexp="-fno-associative-math(?=\s|$)"
    >
    <example>-fno-associative-math</example>
    <![CDATA[
    <p><kbd>-fassociative-math</kbd> allows the compiler to reassociate floating-point expressions.
        This means the compiler may change the grouping of operations (for example, transforming
        <code>(a + b) + c</code> into <code>a + (b + c)</code>) in order to enable better optimization.
        Such transformations may change numerical results due to rounding differences.</p>

    <p>This option is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>.
        Using <kbd>-fno-associative-math</kbd> preserves strict evaluation order and IEEE-compliant
        rounding behavior.</p>
    ]]>
</flag>

<flag name="no-aocc-reciprocal-math"
    class="portability"
    regexp="-fno-reciprocal-math(?=\s|$)"
    >
    <example>-fno-reciprocal-math</example>
    <![CDATA[
    <p><kbd>-freciprocal-math</kbd> allows the compiler to replace floating-point division operations
        with multiplication by a reciprocal (for example, replacing <code>x / y</code> with
        <code>x * (1.0 / y)</code>). This can significantly improve performance on some architectures,
        but may reduce numerical accuracy.</p>

    <p>This option is implied by <kbd>-ffast-math</kbd> and <kbd>-Ofast</kbd>.
        Using <kbd>-fno-reciprocal-math</kbd> forces the compiler to preserve exact division semantics.</p>
    ]]>
</flag>

<flag name="F-mbyteswapio"
    class="portability"
    regexp="-Mbyteswapio(?=\s|$)"
    >
    <example>-Mbyteswapio</example>
    <![CDATA[
    <p>The binary datasets for some of the Fortran benchmarks in the SPEC CPU suites are stored in big-endian format.  This
        option is necessary for those datasets to be read in correctly.</p>
    ]]>
</flag>

<flag name="F-D_FILE_OFFSET_BITS"
    class="portability"
    regexp="-D_FILE_OFFSET_BITS=\d+(?=\s|$)"
    >
    <![CDATA[
    <p>Specifies the size of the <kbd>off_t</kbd> data type.</p>
    ]]>
</flag>

<flag name="aocc-unsigned-char"
    class="portability"
    regexp="-funsigned-char(?=\s|$)"
    >
    <example>-funsigned-char</example>
    <![CDATA[
    <p>This option instructs the compiler to treat the <code>char</code> type as unsigned.</p>
    ]]>
</flag>

<flag name="aocc-no-enum-constexpr-conversion"
   regexp="-Wno-enum-constexpr-conversion"
   class="portability">
   <example>-Wno-enum-constexpr-conversion</example>
   <![CDATA[
   <p><kbd>-Wno-enum-constexpr-conversion</kbd> suppresses warnings about implicit conversions
      involving enumeration values in constant expressions that may change the value or produce
      an out-of-range enumeration.</p>

   <p>This option disables diagnostics enabled by <kbd>-Wenum-constexpr-conversion</kbd> and
      does not affect program semantics or generated code. It is commonly used for legacy or
      generated C++ code that intentionally relies on enum–integer conversions in constant
      expressions.</p>
   ]]>
</flag>



<!-- Flags that identify the compiler being used ============================================================================== -->

<flag name="clang-c"
    class="compiler"
    regexp="\bclang(?=\s|$)"
    >
    <example>clang</example>
    <![CDATA[
    <p>clang is a C compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking.
        Depending on which high-level mode setting is passed, Clang will stop before doing a full link.</p>
    ]]>
</flag>

<flag name="clang-cpp"
    class="compiler"
    regexp="\bclang\+\+(?=\s|$)"
    >
    <example>clang++</example>
    <![CDATA[
    <p>clang++ is a C++ compiler which encompasses preprocessing, parsing, optimization, code generation, assembly, and linking.
        Depending on which high-level mode setting is passed, Clang will stop before doing a full link.</p>
    ]]>
</flag>

<flag name="flang"
    class="compiler"
    regexp="\bflang(?=\s|$)"
    >
    <example>flang</example>
    <![CDATA[
    <p>flang is a Fortran compiler which encompasses parsing, optimization, code generation, assembly, and linking. Depending on
        which high-level mode setting is passed, Flang will stop before doing a full link.</p>
    ]]>
</flag>


<!-- "Other" flags ============================================================================================================ -->

<flag name="Link_path"
    class="other"
    regexp="-L\s*\S+(?=\s|$)"
    >
    <example>-L/path/to/libs</example>
    <![CDATA[
    <p>Specifies a directory to search for libraries. Use <kbd>-L</kbd> to add directories to the search path for library
        files.  Multiple <kbd>-L</kbd> options are valid. However, the position of each <kbd>-L</kbd> option matters
        relative to any <kbd>-l</kbd> options supplied.</p>
    ]]>
</flag>

<flag name="Include_path"
    class="other"
    regexp="-I\s*\S+(?=\s|$)"
    >
    <example>-I /path/to/include</example>
    <![CDATA[
    <p>Specifies a directory to search for include files. Use <kbd>-I</kbd> to add directories to the search path for include
        files.  Multiple <kbd>-I</kbd> options are valid.</p>
    ]]>
</flag>


<!-- vim: set ai filetype=xml syntax=xml expandtab nosmarttab ts=8 sw=4 colorcolumn=132: -->
</flagsdescription>
