SVN Revision: 1139
Last updated: 2017-03-29
This document describes the various fields in the standard Chauffeur WDK reports. The different result files are available in HTML and TXT format.
(To check for possible updates to this document, please see https://www.spec.org/chauffeur-wdk/docs/Chauffeur-Result_File_Fields.html)
Selecting one of the following will take you to the detailed table of contents for that section:
1. Chauffeur WDK
2. Main Report File
3. Top Bar
4. Summary
5. Aggregate SUT Data
6. System Under Test
7. SUT Notes
8. Aggregate Electrical and Environmental Data
9. Details Report File
10. Measurement Devices
11. Worklet Performance and Power Details
1. Chauffeur WDK
1.1 Test harness - Chauffeur
1.1.1 Chauffeur Director
1.1.2 Chauffeur Host
1.1.3 Chauffeur Client
1.1.4 Chauffeur Reporter
1.2 Workloads
1.3 The Power and Temperature Daemon
1.4 Result Validation and Report Generation
1.5 References
2. Main Report File
3. Top Bar
3.1 Test sponsor
3.2 Software Availability
3.3 Tested by
3.4 Hardware/Firmware Availability
3.5 SPEC license #
3.6 System Source
3.7 Test Location
3.8 Test Date
4. Summary
4.1 Result Chart
4.2 Result Table
4.2.1 Workload
4.2.2 Worklet
4.2.3 Load Level
4.2.4 Performance Score
4.2.5 Average Active Power (W)
4.2.6 Efficiency Score
5. Aggregate SUT Data
5.1 # of Nodes
5.2 # of Processors
5.3 Total Physical Memory
5.4 # of Cores
5.5 # of Storage Devices
5.6 # of Threads
6. System Under Test
6.1 Shared Hardware
6.1.1 Enclosure
6.1.2 Form Factor
6.1.3 Server Blade Bays (populated / available)
6.1.4 Additional Hardware
6.1.5 Management Firmware Version
6.1.6 Power Supply Quantity (active / populated / bays)
6.1.7 Power Supply Details
6.1.8 Power Supply Operating Mode
6.1.9 Available Power Supply Modes
6.1.10 Network Switches (active / populated / bays)
6.1.11 Network Switch
6.2 Hardware per Node
6.2.1 Hardware Vendor
6.2.2 Model
6.2.3 Form Factor
6.2.4 CPU Name
6.2.5 CPU Frequency (MHz)
6.2.6 Number of CPU Sockets (populated / available)
6.2.7 CPU(s) Enabled
6.2.8 Number of NUMA Nodes
6.2.9 Hardware Threads / Core
6.2.10 Primary Cache
6.2.11 Secondary Cache
6.2.12 Tertiary Cache
6.2.13 Other Cache
6.2.14 Additional CPU Characteristics
6.2.15 Total Memory Available to OS
6.2.16 Total Memory Amount (populated / maximum)
6.2.17 Total Memory Slots (populated / available)
6.2.18 Memory DIMMs
6.2.19 Memory Operating Mode
6.2.20 Power Supply Quantity (active / populated / bays)
6.2.21 Power Supply Details
6.2.22 Power Supply Operating Mode
6.2.23 Available Power Supply Modes
6.2.24 Disk Drive Bays (populated / available)
6.2.25 Disk Drive
6.2.26 Network Interface Cards
6.2.27 Management Controller or Service Processor
6.2.28 Expansion Slots (populated / available)
6.2.29 Optical Drives
6.2.30 Keyboard
6.2.31 Mouse
6.2.32 Monitor
6.2.33 Additional Hardware
6.3 Software per Node
6.3.1 Power Management
6.3.2 Operating System (OS)
6.3.3 OS Version
6.3.4 Filesystem
6.3.5 Other Software
6.3.6 Boot Firmware Version
6.3.7 Management Firmware Version
6.3.8 JVM Vendor
6.3.9 JVM Version
6.3.10 Client Configuration ID
7. SUT Notes
8. Aggregate Electrical and Environmental Data
8.1 Line Standard
8.2 Elevation (m)
8.3 Minimum Temperature (°C)
9. Details Report File
10. Measurement Devices
10.1 Power Analyzer
10.1.1 Hardware Vendor
10.1.2 Model
10.1.3 Serial Number
10.1.4 Connectivity
10.1.5 Input Connection
10.1.6 Metrology Institute
10.1.7 Calibration Laboratory
10.1.8 Calibration Label
10.1.9 Date of Calibration
10.1.10 PTDaemon Version
10.1.11 Setup Description
10.2 Temperature Sensor
10.2.1 Hardware Vendor
10.2.2 Model
10.2.3 Driver Version
10.2.4 Connectivity
10.2.5 PTDaemon Version
10.2.6 Sensor Placement
11. Worklet Performance and Power Details
11.1 Total Clients
11.2 CPU Threads per Client
11.3 Sample Client Command-line
11.4 Performance Data
11.4.1 Phase
11.4.2 Interval
11.4.3 Actual Load
11.4.4 Score
11.4.5 Elapsed Measurement Time (s)
11.4.6 Transaction
11.4.7 Transaction Count
11.4.8 Transaction Time (s)
11.5 Power Data
11.5.1 Phase
11.5.2 Interval
11.5.3 Analyzer
11.5.4 Average Voltage (V)
11.5.5 Average Current (A)
11.5.6 Current Range Setting
11.5.7 Average Power Factor
11.5.8 Average Active Power (W)
11.5.9 Power Measurement Uncertainty (%)
11.5.10 Minimum Temperature (°C)
The SPEC Chauffeur Worklet Development Kit (WDK) is a framework for creating worklets to evaluate the power and performance of server class computers.
The tool consists of several software modules: the test harness (Chauffeur), one or more workloads, and the Power and Temperature Daemon (PTDaemon).
These modules work together in real-time to collect server power consumption and performance data by exercising the System Under Test (SUT) with predefined workloads.
The test harness (called Chauffeur) handles the logistics of measuring and recording power data and controls the software installed on both the SUT and the controller system.
The Director reads test parameters and environment description information from the Chauffeur configuration files and controls the execution of the test based on this information. It is the central control instance of Chauffeur and communicates with the other software modules described below over TCP/IP. It also collects the result data from the worklet instances and stores it in the basic result file "results.xml".
This module is the main Chauffeur module on the System Under Test (SUT). It must be launched manually by the tester and it starts the Chauffeur Client instances executing the workloads under control of the Chauffeur Director.
One or more client instances each executing its own Java Virtual Machine (JVM) are started by the Chauffeur Host for every worklet. Each Client executes worklet code to stress the SUT and reports the performance data back to the Chauffeur Director for each phase of the test.
The Reporter gathers the configuration, environmental, power and performance data from the "results.xml" file after a run is complete and compiles it into HTML and text or CSV format result files. It is started automatically by the Director after all workloads have finished, creating the default set of report files. Alternatively, it can be started manually to generate special report files from the information in the basic result file "results.xml".
A test suite includes one or more workloads, each of which is a group of one or more related worklets. Each worklet contains application logic to exercise the SUT in some way.
The Power and Temperature Daemon (PTDaemon) is a single executable program that communicates with a power analyzer or a temperature sensor via the server's native RS-232 port, USB port or additionally installed interface cards, e.g. GPIB. It reports the power consumption or temperature readings to the Director via a TCP/IP socket connection. It supports a variety of RS-232, GPIB and USB interface command sets for many different power analyzers and temperature sensors. PTDaemon is the only Chauffeur software module that is not Java based. Although it can quite easily be set up and run on a server other than the controller, it typically resides on the controller.
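The module relationships described above can be summarized schematically (a simplified sketch; PTDaemon typically runs on the controller but may reside on another server):

    Controller                                System Under Test (SUT)
    +--------------------+     TCP/IP     +---------------------------+
    | Chauffeur Director |<-------------->| Chauffeur Host            |
    | Chauffeur Reporter |                |   +- Chauffeur Client 1   |
    | PTDaemon           |                |   +- Chauffeur Client ... |
    +--------------------+                +---------------------------+
        |  RS-232 / USB / GPIB
        v
    Power analyzer / temperature sensor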
At the beginning of each run, the test configuration parameters are logged in order to be available for later conformance checks. Warnings are displayed for any non-compliant properties and printed in the final report; however, the test will run to completion, producing a report that is marked invalid.
At the end of a test run the Chauffeur Reporter is called to generate the report files described here from the data given in the configuration files and collected during the test run. Basic validity checks are performed to ensure that interval length, target load throughput, temperature, etc. are within the defined limits.
More detailed information can be found in the documents shown in the following table. For the latest versions, please consult SPEC's website.
In this document all references to configurable parameters are printed with different colors using the names from the configuration files:
Parameters from "test-environment.xml" are shown in red: <TestInformation><TestSponsor>
Parameters from "config.xml" are shown in light purple: <suite><definitions><launch-definition><num-clients>.
The configuration files named above ("test-environment.xml" and "config.xml") are delivered with the test kit.
This section gives an overview of the information and result fields in the main report file "results.html/.txt".
The report file headline reads Chauffeur™ Report. Results obtained with 32-bit JVMs include a (32-bit category) designation in the headline.
The predefined default values of the parameters in the "test-environment.xml" file are intentionally incorrect. To highlight this, all parameters are defined with a leading underscore. The Reporter recognizes this and highlights these fields with a yellow background, except for the system name in the headline of the general information table.
The top bar gives general information regarding this test run.
The top bar header shows the name of the hardware vendor (see Hardware Vendor) plus the model name (see Model), potentially followed by a "(Historical System)" designation for systems which are no longer commercially available.
The name of the organization or individual that sponsored the test.
Generally, this is the name of the license holder.
<TestSponsor>
The date when all the software necessary to run the result
is generally available.
<Software><Availability>
The date must be specified in the format: YYYY-MM
For example, if the operating system is available in 2013-02, but the JVM is not available until 2013-04, then the software availability date is 2013-04 (unless some other component pushes it out farther).
The name of the organization or individual that ran the test and submitted
the result.
<TestedBy>
The date when all the hardware and related firmware modules
necessary to run the result are generally available.
<Hardware><Availability>
The date must be specified in the format: YYYY-MM
For example, if the CPU is available in 2013-02, but the Firmware version used for the test is not available until 2013-04, then the hardware availability date is 2013-04 (unless some other component pushes it out farther).
For systems which are no longer commercially available the original availability date must be specified here and the model name must be marked with the supplement "(Historical System)" (see Model).
Please see OSG Policy section 2.3.5 on SUT Availability for Historical Systems https://www.spec.org/osg/policy.html#s2.3.5
The SPEC license number of the organization or individual that ran the
test.
<SpecLicense>
Single Supplier or Parts Built.
<SystemUnderTest><SystemSource>
The name of the city, state, and country where the test took place. If there are
installations in multiple geographic locations, all of them must be listed in
this field.
<Location>
The date on which the test for this result was performed. This information is provided automatically by the test software based on the timer function of the Controller system.
This section describes the main results for all worklets in a table and as a graph.
The report includes a result summary chart for each worklet. This chart shows the results for each measurement interval for that worklet. The result chart is divided into three sections.
The result table numerically displays the performance, power, and efficiency scores for each measurement interval for each worklet. Each interval is presented on a separate row.
This column of the result table shows the names of the workloads. A workload may include one or more worklets.
This column of the result table shows the names of the worklets. The execution of the worklet will include one or more measurement intervals.
This column of the result table lists the measurement intervals that were executed for this worklet.
The performance score for a load level indicates the performance that was achieved when running that interval. For most worklets, the score is a measure of the transaction rate (throughput) obtained during the measurement period. Some worklets may include custom score calculations to report a different value as the performance score. Results obtained on 32-bit JVMs include a "(32-bit)" designation in the column header.
This column of the result table shows the average of the power consumption (in Watts) that was measured while running this load level. When multiple power analyzers are configured, this value is the sum of the readings for all of the analyzers.
The efficiency score for each load level is calculated as its Performance Score divided by the Average Active Power (W) measured during that interval. Results obtained on 32-bit JVMs include a "(32-bit)" designation in the column header.
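For example (illustrative values), a load level with a Performance Score of 1000 and an Average Active Power of 250 W would be reported with an Efficiency Score of 1000 / 250 = 4.0.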
Aggregated values for several system configuration parameters are reported in this table.
The total number of all nodes used for running the test. The reported values are calculated by the test software from the information given in the configuration files and by the test startup scripts.
The number of processor chips per node. For multi-node results, the chip counts of all nodes used for running the test are added together. The reported values are calculated by the test software from the information given in the configuration files and the test discovery scripts.
The total memory size for all systems used to run the test. The reported values are calculated by the test software from the information given in the configuration files and by the test discovery scripts.
The total number of all cores used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
The total number of all storage devices used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
The total number of all hardware threads used for running the test. The reported value is calculated by the test software from the information given in the configuration files and by the test discovery scripts.
The following section of the report file describes the hardware and the software of the System Under Test (SUT) used to run the reported tests with the level of detail required to reproduce this result.
A table including the description of the shared hardware components. This table will be printed for multi node results only and is not included in single node report files.
The model name identifying the enclosure housing the tested nodes.
<SystemUnderTest><SharedHardware><Enclosure>
The full SUT form factor (including all nodes and any shared hardware).
<SystemUnderTest><SharedHardware><FormFactor>
For rack-mounted systems, specify the number of rack units. For other types of enclosures, specify "Tower" or "Other".
This field is divided into 2 parts separated by a slash. The first part
specifies the number of bays populated with a compute node or server blade.
The second part shows the number of available bays for server blades in the
enclosure.
<SystemUnderTest><SharedHardware><BladeBays><Populated>
<SystemUnderTest><SharedHardware><BladeBays><Available>
Any additional shared equipment added to improve performance and required to
achieve the reported scores.
<SharedHardware><Other><OtherHardware>
For each additional type of hardware component the quantity and description need to be specified.
A version number or string identifying the management firmware running on
the SUT enclosure or "None" if no management controller was
installed.
<SharedHardware><Firmware><Management><Version>
This field is divided into 3 parts separated by slashes.
The first part shows the number of active power supplies, which might be
lower than the next number, if some power supplies are in standby mode and
used in case of failure only.
<SharedHardware><PowerSupplies><PowerSupply><Active>
The second part gives the number of bays populated with a power supply.
<SharedHardware><PowerSupplies><PowerSupply><Populated>
The third part describes the number of power supply bays available in the
SUT enclosure.
<SharedHardware><PowerSupplies><Bays>
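For example (illustrative values), "2 / 3 / 4" would indicate two active power supplies, three bays populated with a power supply, and four power supply bays available in the enclosure.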
The number and watts rating of this power supply unit (PSU) plus the supplier and the part number to identify it.
In the case of a "Parts Built" system (see:
System Source) the manufacturer name and the
part number of the PSU must be specified here.
<SharedHardware><PowerSupplies><PowerSupply><Active>
<SharedHardware><PowerSupplies><PowerSupply><RatingInWatts>
<SharedHardware><PowerSupplies><PowerSupply><Description>
There may be multiple lines in this field if different types of PSUs have been used for this test, one for each PSU type.
Power supply unit (PSU) operating mode active for running this test. Must be
one of the available modes as described in the field
Available Power Supply Modes.
<SharedHardware><PowerSupplies><OperatingMode>
The available power supply unit (PSU) modes depend on the capabilities of
the tested server hardware and firmware.
<SharedHardware><PowerSupplies><AvailableModes><Mode>
Typical power supply modes are illustrated by the following examples:
For example: 2 + 1 Spare PSU
Two PSUs are active, and the third PSU is inactive in standby mode. System operation is guaranteed even if one of the three PSUs fails.
For example: 2 + 2 (2 AC sources)
Two PSUs are active, and the other two PSUs are inactive in standby mode. Two of the four PSUs are connected to each of two separate AC sources. This ensures that the system can continue operation even if a power line or a single PSU fails.
This field is divided into 3 parts separated by slashes.
The first part shows the number of active network switches, which might be
lower than the next number, if some network switches are in standby mode and
not used for running the test.
<SharedHardware><NetworkSwitches><NetworkSwitch><Active>
The second part gives the number of bays populated with a network
switch.
<SharedHardware><NetworkSwitches><NetworkSwitch><Populated>
The third part describes the number of network switch bays available in the
SUT enclosure.
<SharedHardware><NetworkSwitches><Bays>
"N/A" if no network switch was used.
The number, a description (manufacturer and model name), and details
(special settings, etc.) of the network switch(es) used for this
test.
<SharedHardware><NetworkSwitches><NetworkSwitch><Active>
<SharedHardware><NetworkSwitches><NetworkSwitch><Description>,
"N/A" if no network switch was used.
<SharedHardware><NetworkSwitches><NetworkSwitch><Details>
This section describes in detail the different hardware components of the system under test which are important to achieve the reported result.
Company which sells the hardware.
<SystemUnderTest><Node><Hardware><Vendor>
The model name identifying the system under test.
<SystemUnderTest><Node><Hardware><Model>
Systems which are no longer commercially available should be marked with the
supplement "(Historical System)".
<SystemUnderTest><Node><Hardware><Historical>
Please see OSG Policy section 2.3.5 on SUT Availability for Historical Systems https://www.spec.org/osg/policy.html#s2.3.5.
The form factor for this system.
<SystemUnderTest><Node><Hardware><FormFactor>
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".
A manufacturer-determined processor formal name.
<SystemUnderTest><Node><Hardware><CPU><Name>
Trademark or copyright characters must not be included in this string. No additional information is allowed here, e.g. turbo boost frequency or hardware threads.
Examples:
The nominal (marked) clock frequency of the CPU, expressed in
megahertz.
<SystemUnderTest><Node><Hardware><CPU><FrequencyMHz>.
If the CPU is capable of automatically running the processor core(s) faster
than the nominal frequency and this feature is enabled, then this additional
information must be listed here, at least the maximum frequency and the use
of this feature.
<SystemUnderTest><Node><Hardware><CPU><TurboFrequencyMHz>
<SystemUnderTest><Node><Hardware><CPU><TurboMode>
Furthermore if the enabled/disabled status of this feature is changed from
the default setting this must be documented in the
System Under Test Notes field.
<SystemUnderTest><Notes><Note>
Example:
This field is divided into 2 parts separated by a slash. The first part
gives the number of sockets populated with a CPU chip as used for this test
result and the second part the number of available CPU sockets.
<SystemUnderTest><Node><Hardware><CPU><PopulatedSockets>
<SystemUnderTest><Node><Hardware><AvailableSockets>
The CPUs that were enabled and active during the test run, displayed as the
number of cores, number of processors, and the number of cores per
processor.
<SystemUnderTest><Node><Hardware><CPU><Cores>
<SystemUnderTest><Node><Hardware><CPU><PopulatedSockets>
<SystemUnderTest><Node><Hardware><CPU><CoresPerChip>
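For example (illustrative values), a node with two populated sockets and eight cores per chip might be reported as "16 cores, 2 chips, 8 cores/chip".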
The number of Non-Uniform Memory Access (NUMA) nodes used for this test.
Typically this is equal to the number of populated sockets times 1 or 2
depending on the CPU architecture.
<SystemUnderTest><Node><Hardware><NumaNodes>
The total number of active hardware threads for this test and the number of
hardware threads per core given in parentheses.
<SystemUnderTest><Node><Hardware><CPU><HardwareThreadsPerCore>
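For example (illustrative values), a node with 16 active cores and two hardware threads per core would report 32 hardware threads (2 per core).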
Description (size and organization) of the CPU's primary cache. This cache
is also referred to as "L1 cache".
<SystemUnderTest><Node><Hardware><CPU><Cache><Primary>
Description (size and organization) of the CPU's secondary cache. This cache
is also referred to as "L2 cache".
<SystemUnderTest><Node><Hardware><CPU><Cache><Secondary>
Description (size and organization) of the CPU's tertiary, or "L3"
cache.
<SystemUnderTest><Node><Hardware><CPU><Cache><Tertiary>
Description (size and organization) of any other levels of cache
memory.
<SystemUnderTest><Node><Hardware><CPU><Cache><Other>
Additional technical characteristics to help identify the processor.
<SystemUnderTest><Node><Hardware><CPU><OtherCharacteristics>
Total memory capacity in GB available to the operating system for task processing. This number is typically slightly lower than the amount of configured physical memory. It is determined automatically by the Chauffeur Host. For multi-node runs, this is the average memory reported by each host.
This field is divided into 2 parts separated by a slash. The first part
describes the amount of installed physical memory in GB as used for this
test. The second number gives the maximum possible memory capacity in
GB if all memory slots are populated with the highest capacity DIMMs
available in the SUT.
<SystemUnderTest><Node><Hardware><Memory><SizeMB>
<SystemUnderTest><Node><Hardware><Memory><MaximumSizeMB>
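For example (illustrative values), "128 GB / 1536 GB" would indicate that 128 GB of physical memory were installed for the test, while fully populating all slots with the highest-capacity DIMMs would allow 1536 GB.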
This field is divided into 2 parts separated by a slash. The first part
describes the number of memory slots populated with a memory module as used
for this test. The second part shows the total number of available
memory slots in the SUT.
<SystemUnderTest><Node><Hardware><Memory><Dimms><Quantity>
<SystemUnderTest><Node><Hardware><Memory><AvailableSlots>
Detailed description of the system main memory technology, sufficient for
identifying the memory used in this test.
<SystemUnderTest><Node><Hardware><Memory><Dimms><Quantity>
<SystemUnderTest><Node><Hardware><Memory><Dimms><DimmSizeMB>
<SystemUnderTest><Node><Hardware><Memory><Dimms><Description>
There may be multiple instances of this field if different types of DIMMs have been used for this test, one separate field for each DIMM type.
Since the introduction of DDR4 memory there are two slightly different formats. The recommended formats are described here.
DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m; slots k, ... l populated
References:
For example:
8 x 16 GB 2Rx4 PC4-2133P-R; slots 1 - 8 populated
Where:
Note: The main string "gg ss pheRxff PC4v-wwwwaa-m" can be read directly from the label on the memory module itself for all vendors who use JEDEC standard labels.
DDR3 Format:
N x gg ss eRxff PChv-wwwwwm-aa, ECC CLa; slots k, ... l populated
Reference:
For example:
8 x 8 GB 2Rx4 PC3L-12800R-11, ECC CL10; slots 1 - 8 populated
Where:
Description of the memory operating mode. Examples of possible values are:
Standard, Mirror, Spare, Independent
<SystemUnderTest><Node><Hardware><Memory><OperatingMode>
This field is divided into 3 parts separated by slashes. The first part
shows the number of active power supplies, which might be lower than the
next number, if some power supplies are in standby mode and used in case of
failure only. The second part gives the number of bays populated with a
power supply. The third part describes the number of power supply bays
available in this node.
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Active>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Populated>
<SystemUnderTest><Node><Hardware><PowerSupplies><Bays>
All three parts can show "None" if the node is powered by a shared power supply.
The number and watts rating of this power supply unit (PSU) plus the supplier name and the order number to identify it.
In case of a "Parts Built" system (see
System Source) the manufacturer and the part
number of the PSU must be specified here.
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Active>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><RatingInWatts>
<SystemUnderTest><Node><Hardware><PowerSupplies><PowerSupply><Description>
There may be multiple lines in this field if different types of PSUs have been used for this test, one for each PSU type.
"N/A" if this node does not include a power supply.
Operating mode active for running this test. Must be one of the available
modes as described in the field Available Power
Supply Modes.
<SystemUnderTest><Node><Hardware><PowerSupplies><OperatingMode>
The available power supply unit (PSU) modes depend on the capabilities of
the tested server hardware and firmware.
<SystemUnderTest><Node><Hardware><PowerSupplies><AvailableModes>
Typical power supply modes are illustrated by the following examples:
For example: 2 + 1 Spare PSU
Two PSUs are active, and the third PSU is inactive in standby mode. System operation is guaranteed even if one of the three PSUs fails.
For example: 2 + 2 (2 AC sources)
Two PSUs are active, and the other two PSUs are inactive in standby mode. Two of the four PSUs are connected to each of two separate AC sources. This ensures that the system can continue operation even if a power line or a single PSU fails.
This field is divided into 2 parts separated by a slash. The first part
gives the number of disk drive bays actually populated with a disk drive for
this test. The second part shows the number of available drive bays in
the SUT, some of which may have been empty in the tested configuration.
<SystemUnderTest><Node><Hardware><DiskDrives><DiskGroup><Quantity>
<SystemUnderTest><Node><Hardware><DiskDrives><Bays>
Disk drives may be of different types in heterogeneous multi-disk configurations. In this case separate Disk Drive fields need to be specified for each type, describing its capabilities.
This field contains four rows. In the case of heterogeneous multi-disk configurations there may be several instances of this field.
This field contains three rows. In the case of heterogeneous configurations with different Network Interface Cards (NICs) there may be several instances of this field.
Specifies whether any management controller was configured in the SUT.
<SystemUnderTest><Node><Hardware><ManagementController><Quantity>
This field is divided into 2 parts separated by a slash. There may be multiple lines in this field if different types of expansion slots are available, one for each slot type.
The first part gives the number of expansion slots (PCI slots) actually
populated with a card for this test. The second part shows the number
of available expansion slots in the SUT; some of them may have been empty in
the tested configuration.
<SystemUnderTest><Node><Hardware><ExpansionSlots><ExpansionSlot><Populated>
<SystemUnderTest><Node><Hardware><ExpansionSlots><ExpansionSlot><Quantity>
Specifies whether any optical drives were configured in the SUT.
<SystemUnderTest><Node><Hardware><OpticalDrives>
The type of keyboard (USB, PS2, KVM or None) used.
<SystemUnderTest><Node><Hardware><Keyboard>
The type of mouse (USB, PS2, KVM or None) used.
<SystemUnderTest><Node><Hardware><Mouse>
Specifies if a monitor was used for the test and how it was connected
(directly or via KVM).
<SystemUnderTest><Node><Hardware><Monitor>
Number and description of any additional equipment added to improve
performance and required to achieve the reported scores.
<SystemUnderTest><Node><Hardware><Other><OtherHardware><Quantity>
<SystemUnderTest><Node><Hardware><Other><OtherHardware><Description>
This section describes in detail the various software components installed on the system under test, which are critical to achieve the reported result, and their configuration parameters.
This field shows whether power management features of the SUT were enabled
or disabled.
<SystemUnderTest><Node><Software><OperatingSystem><PowerManagement>
Operating system vendor and name.
<SystemUnderTest><Node><Software><OperatingSystem><Vendor>
<SystemUnderTest><Node><Software><OperatingSystem><Name>
Examples:
The operating system version. For Unix-based operating systems the detailed
kernel number must be given here. If patches that affect performance and/or
energy usage have been applied, they must be disclosed in the
System Under Test Notes.
<SystemUnderTest><Node><Software><OperatingSystem><Version>
The type of the filesystem used for the operating system and test directories.
<SystemUnderTest><Node><Software><OperatingSystem><FileSystem>
Any performance- and/or power-relevant software used and required to
reproduce the reported scores, including third-party libraries,
accelerators, etc.
<SystemUnderTest><Node><Software><Other><OtherSoftware>
A version number or string identifying the boot firmware installed on the
SUT.
<SystemUnderTest><Node><Firmware><Boot><Version>
A version number or string identifying the management firmware running on
the SUT or "None" if no management controller was installed.
<SystemUnderTest><Node><Firmware><Management><Version>
The company that makes the JVM software.
<SystemUnderTest><Node><JVM><Vendor>
Name and version of the JVM software product, as displayed by the
"java -version" or "java -fullversion"
commands.
<SystemUnderTest><Node><JVM><Version>
Examples:
This field shows the id of the client configuration element from the "config.xml" file specifying the set of JVM options used for running the tests. If there is no id attribute on this element, this field will be blank.
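A minimal sketch of the relevant fragment of "config.xml", assuming an id attribute is present (the attribute value is a placeholder):

    <suite>
      <client-configuration id="example-tuning">
        <!-- JVM options for the client JVMs; the id above is
             reported as the Client Configuration ID -->
      </client-configuration>
    </suite>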
Free-text description of the tuning applied to the SUT to obtain these
results. Additional hardware information not covered in the other fields
above can also be given here.
<SystemUnderTest><Node><Notes>
The following list shows examples of information that must be reported in this section:
The following section displays more details of the electrical and environmental data collected during the different target loads, including data not used to calculate the test result. For further explanation of the measured values, see the "SPECpower Methodology" document (SPECpower-Power_and_Performance_Methodology.pdf).
Description of the line standards for the main AC power as provided by the
local utility company and used to power the SUT. The standard voltage and
frequency are printed in this field followed by the number of phases and
wires used to connect the SUT to the AC power line.
<SystemUnderTest><LineStandard><Voltage>
<SystemUnderTest><LineStandard><Frequency>
<SystemUnderTest><LineStandard><Phase>
<SystemUnderTest><LineStandard><Wires>
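For example (illustrative values), a European installation might report 230 V, 50 Hz, 1 phase, 2 wires.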
Elevation of the location where the test was run. This information is
provided by the tester.
<SystemUnderTest><TestInformation><ElevationMeters>
Minimum temperature which was measured by the temperature sensor during all target load levels.
The details report file "results-details.html/.txt" is created together with the standard report file at the end of each successful run. In addition to the information in the standard report file described above, it includes more detailed performance and power result values for each individual worklet.
This report section is available in the Details Report File "results-details.html/.txt" only. It shows the details of the different measurement devices used for this test run.
There may be more than one measurement device used to measure power and temperature. Each of them will be described in a separate table.
The following table includes information about the power analyzer identified by "Name" and used to measure the electrical data.
Company which manufactures and/or sells the power analyzer.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><HardwareVendor>
The model name of the power analyzer type used for this test run.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><Model>
The serial number uniquely identifying the power analyzer used for this test
run.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><SerialNumber>
Which interface was used to connect the power analyzer to the PTDaemon host
system and to read the power data, e.g. RS-232 (serial port), USB, GPIB,
etc.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><Connectivity>
Input connection used to connect the load, if several options are available,
or "Default" if not.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><InputConnection>
Name of the national metrology institute, which specifies the calibration
standards for power analyzers, appropriate for the
Test Location reported in the result files.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationInstitute>
Calibration should be done according to the standard of the country where the test was performed or where the power analyzer was manufactured.
Examples:
Country         Metrology Institute
USA             NIST (National Institute of Standards and Technology)
Germany         PTB (Physikalisch-Technische Bundesanstalt)
Japan           AIST (National Institute of Advanced Industrial Science and Technology)
Taiwan (ROC)    NML (National Measurement Laboratory)
China           CNAS (China National Accreditation Service for Conformity Assessment)
A list of national metrology institutes for many countries is maintained by NIST at http://gsi.nist.gov/global/index.cfm.
Name of the organization that performed the power analyzer calibration
according to the standards defined by the national metrology institute.
This could be the analyzer manufacturer, a third party company, or an
organization within your own company.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><AccreditedBy>
A number or character string which uniquely identifies this meter
calibration event. May appear on the calibration certificate or on a sticker
applied to the power analyzer. The format of this number is specified by the
organization performing the calibration.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><CalibrationLabel>
The date (yyyy-mm-dd) the calibration certificate was issued, from the
calibration label or the calibration certificate.
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><DateOfCalibration>
The version of the power daemon program reading the analyzer data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
Free format textual description of the device or devices measured by this
power analyzer and the accompanying PTDaemon instance, e.g. "SUT Power
Supplies 1 and 2".
<SystemUnderTest><MeasurementDevices><PowerAnalyzer><SetupDescription>
The following table includes information about the temperature sensor used to measure the ambient temperature of the test environment.
Company which manufactures and/or sells the temperature sensor.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><HardwareVendor>
The manufacturer and model name of the temperature sensor used for this
test run.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><Model>
The version number of the operating system driver used to control and read
the temperature sensor.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><DriverVersion>
Which interface was used to read the temperature data from the sensor, e.g.
RS-232 (serial port), USB, etc.
<SystemUnderTest><MeasurementDevices><TemperatureSensor><Connectivity>
The version of the power daemon program reading the temperature sensor data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the test software.
Free format textual description of the device or devices measured and the
approximate location of this temperature sensor, e.g. "50 mm in front
of SUT main airflow intake".
<SystemUnderTest><MeasurementDevices><TemperatureSensor><SetupDescription>
This report section is available in the Details Report File "results-details.html/.txt" only. It is divided into separate segments, one per worklet, each starting with a title bar showing the workload and worklet names (<workload name>: <worklet name>). Each segment includes Performance and Power Data tables together with some details about the client JVMs for the corresponding worklet.
Total number of client JVMs started on the System Under Test for this
worklet.
<suite><client-configuration><clients><count>
The number of hardware threads each instance of the client JVM is affinitized to.
The complete command line for one of the client JVMs used to run this worklet, including the affinity specification, the Java classpath, the JVM tuning flags and any additional command-line parameters. The affinity mask, the "Client N of M" string and the "-jvmid N" parameter printed here are valid for one specific instance of the client JVM only. The other client JVMs use their own associated affinity masks, strings and parameters but share the rest of the command line.
The JVM tuning flags are specified by the client-configuration in the
"config.xml" configuration file.
<suite><client-configuration><clients><option-set>
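A minimal sketch of the corresponding "config.xml" fragment, using only the elements named above (the count value is a placeholder and the content of <option-set> is elided):

    <suite>
      <client-configuration>
        <clients>
          <count>4</count>            <!-- reported as Total Clients -->
          <option-set>
            ...                       <!-- JVM tuning flags -->
          </option-set>
        </clients>
      </client-configuration>
    </suite>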
This table displays detailed performance information for a worklet. The information is presented on separate rows per Phase, Interval and Transaction where applicable.
This column of the performance data table shows the phase names for the performance values presented in the following columns of the rows belonging to this phase.
Examples of phases are "Warmup", "Calibration" and "Measurement".
This column of the performance data table shows the interval names for the performance values presented in the following columns of the rows belonging to this interval.
Examples of intervals are "max" and "75%".
The "Actual Load" is calculated by dividing the interval "Score" by the "Calibration Result". This value is shown for the measurement intervals only. It can be compared against the target load level as defined by the "Interval" name.
The fields in this column show the worklet-specific score for each interval, which is calculated by dividing the sum of all "Transaction Count" values for this interval by the "Elapsed Measurement Time (s)". For the calibration intervals a field showing the "Calibration Result" score is added in this column.
In contrast to the worklet performance score given in the Result Table above, this score is not normalized.
Note that elapsed interval time is the wall clock time specified in the test configuration file "config.xml" -- not the Transaction Time from the report.
Some worklets may provide their own score calculations that include factors other than the transaction throughput.
The time spent during this interval executing transactions. This is the time used for calculating the "Score" for this interval.
This time typically does not exactly match the interval time specified in the test configuration file "config.xml".
The name of the transaction(s) related to the following "Transaction Count" and "Transaction Time" values.
Some worklets execute only one type of transaction whereas others may include several different transaction types. The details for each transaction type will appear on its own row in the table.
The number of successfully completed transactions defined in column "Transaction" during the interval given in column "Interval".
For worklets including multiple transaction types a "sum" field is added, showing the aggregated transaction count for all transactions of this worklet.
The total elapsed (wall clock) time spent executing this transaction during this interval. It only includes the actual execution time, and not input generation time. Since multiple transactions execute concurrently in different threads, this time may be longer than the length of the interval.
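For example (illustrative values), if ten client threads each spend 100 seconds executing transactions during a 120-second interval, the reported transaction time is 1,000 seconds.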
For worklets including multiple transaction types a "sum" field is added, showing the aggregated transaction time for all transactions of this worklet.
This table displays detailed power information for a worklet. The information is presented on separate rows per Phase and Interval. There will be separate power data tables for each power analyzer.
This column of the power data table gives the phase names for the power values presented in the following fields of the rows belonging to this phase.
The "Sum" field identifies the summary row for all measurement intervals (see Average Active Power (W)).
This column of the power data table gives the interval names for the power values presented in the following fields of the rows belonging to this interval.
The "Total" field identifies the summary row for all power analyzers (see Average Active Power (W)).
Name identifying the power analyzer whose power readings are displayed in this table. More details regarding this power analyzer are given in the Power Analyzer table(s) in the "Measurement Devices" section above.
Average voltage in Volts for each interval as reported by the PTDaemon instance connected to this power analyzer.
Average current in Amps for each interval as reported by the PTDaemon instance connected to this power analyzer.
The current range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer.
Please note that automatic current range setting by the analyzer is not recommended and may result in inaccurate data.
Average power factor for each interval as reported by the PTDaemon instance connected to this power analyzer.
Average active power in Watts for each interval as reported by the PTDaemon instance connected to this power analyzer.
In this column "Sum" and "Total" fields are added, showing the aggregated active power for all measurement intervals over all power analyzers.
The average uncertainty of the reported power readings for each test phase as calculated by PTDaemon based on the range settings.
For some analyzers, reading the range settings may not be supported. The uncertainty calculation may still be possible based on manual or command-line range settings. More details are given in the measurement setup guide (see SPECpower_Measurement_Setup_Guide.pdf).
The minimum ambient temperature for each interval as measured by the temperature sensor. All values are measured in ten second intervals, evaluated by the PTDaemon and reported to the test harness at the end of each interval.
Copyright © 2013-2017 Standard Performance Evaluation Corporation
All Rights Reserved