SPECpower_ssj2008 Result File Fields
Last updated: Nov 19 2014
To check for possible updates to this document, please see http://www.spec.org/power/docs/SPECpower_ssj2008-Result_File_Fields.html
ABSTRACT
This document describes the various fields in the different levels of result files making up the complete SPECpower_ssj2008 result disclosure.
- Main Report File (ssj.NNNN-main.html)
  including the overall metric, the system under test description, the controller system description, the measurement devices description and the aggregated measured values (throughput, power and temperature).
- Power/Temperature Details Report (ssj.NNNN-power.html) {>V1.01 only}
  including details for possibly multiple power analyzers and temperature sensors.
- Aggregate Performance Report (ssj.NNNN.details.html)
  including more detailed throughput information divided into separate sections per JVM, per node or per set.
  A set consists of one or more identically configured nodes. Currently only homogeneous configurations with one set are valid for a compliant benchmark run, although multiple heterogeneous sets are supported by the software.
  The aggregate performance report is not generated for single node or single JVM configurations.
- Set 'Set#' Performance Report (ssj.NNNN.details.Set#.html) {>V1.01 only}
  including more detailed throughput information split into separate sections per host.
  For benchmark versions >V1.01 a set performance report is generated only for heterogeneous configurations including multiple sets. Such configurations are not valid for publication but are supported by the benchmark software.
- Host 'Host#' Performance Report (ssj.NNNN.details.Set#.Host#.html) {>V1.01 only}
  including more detailed throughput information split into separate sections per JVM instance.
- JVM Instance 'Host#.NNN' Performance Report (ssj.NNNN.details.Set#.Host#-Host#.NNN.html)
  including the throughput data for each JVM instance separately.
Overview
Selecting one of the following will take you to the detailed table of contents for that section:
1. SPECpower_ssj2008 Benchmark
2. Main Report File
3. Top Bar
4. Benchmark Results Summary
5. Aggregate SUT Data
6. System Under Test
7. Shared Hardware
8. Set: 'N'
9. Boot Firmware Settings
10. Management Firmware Settings
11. System Under Test Notes
12. Controller System
13. Measurement Devices
14. Notes
15. Electrical and Environmental Data
16. Aggregate Performance Data
17. Power/Temperature Details Report
18. Power Details for Device 'N'
19. Aggregate Performance Report
20. Set 'N' Performance Report
21. Host 'N' Performance Report
22. JVM Instance 'N' Performance Report
Detailed Contents
1. SPECpower_ssj2008 Benchmark
1.1 The Workload
1.1.1 Server Side Java
1.1.2 JVM Director
1.2 The Control and Collect System
1.3 The Power and Temperature Daemon
1.4 Result Validation and Report Generation
1.5 References
2. Main Report File
3. Top bar
3.1 Headline
3.2 Test sponsor
3.3 SPEC license #
3.4 Hardware Availability
3.5 Tested by
3.6 Test Location
3.7 Software Availability
3.8 System Source
3.9 Test Date
3.10 Publication Date
3.11 Test Method
3.12 System Designation
3.13 Power Provisioning
3.14 INVALID
4. Benchmark Results Summary
4.1 Performance
4.1.1 Target Load
4.1.2 Actual Load
4.1.3 ssj_ops
4.1.4 Active Idle
4.2 Power
4.2.1 Average Active Power (W)
4.3 Performance to Power Ratio
4.4 ∑ssj_ops / ∑power =
4.5 Result Chart
5. Aggregate SUT Data
5.1 Set Id
5.2 # of Nodes
5.3 # of Chips
5.4 # of Cores
5.5 # of Threads
5.6 Total RAM (GB)
5.7 # of OS Images
5.8 # of JVM Instances
6. System Under Test
7. Shared Hardware
7.1 Shared Hardware
7.1.1 Cabinet/Housing/Enclosure
7.1.2 Form Factor
7.1.3 Power Supply Quantity and Rating (W)
7.1.4 Power Supply Details
7.1.5 Network Switch
7.1.6 Network Switch Details
7.1.7 KVM Switch
7.1.8 KVM Switch Details
7.1.9 Other Hardware
7.1.10 Comment
8. Set: 'N'
8.1 Set Identifier
8.2 Set Description
8.3 # of Identical Nodes
8.4 Comment
8.5 Hardware per Node
8.5.1 Hardware Vendor
8.5.2 Model
8.5.3 Form Factor
8.5.4 CPU Name
8.5.5 CPU Characteristics
8.5.6 CPU Frequency (MHz)
8.5.7 CPU(s) Enabled
8.5.8 Hardware Threads / Core
8.5.9 CPU(s) orderable
8.5.10 Primary Cache
8.5.11 Secondary Cache
8.5.12 Tertiary Cache
8.5.13 Other Cache
8.5.14 Memory Amount (GB)
8.5.15 # and size of DIMM(s)
8.5.16 Memory Details
8.5.17 Power Supply Quantity and Rating (W)
8.5.18 Power Supply Details
8.5.19 Disk Drive
8.5.20 Disk Controller
8.5.21 # and type of Network Interface Cards (NICs) Installed
8.5.22 NICs Enabled in Firmware / OS / Connected
8.5.23 Network Speed
8.5.24 Keyboard
8.5.25 Mouse
8.5.26 Monitor
8.5.27 Optical Drives
8.5.28 Other Hardware
8.6 Software per Node
8.6.1 Power Management
8.6.2 Operating System (OS)
8.6.3 OS Version
8.6.4 Filesystem
8.6.5 JVM Vendor
8.6.6 JVM Version
8.6.7 JVM Commandline Options
8.6.8 JVM Affinity
8.6.9 JVM Instances
8.6.10 JVM Initial Heap (MB)
8.6.11 JVM Maximum Heap (MB)
8.6.12 JVM Address Bits
8.6.13 Boot Firmware Version
8.6.14 Management Firmware Version
8.6.15 Workload Version
8.6.16 Director Location
8.6.17 Other Software
9. Boot Firmware Settings
10. Management Firmware Settings
11. System Under Test Notes
12. Controller System
12.1 Hardware
12.1.1 Hardware Vendor
12.1.2 Model
12.1.3 CPU Description
12.1.4 Memory amount (GB)
12.2 Software
12.2.1 Operating System (OS)
12.2.2 JVM Vendor
12.2.3 JVM Version
12.2.4 CCS Version
13. Measurement Devices
13.1 Power Analyzer
13.1.1 Hardware Vendor
13.1.2 Model
13.1.3 Serial Number
13.1.4 Connectivity
13.1.5 Input Connection
13.1.6 Current Range
13.1.7 Voltage Range
13.1.8 Metrology Institute
13.1.9 Accredited by
13.1.10 Calibration Label
13.1.11 Date of Calibration
13.1.12 PTDaemon Host System
13.1.13 PTDaemon Host OS
13.1.14 PTDaemon Version
13.1.15 Setup Description
13.2 Temperature Sensor
13.2.1 Hardware Vendor
13.2.2 Model
13.2.3 Driver Version
13.2.4 Connectivity
13.2.5 PTDaemon Host System
13.2.6 PTDaemon Host OS
13.2.7 Setup Description
14. Notes
15. Electrical and Environmental Data
15.1 Target Load
15.2 Average Voltage (V)
15.3 Average Current (A)
15.4 Average Power Factor
15.5 Average Active Power (W)
15.6 Line Standard
15.7 Average Power Factor
15.8 Minimum Ambient Temperature (°C)
15.9 Minimum Temperature (°C)
15.10 Elevation (m)
16. Aggregate Performance Data
16.1 Target Load
16.2 Actual Load
16.3 ssj_ops
16.3.1 Target
16.3.2 Actual
16.3.3 ssj_ops@calibrated=
16.4 ssj_ops Chart
17. Power/Temperature Details Report
17.1 Top bar
17.2 Benchmark Results Summary
17.3 Measurement Devices
17.4 Notes
18. Power Details for Device 'N'
18.1 Target Load
18.2 Average Voltage (V)
18.3 Voltage Range (V)
18.4 Average Current (A)
18.5 Current Range (A)
18.6 Average Power Factor
18.7 Average Active Power (W)
18.8 Power Measurement Uncertainty (%)
19. Aggregate Performance Report
19.1 Top bar
19.2 Benchmark Results Summary
19.3 Aggregate SUT Data
19.4 System Under Test
19.5 Shared Hardware
19.6 Set: 'N'
19.7 System Under Test Notes
19.8 Notes
19.9 Set Instance Summary
19.9.1 Set
19.9.2 ssj_ops@100%
19.9.3 ssj_ops Set Chart
19.10 Set 'N' Scores
20. Set 'N' Performance Report
20.1 Top bar
20.2 Benchmark Results Summary
20.3 Aggregate SUT Data
20.4 System Under Test
20.5 Shared Hardware
20.6 Set: 'N'
20.7 System Under Test Notes
20.8 Notes
20.9 Host Instance Summary
20.9.1 Host
20.9.2 ssj_ops@100%
20.9.3 ssj_ops Host Chart
20.10 Host 'N' Scores:
21. Host 'N' Performance Report
21.1 Top bar
21.2 Benchmark Results Summary
21.3 System Under Test
21.4 Set: 'N'
21.5 System Under Test Notes
21.6 Notes
21.7 JVM Instance Summary
21.7.1 JVM Instance
21.7.2 ssj_ops@100%
21.7.3 ssj_ops JVM Instance Chart
21.8 JVM 'N' Scores:
22. JVM Instance 'N' Performance Report
22.1 Top bar
22.2 Benchmark Results Summary
22.3 System Under Test
22.4 Set: 'N'
22.5 System Under Test Notes
22.6 Notes
22.7 Performance Details
22.7.1 Target Load
22.7.2 Actual Load
22.7.3 Transaction Type
22.7.4 Count
22.7.5 Total Heap (MB)
1. SPECpower_ssj2008 Benchmark
SPECpower_ssj2008 is the first generation SPEC benchmark for evaluating the power and performance of server class computers.
The benchmark suite consists of three separate software modules:
- Workload (SSJ)
- Power and Temperature Daemon (PTDaemon)
- Control and Collect System (CCS)
1.1 The Workload
1.1.1 Server Side Java Business Application Simulation
The workload is a Java program designed to exercise the CPU(s), caches, memory, the scalability of shared memory processors, JVM (Java Virtual Machine) implementations, JIT (Just In Time) compilers, garbage collection, threads, and certain aspects of the operating system of the SUT.
The workload architecture is a 3-tier system with emphasis on the middle tier. The tiers are as follows:
- Random input selection
- Business logic (fully implemented by SPECpower_ssj2008)
- Tables of objects, implemented by Java Collections (rather than a separate database)
1.1.2 JVM Director
The JVM Director is a separate and distinct mechanism from the actual workload itself (the three-tiered client-server environment), but runs concurrently with the JVM instance(s) of the workload. Like the workload, the JVM Director is also a Java application and, as such, runs as its own JVM instance.
The JVM Director can be run locally on the SUT, or it can be run remotely at the user's discretion (see Director Location). Whichever method is employed, the JVM Director and the workload JVM instance(s) will communicate via a TCP/IP socket connection.
1.2 The Control and Collect System
The Control and Collect System is a Java-based application that resides on the controller server. CCS is used to connect to three types of data sources via TCP/IP socket communication:
- the SSJ module (via the JVM Director) on the SUT
- one or more instances of PTDaemon each connected to a power analyzer
- one or more instances of PTDaemon each connected to a temperature sensor
1.3 The Power and Temperature Daemon
The Power and Temperature Daemon (PTDaemon) is a single executable program that communicates with a power analyzer or a temperature sensor via the server's native RS-232 port, USB port or additionally installed interface cards, e.g. GPIB. It reports the power consumption or temperature readings to CCS via a TCP/IP socket connection. It supports a variety of RS-232, GPIB and USB interface command sets for a variety of power analyzers and temperature sensors. PTDaemon is the only one of the three SPECpower_ssj2008 software modules that is not Java based. Although it can easily be set up and run on a server other than the controller server, in the simplest SPECpower_ssj2008 test bed implementation the PTDaemon will typically reside on the controller server.
1.4 Result Validation and Report Generation
At the beginning of each run, the benchmark parameters are checked for conformance to the run rules. Warnings are displayed for non-compliant properties and printed in the final report; however, the benchmark will still run to completion, producing a report that is not valid for publication.
At the end of a benchmark run the report generator module is called to generate the report files described here from the data given in the configuration files and collected during this benchmark run. Again, basic validity checks are performed to ensure that interval length, target load throughput, temperature, etc. are within the limits defined in the run rules. For more information see section "2.5.2 Validity Checks" in the Run and Reporting Rules document.
1.5 References
More detailed information can be found in the documents shown in the following table.
For the latest versions, please consult SPEC's website.
In this document all references to configurable parameters are printed in parentheses with red color using the names from the properties files, e.g. (config.test.sponsor) from "SPECpower_ssj_config.props".
The following properties files are delivered with the benchmark kit:
- The CCS properties file includes descriptions of the controller system (ccs.config......) and basic information on the power analyzers and temperature sensors (ptd.pwr1.config......). The workload to be run is also selected here and additional parameters are defined (ccs.wkld......). Currently only one workload (SSJ) is supported.
- This file includes the global control parameters for the SSJ workload which can be changed by the user without invalidating the benchmark results (input.calibration.interval_count).
- This file is a superset of the default "SPECpower_ssj.props" configuration file and is needed for experimental use of the benchmark only. Besides the control parameters allowed to be changed (input.calibration.interval_count) it also includes options for the fixed input parameters which cause an invalid result if changed (input.load_level.count).
- The characteristics of the test run (config.test......) are specified in this file together with a description of the shared hardware in case of a multi node run (config.shared......).
- Detailed hardware (config.hw.....) and software (config.sw.....) descriptions of the system under test are stated here. Each instance of this file describes a single host system or several identically configured systems in a set. A heterogeneous configuration with differently configured systems requires one properties file per set. This file resides on the system running the JVM Director.
Currently only single set homogeneous configurations are allowed for a valid benchmark run.
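As an illustration only, the following minimal Java sketch (not part of the benchmark kit) shows how one of the key/value parameters referenced above, e.g. config.test.sponsor from "SPECpower_ssj_config.props", could be read with the standard java.util.Properties class.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigPropsExample {
    public static void main(String[] args) throws IOException {
        // Load the descriptive configuration properties file (hypothetical standalone example).
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("SPECpower_ssj_config.props")) {
            config.load(in);
        }
        // config.test.sponsor is one of the descriptive parameters referenced in this document.
        String sponsor = config.getProperty("config.test.sponsor", "<not set>");
        System.out.println("Test sponsor: " + sponsor);
    }
}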
2. Main Report File
This section gives an overview of the information and result fields in the main report file. Additional information is shown in the Power/Temperature Details Report, including detailed power information for potentially multiple power analyzers, and the Aggregate Performance Report, which is generated for multiple node tests only. The Set Performance Report represents the next level of detail and is created only if multiple heterogeneous sets of nodes are used, which is currently not allowed for valid results. This information is further extended in the Host Performance Report and the JVM Instance Performance Report, which are generated for each host and each JVM instance and include specific configuration and performance details.
3. Top bar
The top bar shows the measured SPECpower_ssj2008 result and gives some general information regarding this test run.
3.1 Headline
The headline of the performance report includes one field displaying the hardware vendor (config.hw.vendor) and the name (config.hw.model) of the system under test. If this report is for a historical system the declaration "(Historical)" must be added to the model name. In a second field the overall SPECpower_ssj2008 result achieved in this test (overall ssj_ops/watt) is printed, possibly prefixed by an "Invalid" indicator if the current result does not pass the validity checks implemented in the SPECpower_ssj report generation software. More detailed information about the result metric is presented in section 3.1 of the SPECpower_ssj2008 Run and Reporting Rules.
3.2 Test sponsor
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder (config.test.sponsor).
3.3 SPEC license #
The SPEC license number of the organization or individual that ran the result (config.test.spec_license).
3.4 Hardware Availability
The date when all the hardware necessary to run the result is generally available (config.hw.available). For example, if the CPU is available in Aug-2007, but the memory is not available until Oct-2007, then the hardware availability date is Oct-2007 (unless some other component pushes it out farther).
3.5 Tested by
The name of the organization or individual that ran the test and submitted the result (config.test.tested_by).
3.6 Test Location
The name of the city, state and country where the test took place. If there are installations in multiple geographic locations, they must also be listed in this field (config.test.location).
3.7 Software Availability
The date when all the software necessary to run the result is generally available (config.sw.available). For example, if the operating system is available in Aug-2007, but the JVM is not available until Oct-2007, then the software availability date is Oct-2007 (unless some other component pushes it out farther).
3.8 System Source
Single Supplier or Parts Built (config.hw.system_source)
- Single Supplier
  "Single Supplier" is defined as a SUT configuration where all hardware is provided by a single supplier.
  In case of "Single Supplier" systems all part description fields in the reports, which require detailed information to identify the parts, should include the system vendor name and the system vendor order number for the part.
- Parts Built
  "Parts Built" is defined as a SUT configuration where hardware is provided by multiple suppliers. A "Parts Built" system disclosure must include enough detail to procure and reproduce all aspects of the submission, including performance and power.
  In case of "Parts Built" systems all part description fields in the reports, which require detailed information to identify the parts, must include the part's manufacturer name and the manufacturer's part number to describe the devices.
3.9 Test Date
The date when the test was run. This value is automatically supplied by the SPECpower_ssj software; the time reported by the system under test is recorded in the raw result file.
3.10 Publication Date
The date when this report will be published after finishing the review. This date is automatically filled in with the correct value by the submission tool provided by SPEC. By default this field is set to "Unpublished" by the software generating the report.
3.11 Test Method
Possible values for this property (config.test.method) are:
- Single Node
  "Single Node" is defined as a SUT configuration where the test is performed on one server running a single OS image (see section 2.11 of the SPECpower_ssj2008 Run and Reporting Rules).
- Homogeneous Multi Node
  "Homogeneous Multi Node" is defined as a SUT configuration where the test is running on multiple, identically configured ("homogeneous") servers sharing some energy efficiency feature (see section 2.11 of the SPECpower_ssj2008 Run and Reporting Rules).
- Heterogeneous Multi Node (Not Valid for Publication)
  "Heterogeneous Multi Node" is defined as a SUT configuration composed of multiple, differently configured ("heterogeneous") servers sharing some energy efficiency feature (see section 2.11 of the SPECpower_ssj2008 Run and Reporting Rules).
3.12 System Designation
Possible values for this property (config.hw.system_designation) are:
- Server
  "Server" is defined as a computer system that is marketed to support multiple tasks from multiple users, simultaneously (see section 3.3.2 of the SPECpower_ssj2008 Run and Reporting Rules).
- Personal System
  "Personal System" is a computer system that is primarily marketed for use by a single individual, even though multiple tasks may execute simultaneously (see section 3.3.2 of the SPECpower_ssj2008 Run and Reporting Rules).
  For a "Personal System" the vendor of the display device (monitor) and its model number must be added here, as this device has to be included in the power measurement (see section 2.11.2 paragraph 4 of the SPECpower_ssj2008 Run and Reporting Rules).
3.13 Power Provisioning
Possible values for this property (config.hw.power_provisioning) are:
- Line-powered
  A computer system which is powered by an external AC power source (see section 2.8 of the SPECpower_ssj2008 Run and Reporting Rules).
- Battery-powered
  A computer system designed to be able to run normal operations without an external source of power (see section 2.8 of the SPECpower_ssj2008 Run and Reporting Rules).
3.14 INVALID
Any inconsistencies with the run and reporting rules that cause a failure of one of the validity checks implemented in the report generation software are reported here, and in this case all pages of the report file are stamped with an "Invalid" watermark. The printed text shows more details about which of the run rules was not met and the reason why (see section 2.5.2 of the SPECpower_ssj2008 Run and Reporting Rules).
4. Benchmark Results Summary
This section describes the result details for all measurement intervals in a table and as a graph.
4.1 Performance
The first three columns of the results table show the measured throughput and the actual percentage of calibrated throughput compared to the target percentage.
4.1.1 Target Load
The different target load levels derived from the calibrated throughput, starting with 100% of the calibrated throughput and decreasing to "Active Idle" = 0% or no throughput. The benchmark software schedules the required number of requests to actually achieve the intended throughput levels during each of the measurement intervals, each lasting 240 seconds.
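As a purely illustrative sketch (the calibrated value below is made up, and the standard graduated load levels in 10% steps down to Active Idle are assumed), the target throughput per level can be derived from the calibrated maximum as follows:

public class TargetLoadExample {
    public static void main(String[] args) {
        // Made-up calibrated throughput; real values come from the calibration intervals.
        double calibratedSsjOps = 1_000_000.0;
        // Assumed graduated load levels in 10% steps down to Active Idle (0%).
        int[] targetPercent = {100, 90, 80, 70, 60, 50, 40, 30, 20, 10, 0};
        for (int pct : targetPercent) {
            double targetOps = calibratedSsjOps * pct / 100.0;
            System.out.printf("Target load %3d%% -> %,10.0f ssj_ops scheduled over a 240 s interval%n",
                    pct, targetOps);
        }
    }
}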
4.1.2 Actual Load
The load levels actually achieved during the different phases of the benchmark as a percentage of the calibrated throughput. The percentages must match the target load of each phase with less than 2% deviation (positive or negative).
4.1.3 ssj_ops
The number of operations finished during this measurement interval divided by the number of seconds defined for this interval, showing the throughput (workload operations per second) for this period.
4.1.4 Active Idle
The last measurement interval, running without any transactions scheduled by the workload software. No throughput is reported for this interval; only the power consumption is measured and displayed.
4.2 Power
This column of the results summary table shows the power consumption for the different target loads.
4.2.1 Average Active Power (W)
Average active power measured by the power analyzer(s) and accumulated by the PTDaemon (Power and Temperature Daemon) for this measurement interval, displayed as watts (W).
4.3 Performance to Power Ratio
The average throughput divided by the average power consumption for each of the measurement intervals.
4.4 ∑ssj_ops / ∑power =
The overall score of the SPECpower_ssj2008 benchmark calculated from the sum of the performance measured at each target load level (in ssj_ops) divided by the sum of the average power (in W) at each target load including active idle.
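The calculation can be illustrated with a small sketch; the throughput and power values below are made-up examples, not data from a real run:

public class OverallMetricExample {
    public static void main(String[] args) {
        // Made-up per-interval results; the last entry of each array is the Active Idle interval.
        double[] ssjOps   = {1_000_000, 900_000, 800_000, 700_000, 600_000,
                             500_000, 400_000, 300_000, 200_000, 100_000, 0};
        double[] avgWatts = {330, 310, 290, 270, 250, 230, 210, 190, 170, 150, 120};

        double sumOps = 0.0;
        double sumWatts = 0.0;
        for (int i = 0; i < ssjOps.length; i++) {
            sumOps += ssjOps[i];
            sumWatts += avgWatts[i];
        }
        // Overall metric: sum of throughput divided by sum of average active power.
        System.out.printf("overall ssj_ops/watt = %.1f%n", sumOps / sumWatts);
    }
}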
4.5 Result Chart
The result chart graphically displays the results reported in the summary table in one diagram. The red bars show the performance to power ratio (throughput / W) of each target load given on the y-axis graphically (corresponding to the upper x-axis) and numerically as a label in the bar. Longer bars / higher numbers are better. By definition there is no throughput for the "Active Idle" level and so the ratio is always 0. The bold blue line with the markers corresponds to the lower x-axis and shows the average power consumption for each target load given on the y-axis. Lower numbers are better. The thin, vertical, straight line corresponds to the upper x-axis and shows the overall ssj_ops per watt result of the benchmark. A higher number is better.
5. Aggregate SUT Data {>V1.01 only}
In this section aggregated values for several system configuration parameters are reported. The section will be displayed only if more than one node is configured.
5.1 Set Id
A user defined identifier (see (SETID) in runssj.bat/runssj.sh) used to identify the descriptive configuration properties that will be used for the system under test. For example, with a (SETID) of "sut", the descriptive configuration properties will be read from the file "SPECpower_ssj_config_sut.props" from the Director system.
5.2 # of Nodes
The number of nodes per set and the total number of all nodes used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.3 # of Chips
The number of processor chips per set and the total number of all chips used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.4 # of Cores
The number of processor cores per set and the total number of all cores used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.5 # of Threads
The number of processor threads per set and the total number of all threads used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.6 Total RAM (GB)
The amount of memory (GB) per set and the total memory size for all systems used to run the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.7 # of OS Images
The number of operating system images per set and the total number of all OS images used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
5.8 # of JVM Instances
The number of Java Virtual Machine instances per set and the total number of all JVM instances used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
6. System Under Test
The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported SPECpower benchmark with the level of detail required to reproduce this result.
7. Shared Hardware {>V1.01 only}
In this section hardware components common to all nodes will be described. The section will be displayed only if more than one node is configured.
7.1 Shared Hardware
A table including the description of the shared hardware components.
7.1.1 Cabinet/Housing/Enclosure
The model name identifying the enclosure housing the tested nodes (config.shared.enclosure).
7.1.2 Form Factor
The full SUT form factor (including all nodes and any shared hardware) (config.shared.form_factor).
For rack-mounted systems, specify the number of rack units. For other types of enclosures, specify "Tower" or "Other".
7.1.3 Power Supply Quantity and Rating (W)
The number of power supplies that are installed in the tested configuration (config.shared.psu.installed) and the power rating for each power supply (config.shared.psu.rating). Both values are set to 0 if there are no shared power supplies.
7.1.4 Power Supply Details
The supplier name of the PSU and the order number to identify it.
(config.shared.psu.description)
"N/A" if there are no shared power supplies.
In case of a "Parts Built" system (see: System Source) the manufacturer name
and the part number of the PSU must be specified here.
7.1.5 Network Switch
The number of network switches used to run the benchmark (config.shared.network.switch). "N/A" if there is no network switch.
7.1.6 Network Switch Details
The manufacturer of the network switch and the model number to identify it (config.shared.network.switch.description). "N/A" if there is no network switch.
7.1.7 KVM Switch
The number of KVM switches used to run the benchmark (config.shared.kvm). "N/A" if there is no KVM switch.
7.1.8 KVM Switch Details
The manufacturer of the KVM switch and the model number to identify it. (config.shared.kvm.description) "N/A" if there is no KVM switch.
7.1.9 Other Hardware
Any additional shared equipment added to improve performance and required to achieve the reported scores (config.shared.other).
7.1.10 Comment
Description of additional performance or power relevant components not covered in the fields above (config.shared.comment)
8. Set: 'N'
Detailed hardware and software description of the identically configured nodes which constitute this set.
8.1 Set Identifier
A unique identifier for this set of nodes. This number or string is read by the benchmark program from the "-setid" commandline parameter used to start the SSJ code of the benchmark and reported here. (see (SETID) in runssj.bat/runssj.sh)
8.2 Set Description
A textual description of this set of nodes, e.g. the model name of a blade server (config.set.description).
8.3 # of Identical Nodes
The number of identically configured nodes which constitute this set. This number is read by the benchmark program from the "-numHosts" commandline parameter used to start the director code of the benchmark and reported here. (see (NUM_HOSTS) in rundirector.bat/rundirector.sh)
8.4 Comment
Additional comments related to this set of nodes (config.set.comment).
8.5 Hardware per Node
This section describes in detail the different hardware components of the system under test which are important to achieve the reported result.
8.5.1 Hardware Vendor
Company which sells the hardware (config.hw.vendor)
8.5.2 Model
The model name identifying the system under test (config.hw.model)
8.5.3 Form Factor
The form factor for this system (config.hw.form_factor).
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".
8.5.4 CPU Name
A manufacturer-determined processor formal name. (config.hw.cpu)
8.5.5 CPU Characteristics
Technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc (config.hw.cpu.characteristics).
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, this field should also list the feature and the maximum frequency it enables on that CPU (e.g.: "Intel Turbo Boost Technology up to 3.46GHz").
If this CPU clock feature is present but is disabled, no additional information is required here.
8.5.6 CPU Frequency (MHz)
The nominal (marked) clock frequency of the CPU, expressed in megahertz.
(config.hw.cpu.mhz).
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, then the CPU Characteristics field must list additional information, at least the maximum frequency and the use of this feature.
Furthermore if the enabled/disabled status of this feature is changed from the default setting this must be documented in the System Under Test Notes field.
8.5.7 CPU(s) enabled
The CPUs that were enabled and active during the benchmark run, displayed as the number of cores (config.hw.cpu.cores), the number of chips (config.hw.cpu.chips) and the number of cores per chip (config.hw.cpu.cores_per_chip).
8.5.8 Hardware Threads / Core
The number of hardware threads available per core (config.hw.cpu.threads_per_core).
8.5.9 CPU(s) orderable
The number of CPUs that can be ordered in a system of the type being tested (config.hw.cpu.orderable).
8.5.10 Primary Cache
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache" (config.hw.cache.primary).
8.5.11 Secondary Cache
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache" (config.hw.cache.secondary).
8.5.12 Tertiary Cache
Description (size and organization) of the CPU's tertiary, or "L3" cache (config.hw.cache.tertiary).
8.5.13 Other Cache
Description (size and organization) of any other levels of cache memory (config.hw.cache.other).
8.5.14 Memory Amount (GB)
Total size of memory in the SUT in GB (config.hw.memory.gb).
8.5.15 # and size of DIMM(s)
Number and size of memory modules used for testing (config.hw.memory.dimms).
8.5.16 Memory Details
Detailed description of the system main memory technology, sufficient for identifying the memory used in this test.
(config.hw.memory.description).
Since the introduction of DDR4 memory there are two slightly different formats.
The recommended formats are described here; a small example of decoding such a label string programmatically is given at the end of this field description.
DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m; slots k, ... l populated
References:
- "JEDEC Standard No. 21C" http://www.jedec.org/standards-documents/docs/module4_20_26
- DDR4 SDRAM http://www.jedec.org/standards-documents/docs/jesd79-4
Example: 8 x 16 GB 2Rx4 PC4-2133P-R; slots 1 - 8 populated
Where:
- N = number of DIMMs used; x denotes the multiplication specifier
- gg ss = size of each DIMM, including unit specifier: 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB etc.
- pheR = p = number of ranks; he = encoding for certain packaging, often blank
  1R = 1 rank of DDR SDRAM installed
  2R = 2 ranks
  4R = 4 ranks
- xff = Device organization (bit width) of DDR SDRAMs used on this assembly
  x4 = x4 organization (4 DQ lines per SDRAM)
  x8 = x8 organization
  x16 = x16 organization
- PCy = Memory module technology standard
  PC4 = DDR4 SDRAM
- v = Module component supply voltage values: e.g. <blank> for 1.2V, L for Low Voltage (currently not defined)
- wwww = module speed in Mb/s/data pin: e.g. 1866, 2133, 2400
- aa = speed grade, e.g.
  J = 10-10-10
  K = 11-11-11
  L = 12-12-12
  M = 13-13-13
  N = 14-14-14
  P = 15-15-15
  R = 16-16-16
  U = 18-18-18
- m = Module Type
  E = Unbuffered DIMM ("UDIMM"), with ECC (x72 bit module data bus)
  L = Load Reduced DIMM ("LRDIMM")
  R = Registered DIMM ("RDIMM")
  S = Small Outline DIMM ("SO-DIMM")
  U = Unbuffered DIMM ("UDIMM"), no ECC (x64 bit module data bus)
  T = Unbuffered 72-bit small outline DIMM ("72b-SO-DIMM")
- The main string "gg ss pheRxff PC4v-wwwwaa-m" can be read directly from the label on the memory module itself for all vendors who use JEDEC standard labels.
DDR3 Format:
N x gg ss eRxff PChv-wwwwwm-aa, ECC CLa; slots k, ... l populated
Reference:
- "DDR3 DIMM Label", PRN09-NM4, October 2009 http://www.jedec.org/standards-documents/docs/pr-n09-nm1
Example: 8 x 8 GB 2Rx4 PC3L-12800R-11, ECC CL10; slots 1 - 8 populated
Where:
- N = number of DIMMs used; x denotes the multiplication specifier
- gg ss = size of each DIMM, including unit specifier: 256 MB, 512 MB, 1 GB, 2 GB, 4 GB, 8 GB etc.
- eR = Number of ranks of memory installed
  1R = 1 rank of DDR SDRAM installed
  2R = 2 ranks
  4R = 4 ranks
- xff = Device organization (bit width) of DDR SDRAMs used on this assembly
  x4 = x4 organization (4 DQ lines per SDRAM)
  x8 = x8 organization
  x16 = x16 organization
- PCy = Memory module technology standard
  PC2 = DDR2 SDRAM
  PC3 = DDR3 SDRAM
- v = Module component supply voltage values: e.g. <blank> for 1.5V, L for 1.35V
- wwwww = Module bandwidth in MB/s
  8500 = 8.53 GB/s (corresponds to 1066 MHz)
  10600 = 10.66 GB/s (corresponds to 1333 MHz)
  12800 = 12.80 GB/s (corresponds to 1600 MHz)
  14900 = 14.90 GB/s (corresponds to 1866 MHz)
- m = Module Type
  E = Unbuffered DIMM ("UDIMM"), with ECC (x72 bit module data bus)
  F = Fully Buffered DIMM ("FB-DIMM")
  M = Micro-DIMM
  N = Mini-Registered DIMM ("Mini-RDIMM"), no address/command parity function
  P = Registered DIMM ("RDIMM"), with address/command parity function
  R = Registered DIMM, no address/command parity function
  S = Small Outline DIMM ("SO-DIMM")
  U = Unbuffered DIMM ("UDIMM"), no ECC (x64 bit module data bus)
- aa = DDR SDRAM CAS Latency in clocks at maximum operating frequency
- ECC = Additional specification for modules which have ECC (Error Correction Code) capabilities
- CLa = CAS latency if the tester has changed the latency to something other than the default
- slots k, ... l = Numbers denoting the mother board memory slots populated with the memory modules described before
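The label formats above can be decoded mechanically. As an illustration only (this code is not part of the benchmark kit), the following Java sketch extracts the main fields from a DDR4-style description such as the example above; the regular expression covers only the common cases shown here.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DimmLabelExample {
    // Covers only common DDR4 labels of the form "N x gg ss pRxff PC4v-wwwwaa-m".
    private static final Pattern DDR4 = Pattern.compile(
            "(\\d+) x (\\d+ [MG]B) (\\d+)Rx(\\d+) PC4[A-Z]?-(\\d+)([A-Z])-([A-Z])");

    public static void main(String[] args) {
        String label = "8 x 16 GB 2Rx4 PC4-2133P-R";
        Matcher m = DDR4.matcher(label);
        if (m.find()) {
            System.out.println("DIMM count:   " + m.group(1)); // N
            System.out.println("DIMM size:    " + m.group(2)); // gg ss
            System.out.println("Ranks:        " + m.group(3)); // p
            System.out.println("Organization: x" + m.group(4)); // ff
            System.out.println("Speed (Mb/s): " + m.group(5)); // wwww
            System.out.println("Speed grade:  " + m.group(6)); // aa
            System.out.println("Module type:  " + m.group(7)); // m
        }
    }
}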
8.5.17 Power Supply Quantity and Rating (W)
The number of power supplies that are installed in this node (config.hw.psu.installed) and the power rating for each power supply (config.hw.psu.rating). Both entries should show "None" if the node is powered by a shared power supply.
8.5.18 Power Supply Details
The supplier of the PSU and the order number to identify it. (config.hw.psu.description).
"Shared" if this node is powered by a shared power supply and does not include its own.
In case of a "Parts Built" system (see: System Source) the manufacturer
and the part number of the PSU must be specified here.
8.5.19 Disk Drive
A description of the disk drive(s) (count, model, size, type, rotational speed and RAID level if any) used to boot the operating system and to hold the benchmark software and data during the run (config.hw.disk).
8.5.20 Disk Controller
The supplier name and order number of the controller used to drive the disk(s) (config.hw.disk.controller).
In case of a "Parts Built" system (see: System Source) the manufacturer name
and the part number of the disk controller must be specified here.
8.5.21 # and type of Network Interface Cards (NICs) Installed
A description of the network controller(s) (number, supplier name, order number, ports and speed) installed on the SUT (config.hw.network.controller).
In case of a "Parts Built" system (see: System Source) the manufacturer name
and the part number of the NIC must be specified instead of supplier name and order number.
8.5.22 NICs Enabled in Firmware / OS / Connected
The number of NICs (ports) enabled in the Firmware, in the OS and actually connected during the test (config.hw.network.controller.enabled.firmware, config.hw.network.controller.enabled.os, config.hw.network.controller.connected).
8.5.23 Network Speed
The network speed actually used on the configured NICs during the test (config.hw.network.speed). A minimum speed of 1 Gbit/sec is required for a valid benchmark run.
8.5.24 Keyboard
The type of keyboard (USB, PS2, KVM or None) used (config.hw.keyboard).
8.5.25 Mouse
The type of mouse (USB, PS2, KVM or None) used (config.hw.mouse).
8.5.26 Monitor
Specifies if a monitor was used for the test and how it was connected (directly or via KVM) (config.hw.monitor).
8.5.27 Optical Drives
Specifies whether any optical drives were configured in the SUT (config.hw.optical).
8.5.28 Other Hardware
Any additional equipment added to improve performance and required to achieve the reported scores (config.hw.other).
For "Personal Systems" (see System Designation)
the vendor of the display device (monitor) and its model number must be added here, as this device has to be be included in the power measurement, (see section 2.11.2 paragraph 4 of the SPECpower_ssj2008 Run and Reporting Rules).
8.6 Software per Node
This section describes in detail the various software components installed on the system under test, which are important to achieve the reported result, and their configuration parameters.
8.6.1 Power Management
This field shows whether power management features of the SUT were enabled or disabled (config.sw.power_management).
8.6.2 Operating System
The operating system name (config.sw.os).
8.6.3 OS Version
The operating system version. If there are patches applied that affect performance, they must be disclosed in the System Under Test Notes (config.sw.os.version).
8.6.4 File System
The type of the filesystem used to contain the run directories (config.sw.filesystem).
8.6.5 JVM Vendor
The company that makes the JVM software. (config.sw.jvm.vendor)
8.6.6 JVM Version
Name and version of the JVM software product. (config.sw.jvm.version)
8.6.7 JVM Commandline Options
JVM command-line options used when invoking the benchmark. (config.sw.jvm.options)
8.6.8 JVM Affinity
Commands used to configure affinity for each JVM (config.sw.jvm.affinity)
Examples:
taskset -c [0,2;1,3]
start /affinity [0x3,0xC]
8.6.9 JVM Instances
The quantity of JVM instances running. This number is detected automatically by the benchmark program and reported here.
8.6.10 JVM Initial Heap Memory (MB)
The number of megabytes initially used by the JVM heap. "Unlimited" or "dynamic" are allowable values for JVMs that adjust automatically (config.sw.jvm.heap.initial).
8.6.11 JVM Maximum Heap Memory (MB)
The maximum number of megabytes that can be used by the JVM heap. "Unlimited" or "dynamic" are allowable values for JVMs that adjust automatically (config.sw.jvm.heap.max).
8.6.12 JVM Address Bits
The basic pointer size (32 or 64 bit) used by the installed JVM (config.sw.jvm.bitness).
8.6.13 Boot Firmware Version
A version number or string identifying the boot firmware installed on the SUT. (config.sw.boot_firmware.version).
8.6.14 Management Firmware Version
A version number or string identifying the management firmware running on the SUT or "None" if no management controller was installed. (config.sw.mgmt_firmware.version).
8.6.15 Workload Version
The name and revision number of the workload program used to produce this result. This information is provided automatically by the benchmark software.
8.6.16 Director Location
Identifies the system which hosts the director controlling the different JVM instances (SUT, Controller or other). Locations other than SUT or Controller require additional description under Notes (config.director.location).
8.6.17 Other Software
Any performance-relevant software used and required to reproduce the reported scores, including third-party libraries, accelerators, etc. (config.sw.other)
9. Boot Firmware Settings
Free text description of what sort of tuning one has to do to the boot firmware (BIOS) to get these results, e.g. configuration settings changed from the default (config.sw.boot_firmware.settings).
10. Management Firmware Settings
Free text description of what sort of tuning one has to do to the management firmware to get these results, e.g. configuration settings changed from the default, or "None" if no management controller was installed (config.sw.mgmt_firmware.settings).
11. System Under Test Notes
Free text description of what sort of tuning one has to do to either
the OS or the JVM to get these results. Also additional hardware information not covered in the other fields above can be given here.
The following list shows examples of information that must be reported in this section:
- System tuning parameters other than default.
- Processor tuning parameters other than default.
- Process tuning parameters other than default.
- Changes to the background load, if any.
- Critical customer-identifiable firmware or option versions such as network and disk controllers.
- Definitions of tuning parameters must be included.
- Part numbers or sufficient information that would allow the end user to order the SUT configuration.
- Identification of any components used that are supported but that are no longer orderable by ordinary customers.
12. Controller System
The next section of the report file describes the hardware and the software of the system running the controller program.
12.1 Hardware
This part of the report contains a brief overview of the hardware used to run the SPECpower Control and Collection System (CCS).
12.1.1 Hardware Vendor
Company which sells/manufactures the controller hardware (ccs.config.hw.vendor)
12.1.2 Model
The model name identifying the system running the controller software (ccs.config.hw.model)
12.1.3 CPU Description
The name of the processor installed in the controller system (ccs.config.hw.cpu) and some technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc (ccs.config.hw.cpu.characteristics)
12.1.4 Memory amount (GB)
Total size of memory in the controller system in GB (ccs.config.hw.memory.gb)
12.2 Software
Main software components installed on the controller system.
12.2.1 Operating System (OS)
The name and the version of the operating system installed on the controller system (ccs.config.sw.os)
12.2.2 JVM Vendor
The company which makes the JVM software (ccs.config.sw.jvm.vendor)
12.2.3 JVM Version
Name and version of the JVM software product. (ccs.config.sw.jvm.version)
12.2.4 CCS Version
The version of the controller program used to produce this result. This information is provided automatically by the benchmark software.
13. Measurement Devices
This section of the report shows the details of the different measurement devices used for this benchmark run.
Starting with version 1.10 of the benchmark there may be more than one measurement device used to measure power and temperature.
13.1 Power Analyzer
The following table includes information about the power analyzer used to measure the electrical data.
13.1.1 Hardware Vendor
Company which manufactures and/or sells the power analyzer (ptd.pwrN.config.analyzer.vendor)
13.1.2 Model
The model name of the power analyzer type used for this benchmark run (ptd.pwrN.config.analyzer.model)
13.1.3 Serial Number
The serial number uniquely identifying the power analyzer used for this benchmark run (ptd.pwrN.config.analyzer.serial)
13.1.4 Connectivity
Which interface was used to connect the power analyzer to the PTDaemon host system and to read the power data, e.g. RS-232 (serial port), USB, GPIB etc. (ptd.pwrN.config.analyzer.connectivity)
13.1.5 Input Connection
Input connection used to connect the load, if several options are available, or "Default" if not (ptd.pwrN.config.analyzer.input_connection).
13.1.6 Current Range {<V1.10 only}
Value of current range setting to which the power analyzer has been configured, or "Auto" if none (ptd.pwrN.config.analyzer.current_range).
13.1.7 Voltage Range {<V1.10 only}
Value of voltage range setting to which the power analyzer has been configured, or "Auto" if none (ptd.pwrN.config.analyzer.voltage_range).
13.1.8 Metrology Institute
Name of the national metrology institute, which specifies the calibration standards for power analyzers, appropriate for the Test Location reported in the FDR (ptd.pwrN.config.calibration.institute).
Calibration should be done according to the standard of the country where the test was performed or where the power analyzer was manufactured.
Examples from accepted result reports:
Country | Metrology Institute
--- | ---
USA | NIST (National Institute of Standards and Technology)
Germany | PTB (Physikalisch-Technische Bundesanstalt)
Japan | AIST (National Institute of Advanced Industrial Science and Technology)
Taiwan (ROC) | NML (National Measurement Laboratory)
China | CNAS (China National Accreditation Service for Conformity Assessment)
A list of national metrology institutes for many countries is maintained by NIST at http://gsi.nist.gov/global/index.cfm.
13.1.9 Accredited by
Name of the organization that performed the power analyzer calibration according to the standards defined by the national metrology institute. Could be the analyzer manufacturer, a third party company, or an organization within your own company (ptd.pwrN.config.calibration.accredited_by).
13.1.10 Calibration Label
A number or character string which uniquely identifies this meter calibration event. May appear on the calibration certificate or on a sticker applied to the power analyzer. The format of this number is specified by the metrology institute (ptd.pwrN.config.calibration.label).
13.1.11 Date of Calibration
The date (DD-MMM-YYYY) the calibration certificate was issued, from the calibration label or the calibration certificate (ptd.pwrN.config.calibration.date).
13.1.12 PTDaemon Host System
The manufacturer and model number of the system connected to the power analyzer and running the power daemon. If PTDaemon is running on the controller system a reference to this system can be reported instead, e.g. "Controller". (ptd.pwrN.config.ptd.system)
13.1.13 PTDaemon Host OS
The name and the version of the operating system installed on the power daemon host system. If PTDaemon is running on the controller system a reference to this system can be reported instead, e.g. "same as Controller". (ptd.pwrN.config.ptd.os)
13.1.14 PTDaemon Version
The version of the power daemon program reading the analyzer data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the benchmark software.
13.1.15 Setup Description
Free format textual description of the device or devices measured by this power analyzer and the accompanying PTDaemon instance, e.g. "SUT Power Supplies 1 and 2". (ptd.pwrN.config.analyzer.setup_description).
13.2 Temperature Sensor
The following table includes information about the temperature sensor used to measure the ambient temperature of the test environment.
13.2.1 Hardware Vendor
Company which manufactures and/or sells the temperature sensor (ptd.tempN.config.sensor.vendor)
13.2.2 Model
The manufacturer and model name of the temperature sensor used for this benchmark run (ptd.tempN.config.sensor.model)
13.2.3 Driver Version
The version number of the operating system driver used to control and read the temperature sensor (ptd.tempN.config.sensor.driver)
13.2.4 Connectivity
Which interface was used to read the temperature data from the sensor, e.g. RS-232 (serial port), USB etc. (ptd.tempN.config.sensor.connectivity)
13.2.5 PTDaemon Host System
The manufacturer and model number of the system connected to the temperature sensor and running the temperature daemon (ptd.tempN.config.ptd.system)
13.2.6 PTDaemon Host OS
The name and the version of the operating system installed on the temperature daemon host system (ptd.tempN.config.ptd.os)
13.2.7 Setup Description
Free format textual description of the device or devices measured and the approximate location of this temperature sensor, e.g. "50 mm in front of SUT main airflow intake". (ptd.tempN.config.sensor.setup_description)
14. Notes
Additional important information required to reproduce the results that belongs to other reporting sections, i.e. is not related to the SUT, and requires a larger text area (config.notes).
15. Electrical and Environmental Data
The following section displays more details of the electrical and environmental data collected during the different target loads, including data not used to calculate the benchmark result. For further explanation of the measured values look in the "SPECpower Methodology" document (SPEC-Power_and_Performance_Methodology.pdf).
15.1 Target Load
Load levels as described in paragraph Target Load
15.2 Average Voltage (V) {<V1.10 only}
Average voltage for each of the target load levels measured in Volt (V).
15.3 Average Current (A) {<V1.10 only}
Average current for each of the target load levels measured in Ampere (A).
15.4 Average Power Factor {<V1.10 only}
Average power factor for each of the target load levels (PF).
15.5 Average Active Power (W)
Average active power for each target load level as described in paragraph Average Active Power (W)
15.6 Line Standard
Description of the line standards for the main AC power as provided by the local utility company and used to power the SUT. The standard voltage and frequency are printed in this field followed by the number of phases and wires used to connect the SUT to the AC power line (config.line.standard.voltage, config.line.standard.frequency, config.line.standard.phase, config.line.standard.wires).
15.7 Average Power Factor {<V1.10 only}
Power factor average over all target load levels.
15.8 Minimum Ambient Temperature (°C)
The minimum ambient temperature for each of the target load levels measured by the temperature sensor. All values are measured in ten second intervals, evaluated by the PTDaemon and reported to the collection system at the end of each target load level.
15.9 Minimum Temperature (°C)
Minimum temperature which was measured by the temperature sensor during all target load levels.
15.10 Elevation (m)
Elevation of the location where the test was run. This information is provided by the tester (config.test.elevation)
16. Aggregate Performance Data
This section describes the aggregated throughput for all JVM instances measured during all test phases including the calibration intervals in a table and as a graph.
16.1 Target Load
Load levels as described in paragraph Target Load plus the calibration phases at the beginning. The number of calibration phases can be configured by the tester (config.input.calibration.interval_count), minimum = 3, maximum = 10.
16.2 Actual Load
This column shows the actual target loads as described in paragraph Actual Load.
16.3 ssj_ops
The throughput scores, both target and actual values, for all test phases are printed in two columns.
16.3.1 Target
The target throughput for the measurement phases calculated from the calibrated maximum throughput ssj_ops@calibrated=.
16.3.2 Actual
The actual throughput measured during all test phases including calibration as described in paragraph ssj_ops.
16.3.3 ssj_ops@calibrated=
The calibrated throughput is calculated from the average throughput of the last two calibration phases. It is required to run at least three calibration phases and at most ten (config.input.calibration.interval_count).
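A minimal sketch of this calculation, assuming three calibration intervals and using made-up throughput values:

public class CalibrationExample {
    public static void main(String[] args) {
        // Made-up throughput of three calibration intervals (at least three are required).
        double[] calibrationSsjOps = {980_000, 1_000_000, 1_010_000};
        int n = calibrationSsjOps.length;
        // The calibrated throughput is the average of the last two calibration intervals.
        double calibrated = (calibrationSsjOps[n - 2] + calibrationSsjOps[n - 1]) / 2.0;
        System.out.printf("ssj_ops@calibrated = %,.0f%n", calibrated);
    }
}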
16.4 ssj_ops Chart
The result chart graphically displays the throughput results reported in the aggregate performance data table in one diagram. The blue line with the square data points represents the target values and the red line with the round data points represents the actually measured throughput values for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis, higher values are better. The thin horizontal line at the top shows the maximum throughput calculated from the calibration runs.
17. Power/Temperature Details Report
This is the second part of the SPECpower_ssj2008 full disclosure report.
17.1 Top bar
This section shows the measured SPECpower_ssj2008 result and gives some general information regarding this test run. For more details see section Top bar.
17.2 Benchmark Results Summary
This section presents the aggregated active power consumption (see Average Active Power (W)) and the minimum temperature (see Minimum Ambient Temperature (°C)) for all test phases (see Target Load).
The chart to the right graphically displays the power and temperature values from the summary table. The red line with the square data points represents the power consumption in W and the blue line with the round data points represents the minimum temperature values in °C for the different test phases as indicated on the x-axis. The power values are shown on the left y-axis, lower numbers are better. The temperature values are shown on the right y-axis.
The thin red horizontal line indicates the average power consumption for the target loads not including the calibration phases.
17.3 Measurement Devices
A description of the power analyzers and temperature sensors used for this test. For more details see section Measurement Devices.
17.4 Notes
A description of all configuration settings that have been changed from the default values. For more details see section Notes.
18. Power Details for Device 'N'
This section includes additional power information for all test phases, reported separately for each power analyzer.
18.1 Target Load
Load levels as described in paragraph Target Load
18.2 Average Voltage (V)
Average voltage in V for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'.
18.3 Voltage Range (V)
The voltage range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer. If a power analyzer does not support range reading the values are taken from the (ptd.pwr1.config.analyzer.voltage_range) property in the "ccs.props" file.
Please note that automatic voltage range setting by the analyzer is not allowed for all currently accepted analyzers and will invalidate the result.
18.4 Average Current (A)
Average current in A for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'
18.5 Current Range (A)
The current range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer. If a power analyzer does not support range reading the values are taken from the (ptd.pwr1.config.analyzer.current_range) property in the "ccs.props" file.
Please note that automatic current range setting by the analyzer is not allowed for all currently accepted analyzers and will invalidate the result.
18.6 Average Power Factor
Average power factor for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'
18.7 Average Active Power (W)
Active power averages for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'.
18.8 Power Measurement Uncertainty (%)
The average uncertainty of the reported power readings for each test phase as calculated by PTDaemon based on the range settings.
The value must be within the 1% limit defined in section "2.13.2 Power Analyzer Specifications" of the
SPECpower_ssj2008 Run and Reporting Rules document.
For some analyzers range reading may not be supported. The uncertainty calculation may still be possible based on manual or
command line range settings. More details are given in the measurement setup guide, see
SPEC-Power_Measurement_Setup_Guide.pdf.
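The exact calculation depends on the accuracy specification of the individual power analyzer. The following sketch only illustrates the common pattern of a reading-dependent term plus a range-dependent term; all accuracy figures and measured values are placeholders, not the specification of any real analyzer.

public class UncertaintyExample {
    public static void main(String[] args) {
        // Placeholder accuracy specification of the form "a% of reading + b% of range";
        // real values must be taken from the data sheet of the analyzer actually used.
        double readingAccuracy = 0.001;   // 0.1 % of reading (placeholder)
        double rangeAccuracy   = 0.001;   // 0.1 % of range (placeholder)
        double currentRangeA   = 5.0;     // configured current range in A (placeholder)
        double voltage         = 230.0;   // measured average voltage in V (placeholder)
        double powerReadingW   = 460.0;   // measured average active power in W (placeholder)

        // Power range implied by the configured current range at the measured voltage.
        double powerRangeW = currentRangeA * voltage;
        double absoluteUncertaintyW = readingAccuracy * powerReadingW + rangeAccuracy * powerRangeW;
        double uncertaintyPercent = 100.0 * absoluteUncertaintyW / powerReadingW;
        System.out.printf("Estimated uncertainty: %.2f %% (must stay below the 1 %% limit)%n",
                uncertaintyPercent);
    }
}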
19. Aggregate Performance Report
This is the third part of the SPECpower_ssj2008 full disclosure report. It repeats the configuration information and aggregate performance numbers from the first part and adds more detailed performance information if more than one JVM instance was started.
This report is not created for benchmark runs using only one JVM instance.
19.1 Top bar
This section shows the measured SPECpower_ssj2008 throughput results and gives some
general information regarding this test run.
For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file, the headline does not show the overall metric but the aggregated performance at 100% target load for the whole SUT and the average throughput at 100% target load per Host and per JVM.
19.2 Benchmark Results Summary
This section describes the aggregated throughput for all JVM instances measured during all test phases including the calibration intervals in a table and as a graph. For more details see section Aggregate Performance Data.
19.3 Aggregate SUT Data
This section repeats the information from the corresponding section of the main report file.
For more details see section Aggregate SUT Data.
19.4 System Under Test
This section repeats the information from the corresponding section of the main report file.
For more details see section System Under Test.
19.5 Shared Hardware
This section repeats the information from the corresponding section of the main report file.
For more details see section Shared Hardware.
19.6 Set: 'N'
This section repeats the information from the corresponding section of the main report file.
For more details see section Set: 'N'.
19.7 System Under Test Notes
This section repeats the information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
19.8 Notes
This section repeats the information from the corresponding section of the main report file.
For more details see section Notes.
19.9 Set Instance Summary
This section gives an overview of the accumulated throughput for the different sets.
19.9.1 Set
This column of the table names the different sets, the aggregated throughput for all sets at 100% target load and the average throughput per Host and per JVM at 100% target load.
19.9.2 ssj_ops@100%
The aggregated throughput for all JVM instances of a set at the 100% target load level.
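As a hypothetical example of how these numbers relate: a set consisting of 2 hosts with 4 JVM instances each, where every JVM instance delivers roughly 50,000 ssj_ops at the 100% target load, would report about 400,000 ssj_ops@100% for the set, an average of about 200,000 ssj_ops per Host and about 50,000 ssj_ops per JVM.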
19.9.3 ssj_ops Set Chart
The result chart graphically displays the throughput results reported in the set instance summary table in one diagram. The colored lines represent the actually measured throughput values of each set for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis, higher values are better. The thin horizontal line shows the average throughput per set.
19.10 Set 'N' Scores
This section describes the aggregated throughput for all JVM instances belonging to this set measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
20. Set 'N' Performance Report
This report represents the next level of the SPECpower_ssj2008 full disclosure report.
It describes the configuration and performance details for a specific set of nodes.
There may be several of these reports, one for each set.
This report is not created for benchmark runs using only one homogeneous set of nodes.
20.1 Top bar
This section shows the measured SPECpower_ssj2008 throughput results for this set of nodes and gives some
general information regarding this test run.
For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file, the headline does not show the overall metric but the aggregated performance at 100% target load for the whole set and the average throughput at 100% target load per Host and per JVM.
20.2 Benchmark Results Summary
This section describes the aggregated throughput for all JVM instances of this set measured during all test phases including the calibration intervals in a table and as a graph.
For more details see section Aggregate Performance Data.
20.3 Aggregate SUT Data
This section repeats set specific information from the corresponding section of the main report file.
For more details see section Aggregate SUT Data.
20.4 System Under Test
This section repeats set specific information from the corresponding section of the main report file.
For more details see section System Under Test.
20.5 Shared Hardware
This section repeats the information from the corresponding section of the main report file.
For more details see section Shared Hardware.
20.6 Set: 'N'
This section repeats set specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
20.7 System Under Test Notes
This section repeats set specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
20.8 Notes
This section repeats set specific information from the corresponding section of the main report file.
For more details see section Notes.
20.9 Host Instance Summary
This section gives an overview of the accumulated throughput for the different hosts belonging to this set.
For more details regarding the layout and the field names see section Set Instance Summary.
20.9.1 Host
This column of the table names the different hosts, the aggregated throughput for all hosts at 100% target load and the average throughput per Host and per JVM at 100% target load.
20.9.2 ssj_ops@100%
The aggregated throughput for all JVM instances of a host at the 100% target load level.
20.9.3 ssj_ops Host Chart
The result chart graphically displays the throughput results reported in the host instance summary table in one diagram. The colored lines represent the actually measured throughput values of each host for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis, higher values are better. The thin horizontal line shows the average throughput per host.
20.10 Host 'N' Scores
This section describes the aggregated throughput for all JVM instances running on host 'N' measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
21. Host 'N' Performance Report
This report represents the next level of the SPECpower_ssj2008 full disclosure report.
It describes the configuration and performance details for a specific node or host.
There may be several of these reports, one for each host.
21.1 Top bar
This section shows the measured SPECpower_ssj2008 throughput results for this node and gives some
general information regarding this test run.
For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file, the headline does not show the overall metric but the aggregated performance at 100% target load for this node and the average throughput at 100% target load per JVM.
21.2 Benchmark Results Summary
This section describes the aggregated throughput for all JVM instances of this host measured during all test phases including the calibration intervals in a table and as a graph.
For more details see section Aggregate Performance Data.
21.3 System Under Test
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test.
21.4 Set: 'N'
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
21.5 System Under Test Notes
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
21.6 Notes
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Notes.
21.7 JVM Instance Summary
This section gives an overview of the accumulated throughput for the different JVMs running on this host.
For more details regarding the layout and the field names see section Set Instance Summary.
21.7.1 JVM Instance
This column of the table names the different JVM instances, the aggregated throughput for all JVM instances at 100% target load and the average throughput per JVM at 100% target load.
21.7.2 ssj_ops@100%
The throughput for each JVM instance at the 100% target load level.
21.7.3 ssj_ops JVM Instance Chart
The result chart graphically displays the throughput results reported in the JVM instance summary table in one diagram. The colored lines represent the actually measured throughput values of each JVM instance for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis, higher values are better. The thin horizontal line shows the average throughput per JVM.
21.8 JVM 'N' Scores
This section describes the throughput of one JVM instance measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
22. JVM Instance 'N' Performance Report
This is the lowest level part of the SPECpower_ssj2008 full disclosure report. It repeats the configuration information and performance numbers from the previous level and adds more detailed throughput information for the different transaction types. There may be several of these reports, one for each JVM instance.
22.1 Top bar
This section shows the measured SPECpower_ssj2008 throughput results for this JVM instance and gives some
general information regarding this test run.
For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file, the headline does not show the overall metric but the performance at 100% target load for this JVM.
22.2 Benchmark Results Summary
This section describes the aggregated throughput for this specific JVM instance measured during all test phases including the calibration intervals in a table and as a graph. For more details see section Aggregate Performance Data.
22.3 System Under Test
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test.
22.4 Set: 'N'
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
22.5 System Under Test Notes
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
22.6 Notes
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Notes.
22.7 Performance Details
This table gives more details about the transactions executed during the different test phases by this JVM instance.
22.7.1 Target Load
For a description see Target Load.
22.7.2 Actual Load
For a description see Actual Load.
22.7.3 Transaction Type
This column of the table names the different transaction types which are executed by the workload.
22.7.4 Count
This column of the table displays the number of successfully completed transactions during the different test phases, separately for each transaction type.
22.7.5 Total Heap (MB)
The total amount of heap memory used by this JVM instance during the different test phases.
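To relate this value to standard JVM terminology, the following minimal Java sketch shows how a process can sample its own heap usage via java.lang.Runtime; it is given only as an illustration of the concept and is not the instrumentation code used by the benchmark:

    // Illustrative sketch only, not SPECpower_ssj2008 benchmark code:
    // sample the heap memory of the running JVM and report it in MB.
    public final class HeapSample {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long totalMb = rt.totalMemory() / (1024L * 1024L); // heap currently committed to the JVM
            long freeMb  = rt.freeMemory()  / (1024L * 1024L); // unused part of the committed heap
            long usedMb  = totalMb - freeMb;                   // heap actually in use
            System.out.println("Total heap (MB): " + totalMb + ", used heap (MB): " + usedMb);
        }
    }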
Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright 2007-2012 Standard Performance Evaluation Corporation (SPEC)
All Rights Reserved