SPEC virt_sc® 2013 V1.1 is a software benchmark product developed by the Standard Performance Evaluation Corporation (SPEC), a non-profit group of computer vendors, system integrators, universities, research organizations, and application software vendors. The benchmark is intended to be run by hardware vendors, virtualization software vendors, application software vendors, datacenter managers, and academic researchers.
The release of SPEC virt_sc V1.1 enhances security protocol support for the webserver workload, addresses minor defects in the reporter and syntax checker, and updates the Power and Temperature Daemon (PTD) to the latest version.
SPEC virt_sc V1.1 is an update to SPEC virt_sc 2013 V1.0 and shares its benchmark architecture, workload implementation, harness, and run requirements. For application stacks that require Transport Layer Security (TLS), it supports TLSv1, TLSv1.1, and TLSv1.2 and adds newer ciphers. Existing support of SSLv3 remains available and is the default.
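As a side illustration only (not part of the benchmark kit), a minimal Python sketch such as the following can confirm which TLS version a webserver VM actually negotiates; the host name used here is a hypothetical placeholder:

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a server and report the negotiated TLS protocol version
    (e.g. "TLSv1.2").  Certificate checks are disabled because lab VMs
    commonly use self-signed certificates; do not do this in production."""
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Hypothetical host name for illustration only:
# print(negotiated_tls_version("webserver-vm.example"))
```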
The benchmark presents an overall workload that achieves the maximum performance of the platform when running a set of four application workloads against one or more sets of Virtual Machines called "tiles". Scaling the workload on the SUT (System Under Test) consists of running an increasing number of tiles. Peak performance is the point at which the addition of another tile either fails the Quality of Service criteria or fails to improve the overall metric.
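The scaling procedure can be pictured with a short Python sketch; run_tiles() below is a hypothetical stand-in for executing the harness at a given tile count and is not part of the benchmark kit:

```python
def find_peak(run_tiles):
    """Add tiles until a new tile either fails QoS or fails to improve the
    overall metric; the previous configuration is then the peak.

    run_tiles(n) is assumed to return (score, qos_ok) for a run with n tiles.
    """
    peak_tiles, peak_score = 0, 0.0
    n = 1
    while True:
        score, qos_ok = run_tiles(n)
        if not qos_ok or score <= peak_score:
            return peak_tiles, peak_score
        peak_tiles, peak_score = n, score
        n += 1
```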
The benchmarker has the option of running with power monitoring enabled and can submit results to the performance with SUT power category, the performance with Server-only power category, or both.
The suite consists of several SPEC workloads that represent applications that industry surveys report to be common targets of virtualization and server consolidation. We modified each of these standard workloads to match the CPU, memory, disk I/O, and network utilization requirements of an enterprise server consolidation scenario. The SPEC workloads used are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006.
Initial SPEC virt_sc results are available on SPEC's web site. Subsequent results are posted on an ongoing basis following each two-week review cycle: results submitted by the two-week deadline are reviewed by SPEC virtualization subcommittee members for conformance to the run rules, and if accepted at the end of that period are then publicly released. Results disclosures are at: http://www.spec.org/virt_sc2013/results.
SPEC virt_sc is a standardized benchmark, which means that it is an abstraction of the real world. For example, all the database servers can use the same database archive to restore their copy of the database. This helps reduce the complexity of setting up the test.
SPEC virt_sc results are not intended for use in sizing or capacity planning.
In SPEC virt_sc, a tile is a single unit of work comprising four application workloads that are driven across five distinct virtual machines plus a separate Database Server VM. The load on the SUT is scaled up by configuring additional sets of the VMs as described below and increasing the tile count for the benchmark. For a SPEC virt_sc tile, the workloads and their VMs are:
The Application Server VM for each tile requires an enterprise-class Database Server VM backend. Each Database Server VM is shared by up to four appserver VMs. For every four consecutive tiles, a separate Database Server VM is required. Only the last Database Server VM may be shared by fewer than four tiles if the number of tiles is not a multiple of four.
When the SUT does not have sufficient system resources to support the full load of an additional tile, the benchmark offers the use of a fractional load tile. A fractional tile consists of an entire tile with all VMs but running at a reduced percentage of its full load.
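The tile arithmetic above can be summarized in a short sketch (a simple illustration only, not part of the benchmark kit):

```python
import math

def database_server_vms(tiles: int) -> int:
    """One Database Server VM is required for every four consecutive tiles."""
    return math.ceil(tiles / 4)

# Example: 6 tiles need 2 Database Server VMs
# (one shared by tiles 1-4, one shared by tiles 5-6).
assert database_server_vms(6) == 2

# A fractional tile runs all of its VMs at a reduced percentage of full load,
# e.g. 5 full tiles plus one additional tile at 40% load.
full_tiles, fractional_load = 5, 0.40
total_tile_load = full_tiles + fractional_load  # 5.4 tile-equivalents
```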
SPEC virt_sc is available via web download from the SPEC site at $3000 for new licensees and $1500 for academic and eligible non-profit organizations. The order form is at: http://www.spec.org/order.html.
The benchmark includes the code necessary to run the driver system(s), the server-side file set generation tools, and dynamic content implementations. It is at the tester's discretion to choose the application stack.
The SPEC virt_sc kit contains:
See the Run Rules and the User's Guide for more detailed information and requirements.
You can find more information on how to set up and run the benchmark in the User's Guide and the Client Harness User's Guide. You can also register at the SPECvirt Forum, where you can post questions and review solutions to common problems. If you cannot resolve your issue using these methods, please send email to bring it to the attention of the SPEC Virtualization subcommittee.
At the Forum you can also find an ExampleVM environment for the SPEC virt_sc V1.1 benchmark. This ExampleVM environment includes documentation and scripts to help configure the six virtual machines needed in a SPEC virt_sc tile as well as a client virtual machine. Even if the configuration is not exactly the same as the one you are trying to set up, having a working example for comparison is a valuable aid in setting up your own environment. See this topic for more information.
Only SPEC virt_sc licensees can submit results. SPEC member companies submit results free of charge, and non-members may submit results for an additional fee. All results are subject to a two-week review by SPEC virtualization subcommittee members. First-time submitters should contact SPEC's administrative office.
SPEC virt_sc submissions must include both the raw output file and configuration information required by the benchmark. During the review process, other information may be requested by the subcommittee. You can find submission requirements in the run rules.
The current version of the run rules can be found at: http://www.spec.org/virt_sc2013/docs/SPECvirt_RunRules.html.
The SPEC virt_sc Design Document contains design information on the benchmark and workloads. The Run and Reporting Rules, the User's Guide, and the Client Harness User's Guide contain instructions for installing and running the benchmark. See http://www.spec.org/osg/virtualization for the available information on SPEC virt_sc.
SPEC developed a test harness driver to coordinate running the component workloads in one or more tiles on the SUT. The harness allows you to run and monitor the benchmark, collects measurement data as the test runs, post-processes the data at the end of the run, validates the results, and generates the test report.
The benchmark supports three categories of results, each with its own primary metric. Results may be compared only within a given category; however, the benchmarker has the option of submitting results from a given test to one or more categories. The first category is Performance-Only and its metric is SPEC virt_sc. The two remaining categories add power measurement: performance/power for the SUT, whose metric is SPEC virt_sc_PPW, and performance/power for the Server only, whose metric is SPEC virt_sc_ServerPPW.
No. Currently the benchmark is designed for a single host system.
Yes, you can use open source products when running the benchmark as long as you comply with open source requirements specified in the Run Rules.
The SPEC Virtualization subcommittee reviews all results but does not require that they be independently audited.
No. SPEC must review and accept the result before it can be announced publicly.
Yes, the client driver machines must be configured properly to accommodate the workloads. You may use one or more physical systems for client load drivers, the clients may be virtualized, and a tile may be driven by multiple clients. Note that client resource requirements for SPEC virt_sc are higher than for SPEC virt_sc 2010. See the User's Guide for more information regarding hardware and software requirements for the clients.
SPEC virt_sc implements the SPECpower methodology for power measurement. The benchmarker has the option of running with power monitoring enabled and can submit results to any of three categories:
* performance only (SPEC virt_sc)
* performance/power for the SUT (SPEC virt_sc_PPW)
* performance/power for the Server-only (SPEC virt_sc_ServerPPW)
You can find more information on power measurement in the Client Harness User's Guide and Run Rules.
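As a simplified illustration of the two performance/power categories (the authoritative definitions and rounding rules are in the Run Rules), each power metric can be thought of as the overall performance score divided by the average power measured during the run, using SUT power or server-only power respectively:

```python
def performance_per_watt(performance_score: float, average_watts: float) -> float:
    """Simplified ratio: overall performance score divided by average measured
    power.  The SUT category uses power for the whole SUT; the Server-only
    category uses power measured at the server alone."""
    return performance_score / average_watts

# Hypothetical numbers for illustration only:
# a score of 1600 with an 800 W average would give a ratio of 2.0.
print(performance_per_watt(1600.0, 800.0))
```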
No, they are not. Several substantive changes have been made that make the SPEC virt_sc workloads unique.
No. Results between the different SPEC virt_sc categories cannot be compared.
No. SPEC virt_sc is unique and not comparable to other benchmarks.
A compliant benchmark result meets all the requirements of the SPEC virt_sc run rules for a valid result. In addition to the run and reporting rules, several validation and tolerance checks are built into the benchmark. If you intend to publicly use the SPEC virt_sc metrics, the result must be compliant and accepted by SPEC.
Yes, for non-compliant runs only. You may set the load level for each or all workloads to be heavier or lighter as your needs dictate. You can set these load levels by changing parameters in the Control.config file and possibly each workload's configuration file.
The run time is approximately three hours with default settings.
SPEC virt_sc has been implemented as a standardized end-to-end benchmark designed to stress all layers of a system handling a workload representative of server consolidation. Performance-critical components include the server hardware (processors, memory, network, storage, etc.), the virtualization technology (hardware virtualization, operating system virtualization, and hardware partitioning), the guest (VM) operating systems, and the guest application software stacks. Selection and tuning of any of these components can have significant effects on the overall performance of the SUT.
The best way to differentiate the performance characteristics of different versions or products for a specific element of a system is to hold all other elements constant and change only the component you are interested in. For example, if you want to see the effects of RAID 5 versus RAID 10, keep the other elements of the server, virtualization products, and guest VMs the same, install copies of the VMs on the RAID 5 storage and on the RAID 10 storage while keeping other storage elements such as the number of LUNs the same, and run your tests. Similarly, if you want to compare versions of hypervisors, you need to keep the rest of the platform constant. Changing other elements, such as the software running on the VMs, can significantly impact the overall results.
SPEC virt_sc supports hardware virtualization, operating system virtualization, and hardware partitioning. The benchmark does not address multiple host performance or application virtualization.
The documentation assumes that you are familiar with virtualization concepts and implementations. You need experience with the installation, configuration, management, and tuning of your selected hypervisor platform. You must know how to use your virtualization platform to create, administer, and modify virtual machines and allocate system resources, including: