The development of the SPECaccel 2023 benchmark suite is a collaborative effort by representatives from industry vendors, high-performance computing centers, and academic institutions to provide a baseline for comparing accelerator performance. While vendor-supplied languages may offer optimal performance on a particular accelerator, the use of directives showcases performance portability across many accelerators.
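To illustrate what directive-based acceleration means in practice, the sketch below annotates a simple vector-add loop with an OpenACC directive. This is a minimal, hypothetical example and is not taken from the SPECaccel 2023 sources; a compiler targeting a GPU can offload the annotated loop, while a host-only compiler can simply ignore the pragma and run the same code on the CPU.

```c
#include <stddef.h>

/* Minimal sketch of directive-based offload (OpenACC).
 * The same loop maps naturally to an OpenMP target directive as well,
 * e.g. #pragma omp target teams distribute parallel for map(...).
 */
void vector_add(const double *a, const double *b, double *c, size_t n)
{
    /* copyin/copyout describe data movement to and from the accelerator;
     * a compiler without accelerator support ignores the pragma entirely. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}
```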
By combining forces, SPEC and EEMBC will significantly further both organizations’ missions to provide global, independent, high-quality benchmarks, and will create a single source for benchmarks covering everything from the smallest microcontrollers to the largest supercomputers.
John, an IT manager at a rapidly growing financial services company, needed to specify the hardware required for the company’s new private cloud deployment. He was pleased to find the SPECvirt Datacenter 2021 benchmark, which offered several benefits compared to the 2013 version, and to any of the other tools he had in his arsenal.
The benchmark has undergone a significant transformation, with three major upgrades since it was first released for Creo 3, and we are particularly gratified to have worked directly with PTC on this update. In addition to new test cases that exercise features added to Creo over the last few releases, we have significantly enhanced the benchmark’s interface to make it far more user-friendly.
Emma is a tech enthusiast who develops product designs and plays games on the same workstation, which includes a CPU and GPU that were mid-range when she bought the system in 2018. Ready for an upgrade, she narrowed her choice down to five possibilities based on price and the brands she favored, but she obviously couldn’t afford to purchase all five and test them. She was at a bit of a loss until she noted that in several of the reviews, GPU performance comparisons were based on the SPECviewperf benchmark.
SPEC is pleased to congratulate Professor Lizy Kurian John, IEEE Micro Editor-in-Chief and Truchard Foundation Chair in the Department of Electrical and Computer Engineering at the University of Texas at Austin, on receiving the Joe J. King Professional Engineering Achievement Award. Professor John is well known within SPEC for her contributions to SPEC CPU and, in turn, to new processor design.
This year’s event marked an exciting return to an in-person conference, and nearly 150 attendees enjoyed three keynote speeches, 28 research presentations, seven data challenge presentations, a range of workshops, and more. The presentations covered a broad range of topics, including AI, fair data-sharing practices in research, and performance engineering practices at companies such as ABB, MongoDB, and Redis.
SPEC believes that the most effective computing benchmarks are based on how various user communities run actual applications. To that end, SPEC’s Search Programs encourage users outside of SPEC to contribute applications, workloads, or models that help us build more comprehensive and more applicable benchmarks, which in turn better serve their communities. The new Benchmark Search Program is for the SPEC Graphics and Workstation Performance Group (GWPG).
The SPECapc for Maya 2023 benchmark consists of 47 tests using eleven different models and animations. It includes eight different graphics tests in various modes and five different CPU tests. The graphics-oriented tests use six different Maya view settings: shaded, shaded SSAO, wireframe on shaded, wireframe on shaded SSAO, textured, and textured SSAO. Various tests measure both animation and 3D model rotation performance. The five CPU tests within the benchmark perform CPU ray tracing and evaluation caching in various modes.
Over the last few years, the cloud market has grown in the depth and breadth of its offerings. From its simple beginnings, when on-premises workloads and applications could be run on instances rented in the cloud, the market has moved to designing cloud-native applications that run on disaggregated hardware.
SPEC believes that the most effective computing benchmarks are developed based on how various user communities run actual applications. To that end, SPEC regularly conducts Search Programs that encourage those outside of SPEC to contribute applications, workloads, or models that help us build more comprehensive and more applicable benchmarks that will better serve their communities.





