SPEC Blog
News and views about SPEC, our products, and the people in our community.
The benchmark has undergone a significant transformation, with three major upgrades since it was first released for Creo 3, and we are particularly gratified to have worked directly with PTC on this update. In addition to new test cases that exercise features added to Creo over the last few releases, we have significantly enhanced the benchmark's interface to make it far more user-friendly.
Emma is a tech enthusiast who develops product designs and plays games on the same workstation, which includes a CPU and GPU that were mid-range when she bought the system in 2018. She narrowed her choice down to five possibilities based on price and the brands she favored, but she obviously couldn't afford to purchase all five and test them. She was at a bit of a loss until she noticed that several of the reviews based their GPU performance comparisons on the SPECviewperf benchmark.
We are pleased to congratulate Professor Lizy Kurian John, IEEE Micro Editor-in-Chief and Truchard Foundation Chair at the Department of Electrical and Computer Engineering, University of Texas at Austin, on receiving the Joe J. King Professional Engineering Achievement Award. Professor John is well known within SPEC for her contributions to SPEC CPU and, in turn, for their influence on new processor design.
This year’s event marked an exciting return to an in-person conference, and nearly 150 attendees enjoyed three keynote speeches, 28 research presentations, seven data challenge presentations, a range of workshops and more. The presentations offered a broad range of topics, including AI, fair data sharing practices in research, and performance engineering practices at companies such as ABB, MongoDB and Redis.
SPEC believes that the most effective computing benchmarks are based on how various user communities run actual applications. To that end, SPEC conducts Search Programs that encourage users outside of SPEC to contribute applications, workloads, or models, enabling us to build more comprehensive and more applicable benchmarks that in turn better serve those communities. The newest Benchmark Search Program is for the SPEC Graphics and Workstation Performance Group (GWPG).
The SPECapc for Maya 2023 benchmark consists of 47 tests using 11 models and animations. It includes eight graphics tests run in various modes and five CPU tests. The graphics-oriented tests use six Maya view settings: Shaded, Shaded SSAO, wireframe on shaded, wireframe on shaded SSAO, textured, and textured SSAO. Various tests measure both animation and 3D model rotation performance. The five CPU tests perform CPU ray tracing and evaluation caching in various modes.
Over the last few years, the cloud market has grown in its depth and breadth of offerings. From its simple beginnings, when on-premises workloads and applications could be run on instances rented on the cloud, the market has moved to designing cloud-native applications that run on disaggregated hardware.
SPEC believes that the most effective computing benchmarks are developed based on how various user communities run actual applications. To enable us to do this, SPEC regularly conducts Search Programs that encourage those outside of SPEC to contribute applications, workloads, or models that will enable us to build more comprehensive and more applicable benchmarks that will better serve their communities.
As sustainability has become an increasingly important global issue, the SPECpower benchmark has played a critical role in enabling and encouraging vendors to improve the energy efficiency of their products. Over the last few years, the growing focus on sustainability has also led to an important new direction for SPEC.