SPECstorage(TM) Solution 2020_ai_image Result

Pure Storage                            :  Pure Storage FlashBlade//EXA
SPECstorage Solution 2020_ai_image      =  6300 AI_Jobs (Overall Response Time = 0.97 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency       AI_Jobs       AI_Jobs
   (AI_Jobs)      (msec)       Ops/Sec        MB/Sec
------------  ------------  ------------  ------------
        630           0.3        274086         61610
       1260           0.4        548172        123227
       1890           0.7        822258        184827
       2520           0.7       1096344        246449
       3150           0.8       1370429        308051
       3780           1.0       1644515        369690
       4410           1.2       1918602        431277
       5040           1.4       2192687        492872
       5670           1.6       2466770        554501
       6300           1.8       2740854        616129

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|                  Pure Storage FlashBlade//EXA                  |
+---------------------------------------------------------------+
Tested by               Pure Storage
Hardware Available      June 2025
Software Available      June 2025
Date Tested             January 2026
License Number          9072
Licensee Locations      2555 Augustine Drive, Santa Clara, CA 9505

Pure Storage’s FlashBlade//EXA is an ultra-scale, disaggregated data storage
platform built to deliver extreme throughput, low-latency metadata
performance, and seamless scalability for large-scale AI and high-performance
computing workloads. It delivers this via RDMA-enabled pNFS, and is validated
using SPECstorage Solution 2020_ai_image.

Solution Under Test Bill of Materials
=====================================

Item No 1
    Qty:         1
    Type:        Metadata Node
    Vendor:      Pure Storage
    Model/Name:  FlashBlade//EXA
    Description: The Metadata Node system was a 3-chassis multi-chassis
    configuration with 2 eXternal Fabric Modules (XFMs). Each XFM had
    4 x 400Gbps uplink ports. Each chassis was connected to each XFM with
    4 x 100Gb uplink ports. Each chassis was equipped with 10 x S500R1
    FlashBlade//EXA blades. Each blade was equipped with 2 × 37.5TB N58R
    DirectFlash Modules (DFMs).
    { Metadata Node system details:
      [ 2 x eXternal Fabric Modules (XFMs) - Model: XFM-8400 -
        Part Number: 86-0001-04 ]
      [ 3 x FlashBlade Chassis - Model: CH-FB-II - Part Number: 83-0383-12 ]
      [ 10 x FlashBlade S500R1 blades per chassis - Model: FB-S500 -
        Part Number: 83-0433-08 ]
      [ 2 x DirectFlash Modules (DFMs) per blade - Raw Capacity: 37.50 TB
        (34.11 TiB) - Part Number: 83-0489-06 ]
      [ Pure Storage does not publish publicly accessible specifications for
        the components of the FlashBlade Metadata Node system. Detailed
        specifications are available only through customer or partner support
        documentation. A Technical Deep Dive on FlashBlade//S can be found at:
        https://www.purestorage.com/video/technical-deep-dive-on-flashblade/6307195175112.html ] }

Item No 2
    Qty:         30
    Type:        Data Node
    Vendor:      Supermicro
    Model/Name:  Supermicro Data Nodes
    Description: Supermicro ASG-1115S-NE316R servers
    {CPU = [single-socket AMD EPYC 9355P processor with 32 physical cores
    (64 logical CPUs via SMT) on a 64-bit x86 architecture]}
    {MEMORY = (192 GB of DDR5 ECC memory).}
    {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port
    400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with
    secure firmware and InfiniBand/VPI functionality disabled.)}
    {SSDs = [ Each data node has 16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise
    NVMe SSDs installed. ] [ Each data node had 61.45 TB of raw capacity and
    28.88 TB of usable capacity. ]}
    {Operating System = [The FlashBlade//EXA Data Node Operating System
    (Purity//DN) was loaded onto each data node. Security scanning for
    Purity//DN (the Data Node OS for FlashBlade//EXA) is performed as part of
    the release process. Purity//DN does not provide mechanisms for
    non-administrative users to run third-party code, and thus is not affected
    by common OS vulnerabilities.]}
Item No 3
    Qty:         60
    Type:        Host Initiator - Bare-Metal Host Initiators
    Vendor:      Supermicro
    Model/Name:  Ubuntu 24.04
    Description: Supermicro ASG-1115S-NE316R servers
    {CPU = [single-socket AMD EPYC 9355P processor with 32 physical cores
    (64 logical CPUs via SMT) on a 64-bit x86 architecture]}
    {MEMORY = (192 GB of DDR5 ECC memory).}
    {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port
    400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with
    secure firmware and InfiniBand/VPI functionality disabled.)}
    {SSD = [1 x Micron 7450-series MTFDKBA480TFR 480 GB enterprise NVMe SSD
    (NVMe 1.4, PCIe-attached) with full SMART support and 0% media wear, used
    as a local system disk.]}
    {Operating System = [Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+]}

Item No 4
    Qty:         20
    Type:        Host Initiator - Bare-Metal Host Initiators
    Vendor:      Supermicro
    Model/Name:  Ubuntu 24.04
    Description: Supermicro SYS-621C-TN12R servers
    {CPU = [ dual-socket Intel Xeon Silver 4516Y+ platform with 48 physical
    cores (96 logical CPUs via SMT) on a 64-bit x86 architecture ]}
    {MEMORY = (1024 GB of DDR5 ECC memory - reduced to 198752M via GRUB -
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=198752M" )}
    {DATA NETWORK ADAPTERS = (2 x NVIDIA ConnectX-7 EN (MT2910) single-port
    400 GbE QSFP112 Ethernet adapters installed, operating over PCIe Gen5 with
    secure firmware and InfiniBand/VPI functionality disabled.)}
    {SSD = [1 x Micron 5400-series MTFDDAK240TGA 240 GB enterprise SATA SSD
    (2.5-inch, SATA 6 Gb/s) with full SMART support and 100% remaining
    endurance, used as the system disk.]}
    {Operating System = [Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+]}

Item No 5
    Qty:         8
    Type:        Data Network Switch
    Vendor:      NVIDIA
    Model/Name:  NVIDIA SN5600 data network switches
    Description: 2 x NVIDIA SN5600 spine data network switches
    6 x NVIDIA SN5600 leaf data network switches
    ( https://docs.nvidia.com/networking/display/sn5000/specifications#src-2705811927_Specifications-SN5600Specifications )

Configuration Diagrams
======================

1) storage2020-20260130-00145.config1.png (see SPECstorage Solution 2020 results webpage)
2) storage2020-20260130-00145.config2.png (see SPECstorage Solution 2020 results webpage)
3) storage2020-20260130-00145.config3.png (see SPECstorage Solution 2020 results webpage)
4) storage2020-20260130-00145.config4.png (see SPECstorage Solution 2020 results webpage)
5) storage2020-20260130-00145.config5.png (see SPECstorage Solution 2020 results webpage)
6) storage2020-20260130-00145.config6.png (see SPECstorage Solution 2020 results webpage)
7) storage2020-20260130-00145.config7.png (see SPECstorage Solution 2020 results webpage)
8) storage2020-20260130-00145.config8.png (see SPECstorage Solution 2020 results webpage)
9) storage2020-20260130-00145.config9.png (see SPECstorage Solution 2020 results webpage)
10) storage2020-20260130-00145.config10.png (see SPECstorage Solution 2020 results webpage)
11) storage2020-20260130-00145.config11.png (see SPECstorage Solution 2020 results webpage)
12) storage2020-20260130-00145.config12.png (see SPECstorage Solution 2020 results webpage)
13) storage2020-20260130-00145.config13.png (see SPECstorage Solution 2020 results webpage)
14) storage2020-20260130-00145.config14.png (see SPECstorage Solution 2020 results webpage)
Component Software
==================

Item No 1
    Name:           Operating System (Initiators)
    Component Type: Host OS
    Version:        Ubuntu 24.04.3 LTS, Kernel Linux 6.14.6clearflag-v1+
    Description:    Ubuntu 24.04.3 LTS installed on all 80 bare-metal
    initiator hosts. Kernel Linux 6.14.6clearflag-v1+ (Backported “NFSv4/pNFS:
    Clear NFS_INO_LAYOUTCOMMIT in pnfs_mark_layout_stateid_invalid”).

Item No 2
    Name:           FlashBlade Purity//FB Operating System
    Component Type: Metadata Node System
    Version:        Purity//FB 4.6.4 (GA build)
    Description:    Purity//FB is the proprietary FlashBlade operating
    environment responsible for managing DFMs (DirectFlash Modules), handling
    distributed metadata, RDMA/NFS protocol processing, and ensuring data
    integrity. The 4.6.4 release was used without patches to represent
    production-ready software. File striping was not enabled for this test.

Item No 3
    Name:           The FlashBlade//EXA Data Node Operating System (Purity//DN)
    Component Type: Data Node Operating System
    Version:        Purity//DN 1.0 (GA build)
    Description:    Purity//DN is the dedicated operating system and services
    stack for Data Nodes in FlashBlade//EXA systems, designed to deliver high
    performance, high availability, and advanced management for scale-out
    storage environments. Purity//DN runs on the Data Nodes (DN) of
    FlashBlade//EXA, providing the core OS and services required for data
    storage, management, and high-throughput operations. It is distinct from
    Purity//FB (the FlashBlade controller OS) and follows a separate release
    cycle, though releases are generally aligned with major Purity//FB feature
    releases for compatibility.

Item No 4
    Name:           Networking Stack
    Component Type: Network Stack
    Version:        pNFS with RDMA-enabled 400Gbps NICs
    Description:    This includes all firmware and kernel-level drivers
    supporting RoCE (RDMA over Converged Ethernet) for ultra-low latency. Each
    host initiator and each data node has 2 x NVIDIA (Mellanox) ConnectX-7 EN
    (MT2910) single-port 400 GbE QSFP112 Ethernet adapters installed,
    operating over PCIe Gen5 with secure firmware and InfiniBand/VPI
    functionality disabled. BGP (Border Gateway Protocol) networking was used.
    Native OS-provided NFS client, drivers, and tools were used.

Item No 5
    Name:           pNFS Configuration
    Component Type: File System Client
    Version:        NFSv4.1 mount with nconnect=16, RDMA
    Description:    The Namespace and filesystem metadata are served by a
    Metadata Node system that clients mount via NFSv4.1 over TCP. While the
    metadata service provides data layout information, host initiators
    communicate directly with the data nodes using pNFS semantics and RDMA for
    high-performance data access.
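The client-side pNFS configuration described in Item No 5 can be checked with
standard Linux NFS tooling. The commands below are an illustrative sketch and
were not part of the benchmark run; the mount point matches the example mount
shown later in this report, and the exact counter output will vary by kernel.

====
# Confirm the pNFS flexfiles layout driver is available on an initiator.
sudo modprobe nfs_layout_flexfiles
lsmod | grep nfs_layout_flexfiles

# Show active NFS mounts and their negotiated options
# (expect vers=4.1, proto=tcp, nconnect=16).
nfsstat -m

# Inspect per-mount transport statistics; with nconnect=16 in effect, the
# metadata mount is expected to report multiple "xprt:" transport entries.
grep -c 'xprt:' /proc/self/mountstats
====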
Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                              NFS Client                              |
+----------------------------------------------------------------------+
Parameter Name   Value   Description
---------------  ------  ------------------------------------------------
nconnect         16      Enables multiple transport connections per mount

Hardware Configuration and Tuning Notes
---------------------------------------
None

Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|          SPECstorage Solution 2020_ai_image Workload Engine          |
+----------------------------------------------------------------------+
Parameter Name     Value   Description
-----------------  ------  ----------------------------------------------
NodeManager Count  80      Ensures balanced load generation

Software Configuration and Tuning Notes
---------------------------------------
Ensures the storage reaches steady state before the measurement phase, as
required by the SPECstorage Solution 2020 run rules.
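For reference, the load points in the Performance table map directly onto the
benchmark's configuration file. The excerpt below is an illustrative sketch in
the SPECstorage Solution 2020 sfs_rc style, not the submitted configuration:
EXEC_PATH, USER, and the client mount-point list are placeholders, while the
workload name, load points, and warmup duration are taken from this report.

====
# Illustrative sfs_rc excerpt for the AI_IMAGE workload at the reported load
# points (630 .. 6300 AI_Jobs in 10 steps). Paths and user are placeholders.
BENCHMARK=AI_IMAGE
LOAD=630
INCR_LOAD=630
NUM_RUNS=10
WARMUP_TIME=900
EXEC_PATH=/opt/specstorage2020/binaries/linux/x86_64/netmist
USER=root
CLIENT_MOUNTPOINTS=initiator001:/mnt/specsfs2020 initiator002:/mnt/specsfs2020
====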
Service SLA Notes
-----------------
Purity//FB 4.6.4 and Purity//DN were installed and operated under internal
support governance with direct engineering oversight. No patches or hotfixes
were applied during the benchmark run. The Ethernet switching fabric used in
the testbed environment was configured by the internal networking team.
Run-time software (SPECstorage Solution 2020_ai_image, netmist, and
nodeManagers) was deployed uniformly across all 80 host initiators. All
components in the testbed were physically hosted and internally maintained; no
cloud infrastructure or external SLA dependencies were involved. The testbed
environment matched expected production-grade deployment topologies for
FlashBlade customers.

Storage and Filesystems
=======================

Item No 1
    Qty:             30 Metadata Node Blades across 3 Chassis
    Description:     Pure Storage FlashBlade//EXA blades, designed for
    high-performance file and object workloads. Each blade is independently
    addressable with embedded compute and connected via XFM-to-FIOM
    architecture. Provides highly parallelized metadata access, consistent
    latency and fault isolation between blades.
    Data Protection: Distributed Erasure Coding
    Stable Storage:  FlashBlade S-class DirectFlash Modules (DFMs), with
    2 × 37.5TB per blade

Item No 2
    Qty:             30 Data Nodes
    Description:     RAID 10-style layout:
    (Inner layer: consists of RAID 1 mirrors of NVMe drive pairs)
    (Outer layer: RAID 0 striped across 8 mirrored devices)
    (Filesystem: XFS on top of the striped mirror set)
    The system uses an mdadm RAID0-over-RAID1 (striped mirrors) configuration
    with an XFS filesystem on top; this results in an inherent 50% capacity
    efficiency due to mirroring, making approximately 866 TB of usable
    capacity from 1843.5 TB of raw capacity mathematically consistent once
    RAID metadata, XFS filesystem structures, and alignment overhead are
    included.
    Data Protection: RAID10-style layout implemented as RAID0-over-RAID1
    Stable Storage:  16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise NVMe SSDs are
    used to back the XFS filesystem.

Item No 3
    Qty:             80 Initiator Hosts
    Description:     80 Ubuntu 24.04 bare-metal initiator hosts with
    RDMA-enabled 400GbE NICs. Each host connects to the FlashBlade Metadata
    Node system via a single-port NFSv4.1 TCP mount using `nconnect=16`. Hosts
    generate benchmark load and issue sustained concurrent file operations for
    the duration of the test window.
    Data Protection: Host-based checkpointing and client retry
    Stable Storage:  Persistent boot NVMe, no local data retention

Number of Filesystems    1 filesystem distributed over 30 data nodes and
                         shared to all 80 host initiators
Total Capacity           1843.5 TB Raw, 866.4 TB Usable
Filesystem Type          pNFS (RDMA enabled)

Filesystem Creation Notes
-------------------------
The distributed filesystem was created via the FlashBlade//EXA GUI. It was
configured with an NFSv4.1 export.

Storage and Filesystem Notes
----------------------------
When a filesystem is created on FlashBlade//EXA, the system orchestrates a
series of coordinated actions across the metadata node system (MDN) and data
nodes (DNs), with a strong focus on scalability, performance, and data
placement control.

1. Node Group Selection and Association
   Node Groups: Before a filesystem can be created, at least one data node
   group must exist. A node group is a logical collection of data nodes that
   will serve as the backing storage for the filesystem. This design allows
   administrators to control which DNs are used for specific filesystems,
   limiting the blast radius of failures and optimizing performance for
   different workloads.
   Association: When creating a filesystem, you must specify the node group it
   will use. All data for that filesystem will be placed only on the DNs in
   the selected group. This ensures that filesystems do not compete for IO
   resources across all DNs and that access can be maintained for unaffected
   files if a DN goes offline.

2. Filesystem Creation Workflow
   Metadata Node (MDN) Actions: The MDN receives the filesystem creation
   request (typically via the management GUI or CLI). It records the
   association between the new filesystem and the chosen node group. The MDN
   persists all relevant metadata, including the filesystem’s unique ID, node
   group membership, and configuration details, in its distributed metadata
   store (DFMs).
   Data Node (DN) Preparation: The MDN communicates with each DN in the node
   group to prepare them for the new filesystem. On each DN, an XFS file
   system is created atop a software RAID (MD-RAID) array, using the local
   SSDs (see the sketch following these notes). The XFS export is assigned a
   UUID, which is tracked by the MDN.
   Export and Mounting: The DNs export the new XFS filesystem over NFS
   (typically NFSv3 over RDMA for data, NFSv4.1 over TCP for metadata). The
   MDN keeps a record of each export’s UUID and IP address, ensuring that if a
   DN is replaced or its network changes, the system can recognize and
   re-associate the export.

3. Data Placement and File Mapping
   Placement Algorithm: When files are created within the new filesystem, the
   MDN uses a data placement algorithm to select which DN in the node group
   will store each file. The selection is based on available capacity,
   ensuring balanced utilization. At GA, each file is mapped to a single DN;
   striping across DNs is not supported.
   Metadata and Data Coordination: The MDN manages all metadata (directory
   structure, file attributes, etc.), while the DNs handle the actual file
   data. The MDN provides clients with the necessary information (DN IP, file
   handle, etc.) to access data directly on the appropriate DN.

4. Protocols and Control
   Protocols: The system uses NFSv4.1 (pNFS) for client-to-MDN communication
   and NFSv3 (FlexFile layout, often over RDMA) for client-to-DN data
   transfer. The MDN also uses gRPC and NFSv3 to coordinate with DNs for
   export management and health monitoring.

5. Manageability and Limits
   Node Group Constraints: You cannot create a filesystem with an empty node
   group, nor can you remove a DN from a node group if it is still in use by a
   live filesystem.
   No Snapshots or Quotas: At GA, features like filesystem snapshots and
   quotas are not supported on FlashBlade//EXA.
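The per-data-node layout described above (an XFS filesystem created atop an
MD-RAID array of striped mirrors) can be illustrated with generic Linux
tooling. The sketch below is hypothetical: device names, array numbering, and
mkfs options are placeholders, and on FlashBlade//EXA this provisioning is
performed automatically by Purity//DN rather than by hand.

====
# Hypothetical sketch of a RAID0-over-RAID1 (striped mirrors) layout with XFS,
# as described for the data nodes. Device names and options are placeholders.

# 1. Build eight RAID1 mirrors from the sixteen local NVMe drives (pairs).
for i in $(seq 0 7); do
    a=/dev/nvme$((2*i))n1
    b=/dev/nvme$((2*i+1))n1
    sudo mdadm --create /dev/md$((i+1)) --level=1 --raid-devices=2 "$a" "$b"
done

# 2. Stripe (RAID0) across the eight mirrors.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/md{1..8}

# 3. Create the XFS filesystem on the striped mirror set and mount it.
sudo mkfs.xfs /dev/md0
sudo mount /dev/md0 /mnt/dn-data

# 4. Inspect the resulting layout.
cat /proc/mdstat
sudo mdadm --detail /dev/md0
====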
====
Example of mount command from host initiator:

# sudo mount -t nfs -o vers=4,nconnect=16 192.168.2.101:/specsfs2020 /mnt/specsfs2020

# mount | grep nfs
192.168.2.101:/specsfs2020 on /mnt/specsfs2020 type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,nconnect=16,timeo=600,retrans=2,sec=sys,clientaddr=192.168.10.117,local_lock=none,addr=192.168.2.101)
====

Transport Configuration - Physical
==================================

Item No 1
    Transport Type:        400Gb Ethernet (RDMA-enabled)
    Number of Ports Used:  2 per each of the 80 host initiators (160 total),
                           2 per each of the 30 data nodes (60 total)
    Notes:                 Each initiator was connected via two 400Gbps
                           RDMA-capable NICs to multiple central switches.

Transport Configuration Notes
-----------------------------
RDMA over Converged Ethernet (RoCEv2) was used across the entire fabric. All
initiator hosts were connected via two 400Gbps RDMA NICs to central Ethernet
switches. FlashBlade XFM modules provided 4 uplinks per chassis. All routing
was managed with eBGP within the data plane. The management plane uses
standard layer 3 routing.
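The RoCE-capable adapters described above can be verified on any host or data
node with standard RDMA and Ethernet tooling. The commands below are
illustrative only; the RDMA device and interface names (mlx5_0, enp1s0np0) are
examples and will differ per system.

====
# List PCIe network adapters and confirm the ConnectX-7 devices are present.
lspci | grep -i 'connectx\|mellanox'

# Show RDMA devices, their associated Ethernet netdevs, and link state.
rdma link show

# Verify the RDMA device reports an Ethernet link layer (RoCE, not InfiniBand).
ibv_devinfo -d mlx5_0 | grep -E 'link_layer|state|active_mtu'

# Confirm the negotiated link speed on the corresponding 400 GbE interface.
ethtool enp1s0np0 | grep -E 'Speed|Link detected'
====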
Switches - Physical
===================

Item No 1
    Switch Name:       2 x NVIDIA SN5600 spine data network switches
    Switch Type:       800Gb Ethernet Switch (RDMA-capable)
    Total Port Count:  128
    Used Port Count:   113
    Notes:             These switches support full-speed RoCEv2 transport, and
                       were configured with BGP (Border Gateway Protocol).

Item No 2
    Switch Name:       6 x NVIDIA SN5600 leaf data network switches
    Switch Type:       800Gb Ethernet Switch (RDMA-capable)
    Total Port Count:  384
    Used Port Count:   262
    Notes:             These switches support full-speed RoCEv2 transport, and
                       were configured with BGP (Border Gateway Protocol).

Processing Elements - Physical
==============================

Item No 1
    Qty:                 20
    Type:                dual-socket Intel Xeon Silver 4516Y+ platform with 48
                         physical cores (96 logical CPUs via SMT) on a 64-bit
                         x86 architecture
    Location:            Bare-Metal Supermicro SYS-621C-TN12R Host Initiators
    Description:         Each host initiator is equipped with 1024 GB of DDR5
                         ECC memory reduced to 198752M via GRUB
                         (GRUB_CMDLINE_LINUX_DEFAULT='quiet splash
                         mem=198752M'), and 2 x 400 Gbps NVIDIA (Mellanox)
                         ConnectX-7 EN 400GbE adapters providing RDMA over
                         Ethernet (RoCE).
    Processing Function: Load Generation and Benchmark Execution

Item No 2
    Qty:                 60
    Type:                single-socket AMD EPYC 9355P processor with 32
                         physical cores (64 logical CPUs via SMT) on a 64-bit
                         x86 architecture
    Location:            Bare-Metal Supermicro ASG-1115S-NE316R Host Initiators
    Description:         Each host initiator is equipped with 192 GB of DDR5
                         ECC memory and 2 x 400 Gbps NVIDIA (Mellanox)
                         ConnectX-7 EN 400GbE adapters providing RDMA over
                         Ethernet (RoCE).
    Processing Function: Load Generation and Benchmark Execution

Item No 3
    Qty:                 30
    Type:                single-socket AMD EPYC 9355P processor with 32
                         physical cores (64 logical CPUs via SMT) on a 64-bit
                         x86 architecture
    Location:            Bare-Metal Supermicro ASG-1115S-NE316R Data Nodes
    Description:         Each data node is equipped with 192 GB of DDR5 ECC
                         memory, 2 x 400 Gbps NVIDIA (Mellanox) ConnectX-7 EN
                         400GbE adapters providing RDMA over Ethernet (RoCE),
                         and 16 x KIOXIA KCM7DRJE3T84 3.84 TB enterprise NVMe
                         SSDs.
    Processing Function: Data Storage

Item No 4
    Qty:                 2
    Type:                Pure Storage FlashBlade XFM-8400 eXternal Fabric
                         Module
    Location:            FB//EXA MetaData Node System
    Description:         The Pure Storage XFM-8400 is the external fabric
                         interconnect module used in multi-chassis
                         FlashBlade//S systems. It provides the network fabric
                         connectivity that links multiple FlashBlade//S
                         chassis together and connects them to host networks.
                         In multi-chassis configurations, a pair of these
                         XFM-8400 modules interconnects all chassis and
                         servers, supporting high-speed optics (e.g.,
                         10/25/40/100 Gbps QSFP and higher-speed options) to
                         deliver scalable bandwidth for unified fast file and
                         object workloads.
    Processing Function: FB//EXA MetaData Node System eXternal Fabric Module

Item No 5
    Qty:                 6
    Type:                Pure Storage FlashBlade FIOM-1000 Chassis Fabric IO
                         Module
    Location:            FB//EXA MetaData Node System
    Description:         Pure Storage FlashBlade FIOM-1000 refers to the
                         Fabric I/O Module used inside FlashBlade//S chassis.
                         It is a hot-swappable midplane network and I/O module
                         that provides the internal fabric connectivity
                         between the blades and the rest of the system’s
                         networking infrastructure. Each FlashBlade//S chassis
                         typically contains two FIOM-1000 modules for
                         redundant fabric connectivity and they host
                         integrated Ethernet switching and external ports used
                         for connecting the storage blades to client networks
                         and the system fabric. These modules include multiple
                         high-speed ports (e.g., QSFP28 for 100 GbE in
                         existing hardware) and have internal management
                         interfaces (such as management, USB, and console
                         ports) to support chassis networking functions.
    Processing Function: FB//EXA MetaData Node System Chassis FIOM

Item No 6
    Qty:                 30
    Type:                Pure Storage FlashBlade FB-S500R1 blade
    Location:            FB//EXA MetaData Node System
    Description:         Pure Storage FlashBlade FB-S500 is a blade-level
                         component used in the FlashBlade//S scale-out unified
                         fast file and object storage platform. It is one of
                         the compute-and-storage blades that populate a
                         FlashBlade//S 5U chassis, delivering high performance
                         for demanding unstructured workloads such as
                         analytics, AI, machine learning, and large-scale
                         file/object stores. A chassis can hold up to 10
                         blades, each of which connects to Pure’s DirectFlash®
                         Modules (DFMs) and the system’s internal fabric to
                         provide throughput, capacity, and low-latency access
                         across the cluster. The S500 blade emphasizes extreme
                         performance, and works with multiple DFMs per blade
                         to scale I/O and capacity. FlashBlade//S systems
                         support modular expansion of blades and DFMs to meet
                         evolving performance and capacity needs.
    Processing Function: FB//EXA MetaData Node System blades

Processing Element Notes
------------------------
No virtualization layers were used; all elements operated on bare-metal
hardware and were statically assigned for test reproducibility.

Memory - Physical
=================

Description:         Each of the Supermicro ASG-1115S-NE316R host initiators
                     was equipped with 192 GB (197629740 kB) of DDR5 system
                     memory.
Size in GiB:         188.47444
Number of Instances: 60
Nonvolatile:         V
Total GiB:           11308

Description:         Each of the Supermicro SYS-621C-TN12R host initiators was
                     equipped with 1024 GB of DDR5 system memory - reduced to
                     198752M via GRUB
                     (GRUB_CMDLINE_LINUX_DEFAULT='quiet splash mem=198752M').
Size in GiB:         194.09375
Number of Instances: 20
Nonvolatile:         V
Total GiB:           3881

Description:         Each of the Supermicro ASG-1115S-NE316R data nodes was
                     equipped with 192 GB (197629740 kB) of DDR5 system
                     memory.
Size in GiB:         188.47444
Number of Instances: 30
Nonvolatile:         V
Total GiB:           5654

Grand Total Memory Gibibytes     20844

Memory Notes
------------
All 80 host initiators had usable memory configurations similar to the
following:

# free -h
               total        used        free      shared  buff/cache   available
Mem:           188Gi        10Gi       168Gi       6.5Mi        10Gi       177Gi
Swap:          8.0Gi          0B       8.0Gi
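The 1024 GB SYS-621C-TN12R initiators were capped to 198752M of usable memory
via the GRUB kernel command line shown in the memory table above. A minimal
sketch of how such a cap is applied on Ubuntu follows; the commands are
standard GRUB tooling and the verification steps are illustrative only.

====
# Illustrative steps for capping usable memory via GRUB on Ubuntu, matching
# the mem=198752M setting reported for the SYS-621C-TN12R initiators.

# 1. Set the mem= limit on the default kernel command line.
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=198752M"/' /etc/default/grub

# 2. Regenerate the GRUB configuration and reboot for the change to apply.
sudo update-grub
sudo reboot

# 3. After reboot, confirm the kernel sees the reduced memory.
grep MemTotal /proc/meminfo
free -h
====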
Stable Storage
==============

The system uses an mdadm RAID10-style (RAID0-over-RAID1) configuration of
mirrored NVMe drive pairs striped across eight devices with an XFS filesystem
on top, providing data protection through mirroring and yielding ~50% usable
capacity (≈866 TB usable from 1843.5 TB raw) after accounting for RAID and
filesystem overhead.
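As a quick sanity check of the capacity figures above, the raw and mirrored
capacities can be reproduced from the drive counts reported earlier. The short
calculation below is illustrative only; it ignores RAID metadata, XFS
structures, and alignment overhead, which account for the difference between
the mirrored figure and the reported 866.4 TB usable.

====
# Illustrative arithmetic: 30 data nodes x 16 drives x 3.84 TB per drive,
# with RAID1 mirroring halving the usable capacity before overhead.
awk 'BEGIN {
    nodes = 30; drives = 16; tb = 3.84;
    raw = nodes * drives * tb;      # ~1843 TB raw
    mirrored = raw / 2;             # ~921 TB before RAID/XFS overhead
    printf "raw = %.1f TB, mirrored = %.1f TB\n", raw, mirrored;
}'
====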
Solution Under Test Configuration Notes
=======================================

Details of data network switches:

NVIDIA SN5600 spine data network switch #1: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 61 of 64 front-panel OSFP ports are in use and configured for
800 GbE operation. Of the 64 front-panel OSFP ports, 57 ports are
administratively up and operationally up at 800 GbE. 4 ports are
administratively down and operationally down. The switch retains 3 front-panel
OSFP ports of unused capacity.

NVIDIA SN5600 spine data network switch #2: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 61 of 64 front-panel OSFP ports are in use and configured for
800 GbE operation. Of the 64 front-panel OSFP ports, 56 ports are
administratively up and operationally up at 800 GbE. 1 port is
administratively up at 800 GbE but operationally down. 4 ports are
administratively down and operationally down. The switch retains 3 front-panel
OSFP ports of unused capacity.

NVIDIA SN5600 leaf data network switch #1: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured
in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE
line rate. The remaining 28 front-panel OSFP ports are unused and available
for expansion.

NVIDIA SN5600 leaf data network switch #2: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 38 of 64 front-panel OSFP ports are in use. 22 ports are configured
in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE
line rate. The remaining 26 front-panel OSFP ports are unused and available
for expansion.

NVIDIA SN5600 leaf data network switch #3: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured
in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE
line rate. The remaining 28 front-panel OSFP ports are unused and available
for expansion.

NVIDIA SN5600 leaf data network switch #4: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 36 of 64 front-panel OSFP ports are in use. 20 ports are configured
in a 2 × 400 GbE breakout configuration. 16 ports operate at native 800 GbE
line rate. One 400 GbE breakout lane is administratively up but operationally
down. The remaining 28 front-panel OSFP ports are unused and available for
expansion.

NVIDIA SN5600 leaf data network switch #5: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 58 of 64 front-panel OSFP ports are in use. In-use ports consist of
40 × 400 GbE breakout lanes (20 × 2 × 400 GbE) and 16 × 800 GbE ports
operating at native speed. One additional 800 GbE port is administratively up
but operationally down. 6 front-panel OSFP ports are unused and available for
expansion.

NVIDIA SN5600 leaf data network switch #6: NVIDIA SN5600 Ethernet switch
(64 × 800G OSFP ports), based on the NVIDIA Spectrum-4 ASIC. At the time of
the test, 58 of 64 front-panel OSFP ports are in use. In-use ports consist of
40 × 400 GbE breakout lanes (20 × 2 × 400 GbE) and 28 × 800 GbE ports
operating at native speed. All in-use ports are administratively up and
operationally up. 6 front-panel OSFP ports are unused and available for
expansion.

Other Solution Notes
====================

None

Dataflow
========

The Namespace and filesystem metadata are served by a Metadata Node system
that clients mount via NFSv4.1 over TCP. While the metadata service provides
data layout information, host initiators communicate directly with the data
nodes using pNFS semantics and RDMA for high-performance data access.

Other Notes
===========

The benchmark was executed using the SPECstorage Solution 2020_ai_image
workload profile (version 2564) with default warmup (900s) and measurement
(300s) durations. All tests were performed using GA-only firmware (Purity//FB
4.6.4) with no patches. The AI_IMAGE workload successfully scaled to 6300 jobs
under valid SPECstorage Solution 2020_ai_image conditions.

For more information on FlashBlade//EXA, please look at the following URLs:

Pure.AI ( https://www.pure.ai/ )
Meet FlashBlade//EXA. More AI. Less Waiting.
( https://www.youtube.com/watch?v=Df4I-YgEpaY )
Tackling Myths Around AI Data and FlashBlade//EXA
( https://www.youtube.com/watch?v=rBPHCuS6yKQ )
Inside Pure Storage’s FlashBlade//EXA: Scaling AI Without Bottlenecks - Six
Five In The Booth ( https://www.youtube.com/watch?v=YDkt43n7E3A )
This is an SSD?! - PureStorage FlashBlade Tour
( https://www.youtube.com/watch?v=L4AKeW0Y-F0 )
Technical Deep Dive on FlashBlade//S
( https://www.purestorage.com/video/technical-deep-dive-on-flashblade/6307195175112.html )

Other Report Notes
==================

None

===============================================================================

Generated on Wed Feb 18 10:54:46 2026 by SpecReport
Copyright (C) 2016-2026 Standard Performance Evaluation Corporation