Intel, Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster
February 20, 2024 | Intel Corporation | Estimated reading time: 2 minutes
A collaboration among Intel, Dell Technologies, Nvidia and the Ohio Supercomputer Center (OSC) has introduced Cardinal, a cutting-edge high-performance computing (HPC) cluster. The system is purpose-built to meet the increasing demand for HPC resources in Ohio across research, education and industry innovation, particularly in artificial intelligence (AI).
AI and machine learning are integral tools in scientific, engineering and biomedical fields for solving complex research inquiries. As these technologies continue to demonstrate efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential.
Cardinal is equipped with hardware capable of meeting the demands of expanding AI workloads. In both capability and capacity, the new cluster will be a substantial upgrade over the system it replaces, the Owens Cluster, launched in 2016.
The Cardinal Cluster is a heterogeneous system featuring Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high bandwidth memory (HBM) as the foundation to efficiently manage memory-bound HPC and AI workloads while fostering programmability, portability and ecosystem adoption. The system will have:
- 756 Max Series CPU 9470 processors, which will provide 39,312 total CPU cores.
- 128 gigabytes (GB) HBM2e and 512 GB of DDR5 memory per node.
With a single software stack and traditional programming models on the x86 base, the cluster will more than double OSC’s capabilities while addressing broadening use cases and allowing for easy adoption and deployment.
The system is also equipped with:
- Thirty-two nodes that will have 104 cores, 1 terabyte (TB) of memory and four Nvidia Hopper architecture-based H100 Tensor Core GPUs with 94 GB HBM2e memory interconnected by four NVLink connections.
- Nvidia Quantum-2 InfiniBand, which provides 400 gigabits per second (Gbps) of networking performance with low latency to deliver 500 petaflops of peak AI performance (FP8 Tensor Core, with sparsity) for large AI-driven scientific applications.
- Sixteen nodes that will have 104 cores, 128 GB HBM2e and 2 TB DDR5 memory for large symmetric multiprocessing (SMP) style jobs.
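As a quick sanity check, the published figures are internally consistent, and a few derived numbers (cores per processor, total GPU count) follow directly from them. The snippet below is just arithmetic on the counts stated above; the per-processor core count is derived, not quoted from the article.

```python
# Sanity-check the published Cardinal cluster figures.
# All inputs are taken from the article; derived values are computed.

cpu_count = 756            # Xeon CPU Max 9470 processors in the system
total_cores = 39_312       # total CPU cores stated in the article
cores_per_cpu = total_cores // cpu_count
print(f"Cores per Max 9470 processor: {cores_per_cpu}")   # 52

gpu_nodes = 32             # H100-equipped nodes
gpus_per_node = 4          # Nvidia H100 GPUs per node
total_gpus = gpu_nodes * gpus_per_node
print(f"Total H100 GPUs: {total_gpus}")                   # 128

hbm_per_gpu_gb = 94        # GB HBM2e per H100, as stated
total_gpu_hbm_gb = total_gpus * hbm_per_gpu_gb
print(f"Aggregate GPU HBM: {total_gpu_hbm_gb} GB")        # 12032
```

The 52-core figure matches the Xeon CPU Max 9470's published core count, which is a useful cross-check that the 756-processor and 39,312-core numbers describe the same partition of the system.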
“The Intel Xeon CPU Max Series is an optimal choice for developing and implementing HPC and AI workloads, leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Data Center AI Solutions product line at Intel. “The inherent heterogeneity of this system will empower OSC’s engineers, researchers and scientists, enabling them to fully exploit the doubled memory bandwidth performance it offers. We take pride in supporting OSC and our ecosystem with solutions that significantly expedite the analysis of existing and future data for their targeted focus areas.”