Intel Gaudi, Xeon and AI PC Accelerate Meta Llama 3 GenAI Workloads
April 22, 2024 | Intel Corporation | Estimated reading time: 2 minutes
Meta launched Meta Llama 3, its next-generation large language model (LLM). Effective on launch day, Intel has validated its AI product portfolio for the first Llama 3 8B and 70B models across Intel® Gaudi® accelerators, Intel® Xeon® processors, Intel® Core™ Ultra processors and Intel® Arc™ graphics.
“Intel actively collaborates with the leaders in the AI software ecosystem to deliver solutions that blend performance with simplicity. Meta Llama 3 represents the next big iteration in large language models for AI. As a major supplier of AI hardware and software, Intel is proud to work with Meta to take advantage of models such as Llama 3 that will enable the ecosystem to develop products for cutting-edge AI applications,” said Wei Li, Intel vice president and general manager of AI Software Engineering.
As part of its mission to bring AI everywhere, Intel invests in the software and AI ecosystem to ensure that its products are ready for the latest innovations in the dynamic AI space. In the data center, Intel Gaudi and Intel Xeon processors with Intel® Advanced Matrix Extension (Intel® AMX) acceleration give customers options to meet dynamic and wide-ranging requirements.
Intel Core Ultra processors and Intel Arc graphics products provide both a local development vehicle and deployment across millions of devices with support for comprehensive software frameworks and tools, including PyTorch and Intel® Extension for PyTorch® used for local research and development and OpenVINO™ toolkit for model development and inference.
About Llama 3 Running on Intel:
Intel’s initial testing and performance results for Llama 3 8B and 70B models use open source software, including PyTorch, DeepSpeed, Intel Optimum Habana library and Intel Extension for PyTorch to provide the latest software optimizations. For more performance details, visit the Intel Developer Blog.
Intel® Gaudi® 2 accelerators have optimized performance on Llama 2 models – 7B, 13B and 70B parameters – and now have initial performance measurements for the new Llama 3 model. With the maturity of the Intel Gaudi software, Intel easily ran the new Llama 3 model and generated results for inference and fine-tuning. Llama 3 is also supported on the recently announced Intel® Gaudi® 3 accelerator.
Intel Xeon processors address demanding end-to-end AI workloads, and Intel invests in optimizing LLM results to reduce latency. Intel® Xeon® 6 processors with Performance-cores (code-named Granite Rapids) show a 2x improvement on Llama 3 8B inference latency compared with 4th Gen Intel® Xeon® processors and the ability to run larger language models, like Llama 3 70B, under 100ms per generated token.
Intel Core Ultra processors and Intel Arc graphics deliver impressive performance for Llama 3. In an initial round of testing, Intel Core Ultra processors already generate tokens faster than typical human reading speed. Further, the Intel® Arc™ A770 GPU has Xe Matrix eXtensions (XMX) AI acceleration and 16GB of dedicated memory to provide exceptional performance for LLM workloads.
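As a rough sanity check, the two throughput claims above reduce to simple arithmetic: 100 ms per generated token corresponds to 10 tokens per second, comfortably above typical silent reading speed. The reading-speed figure (~240 words per minute) and the tokens-per-word ratio (~1.3) used below are common rules of thumb, not numbers from Intel's testing.

```python
# Rough throughput arithmetic for the latency claims above.
# Assumed values (NOT from the article): reading speed and token/word ratio.

MS_PER_TOKEN = 100          # "under 100ms per generated token" (Llama 3 70B claim)
WORDS_PER_MINUTE = 240      # assumed typical silent reading speed
TOKENS_PER_WORD = 1.3       # assumed rule-of-thumb tokenizer ratio

gen_tokens_per_sec = 1000 / MS_PER_TOKEN                          # 10.0 tokens/s
reading_tokens_per_sec = WORDS_PER_MINUTE / 60 * TOKENS_PER_WORD  # ~5.2 tokens/s

print(f"generation: {gen_tokens_per_sec:.1f} tok/s, "
      f"reading: {reading_tokens_per_sec:.1f} tok/s")
```

Under these assumptions, generation outpaces reading by roughly a factor of two, which is consistent with the article's "faster than typical human reading speeds" claim.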
What’s Next: In the coming months, Meta expects to introduce new capabilities, additional model sizes and enhanced performance. Intel will continue to optimize performance for its AI products to support this new LLM.