
Hon Hai Technology Group (Foxconn) showcases AI solutions for servers, EV driving at NVIDIA GTC

Group’s first participation features groundbreaking autonomous driving research

Hon Hai Technology Group (“Foxconn”), the world’s largest electronics manufacturing and service provider, today announced its participation at NVIDIA GTC, showcasing the Group’s robust expertise in AI solutions that span exclusive, state-of-the-art server systems, advanced driver assistance systems for intelligent electric vehicles and groundbreaking deep learning research for autonomous driving called “QCNet.”

Foxconn Chairman and CEO Young Liu attended the NVIDIA GTC 2024 keynote and personally witnessed the release of new AI server products. He thanked NVIDIA CEO Jensen Huang for his support at Hon Hai Tech Day in October last year. The presence of Foxconn’s team of around two dozen executives at the AI conference in San Jose, California, this year is a first for the Group, underscoring its gold-level sponsorship of the event and the importance it places on AI in its close collaboration with NVIDIA.

As generative AI continues to sweep across industries, addressing the diverse computing demands of customers with the right AI data center infrastructure solutions has become the Group’s top priority. Foxconn subsidiary Ingrasys Technology Inc is exhibiting a comprehensive, future-proof AI total solution powered by NVIDIA technology, empowering customers to build their ideal AI data centers.

Ingrasys offers a broad portfolio of 1U/2U/4U NVIDIA MGX-based servers for a variety of use cases with a time-to-market advantage — providing a modular architecture to integrate current and future NVIDIA GPUs, CPUs, DPUs, and NVIDIA AI Enterprise software.

An ideal choice for AI training workloads, Ingrasys’ newest AI accelerator, the GB6181, is designed to accommodate eight NVIDIA H100 Tensor Core GPUs or support next-generation GPUs, serving as the building block for high-performance AI data centers.

Visitors will also have a chance to explore the ES2100, a 2U NVMe-oF storage system featuring the NVIDIA Spectrum-2 Ethernet switch to provide higher throughput for extreme networking performance. Its modular and innovative midplaneless design allows users to easily upgrade the system by replacing two switch modules.

Now, to fuel a new wave of generative AI applications, Ingrasys is leveraging the NVIDIA GB200 NVL72, a next-generation AI liquid-cooled rack solution. It combines 36 NVIDIA GB200 Grace Blackwell Superchips, which include 72 NVIDIA Blackwell-based GPUs and 36 NVIDIA Grace CPUs interconnected by fifth-generation NVIDIA NVLink to act as a single massive GPU.

“Our collaboration with NVIDIA helps us deliver the latest accelerated computing technologies to our customers so they can build AI-powered data centers to suit a wide range of applications,” said Benjamin Ting, President of Ingrasys. “Using the NVIDIA MGX platform, we’re able to adopt modular designs to quickly and cost-effectively build different server configurations and reduce time to market.”

“Foxconn offers a diverse product lineup powered by NVIDIA accelerated computing that addresses the unique requirements of different industries using the NVIDIA MGX modular reference server architecture,” said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. “Through its support of NVIDIA Blackwell architecture-based processors, networking and software, Ingrasys will help fuel generative AI and accelerated computing with its next-generation systems.”

Complementing Ingrasys’ rack solution is an advanced liquid-cooling offering, including liquid-to-air sidecar and liquid-to-liquid CDU (coolant distribution unit) solutions, with a robust cooling capacity of up to 1,300 kW. These solutions are both powerful and energy-efficient, well suited to cooling next-generation AI infrastructure.

In addition to its data center infrastructure solutions, the Group is exhibiting, in the field of autonomous driving, the Smart Drive intelligent driving controller based on the NVIDIA Orin X processor, demonstrating Foxconn’s diverse ADAS hardware solutions.

The ADAS controllers are offered in three tiers according to product positioning: Basic, Advanced and Premium. The Premium tier is equipped with two Orin NX processors to support even the most demanding compute requirements, and its water-cooled thermal design provides a stable, efficient operating environment that ensures consistent product performance.

In one of its first public presentations, Dr. Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute, shared “QCNet: Query-Centric Trajectory Prediction for Autonomous Driving,” a neural architecture that aims to improve driving safety by overcoming key challenges in predicting where other vehicles and pedestrians will travel.

QCNet, a deep learning model being developed by Foxconn and City University of Hong Kong, exhibits unprecedented performance on large-scale trajectory prediction datasets and has held first place on the Argoverse 2 Motion Forecasting Competition leaderboard for nine months. Given the fiercely competitive landscape of autonomous driving technology, this is a remarkable achievement that demonstrates the effectiveness of the Group’s designs and its technological leadership in AI.
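For readers curious what query-centric trajectory prediction looks like in practice, the following is a minimal, hypothetical PyTorch sketch of the general idea: learnable mode queries attend to encoded agent histories and are decoded into multi-modal future trajectories with associated probabilities. All module names, dimensions, and design choices here are illustrative assumptions; this is not the actual QCNet implementation from Foxconn or City University of Hong Kong.

```python
# Minimal, hypothetical sketch of query-centric multi-modal trajectory
# prediction. Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class QueryCentricPredictor(nn.Module):
    def __init__(self, hist_dim=4, hidden=128, num_modes=6, horizon=60):
        super().__init__()
        self.num_modes = num_modes
        self.horizon = horizon
        # Encode each agent's observed history (e.g., x, y, vx, vy per step).
        self.history_encoder = nn.GRU(hist_dim, hidden, batch_first=True)
        # Learnable mode queries: each asks "where might this agent go next?"
        self.mode_queries = nn.Parameter(torch.randn(num_modes, hidden))
        # Queries attend to the encoded agent state to gather context.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4,
                                                batch_first=True)
        # Decode each query into a future trajectory and a mode score.
        self.traj_head = nn.Linear(hidden, horizon * 2)   # (x, y) per step
        self.score_head = nn.Linear(hidden, 1)

    def forward(self, history):
        # history: (num_agents, obs_steps, hist_dim) observed agent states.
        _, h = self.history_encoder(history)              # (1, A, hidden)
        scene = h.transpose(0, 1)                         # (A, 1, hidden)
        queries = self.mode_queries.unsqueeze(0).expand(
            scene.size(0), -1, -1)                        # (A, M, hidden)
        # Each agent's mode queries attend to that agent's encoding; a full
        # model would also attend over surrounding agents and map context.
        ctx, _ = self.cross_attn(queries, scene, scene)   # (A, M, hidden)
        trajs = self.traj_head(ctx).view(
            scene.size(0), self.num_modes, self.horizon, 2)
        scores = self.score_head(ctx).squeeze(-1).softmax(dim=-1)
        return trajs, scores                              # per-agent modes


if __name__ == "__main__":
    model = QueryCentricPredictor()
    obs = torch.randn(3, 50, 4)           # 3 agents, 50 observed steps
    futures, probs = model(obs)
    print(futures.shape, probs.shape)     # (3, 6, 60, 2) and (3, 6)
```

The sketch only captures the query-centric decoding pattern; real systems add scene/map encoders, agent-to-agent attention, and training losses over the predicted modes.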

SOURCE: Foxconn
