NVIDIA Hopper Architecture

The engine for the world’s AI infrastructure makes an order-of-magnitude performance leap.

The Accelerated Computing Platform for Next-Generation Workloads

Learn about the next massive leap in accelerated computing with the NVIDIA Hopper™ architecture. Hopper securely scales diverse workloads in every data center, from small enterprise to exascale high-performance computing (HPC) and trillion-parameter AI—so brilliant innovators can fulfill their life's work at the fastest pace in human history.

Explore the Technology Breakthroughs

Built with over 80 billion transistors using a cutting-edge TSMC 4N process, Hopper features five groundbreaking innovations that fuel the NVIDIA H200 and H100 Tensor Core GPUs and combine to deliver incredible speedups over the prior generation on generative AI training and inference.

Transformer Engine

The NVIDIA Hopper architecture advances Tensor Core technology with the Transformer Engine, designed to accelerate the training of AI models. Hopper Tensor Cores can apply mixed FP8 and FP16 precision to dramatically accelerate AI calculations for transformers. Hopper also triples the floating-point operations per second (FLOPS) for TF32, FP64, FP16, and INT8 precisions over the prior generation. Combined with the Transformer Engine and fourth-generation NVIDIA® NVLink®, Hopper Tensor Cores power an order-of-magnitude speedup on HPC and AI workloads.
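As an illustration, here is a minimal sketch of FP8 mixed-precision execution using NVIDIA's open-source Transformer Engine Python library (transformer_engine). The layer sizes and recipe settings are illustrative assumptions, and API details may vary by release:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Illustrative FP8 recipe: the HYBRID format uses E4M3 for forward
# tensors and E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

# A Transformer Engine layer; the dimensions are arbitrary examples.
layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(2048, 4096, device="cuda", dtype=torch.float16)

# Inside this context, supported ops run on FP8 Tensor Cores on Hopper,
# while Transformer Engine manages per-tensor scaling to preserve accuracy.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)

y.sum().backward()  # gradients flow through the FP8-aware layer
```

The library tracks scaling factors per tensor, which is what lets FP8 speed up training without hand-tuned loss scaling.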

NVLink, NVSwitch, and NVLink Switch System

To move at the speed of business, exascale HPC and trillion-parameter AI models need high-speed, seamless communication between every GPU in a server cluster to accelerate at scale.

Fourth-generation NVLink can scale multi-GPU input and output (IO) with NVIDIA DGX™ and HGX™ servers at 900 gigabytes per second (GB/s) bidirectional per GPU, over 7X the bandwidth of PCIe Gen5 (a x16 Gen5 link provides roughly 128 GB/s bidirectional).

Third-generation NVIDIA NVSwitch™ supports Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ in-network computing, previously available only on InfiniBand, and provides a 2X increase in all-reduce throughput within servers of eight H200 or H100 GPUs compared to previous-generation A100 Tensor Core GPU systems.
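To make "all-reduce" concrete, here is a minimal multi-GPU sketch using PyTorch's NCCL backend (the framework choice is our assumption; the page does not prescribe one). NCCL rides on NVLink/NVSwitch transparently, and can use in-switch SHARP reductions where the hardware supports them:

```python
import torch
import torch.distributed as dist

def main() -> None:
    # One process per GPU; launch with: torchrun --nproc_per_node=8 allreduce.py
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # A stand-in for a gradient shard produced during training.
    grad = torch.full((1 << 24,), float(rank), device="cuda")

    # Sum the tensor across all GPUs. On SHARP-capable NVSwitch systems,
    # the reduction can execute in the switch fabric rather than on the GPUs.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    # Every rank now holds the same reduced result: 0 + 1 + ... + 7 = 28.
    if rank == 0:
        print(grad[0].item())

if __name__ == "__main__":
    main()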

DGX GH200 systems with the NVLink Switch System support clusters of up to 256 connected H200s and deliver 57.6 terabytes per second (TB/s) of all-to-all bandwidth.
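As a rough consistency check (our arithmetic and our assumption about how the figure is counted, not a derivation from NVIDIA), the 57.6 TB/s number matches the bisection bandwidth of a 256-GPU fabric where each GPU injects 450 GB/s per direction:

```python
# Back-of-the-envelope check of the 57.6 TB/s figure, assuming it is the
# bisection bandwidth of a 256-GPU NVLink fabric.
gpus = 256
per_gpu_one_direction_gbps = 900 / 2  # 900 GB/s bidirectional per GPU
bisection_tbps = (gpus / 2) * per_gpu_one_direction_gbps / 1000
print(bisection_tbps)  # -> 57.6
```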

NVIDIA Confidential Computing

While data is encrypted at rest in storage and in transit across the network, it’s unprotected while it’s being processed. NVIDIA Confidential Computing addresses this gap by protecting data and applications in use. The NVIDIA Hopper architecture introduces the world’s first accelerated computing platform with confidential computing capabilities.

With strong hardware-based security, users can run applications on-premises, in the cloud, or at the edge and be confident that unauthorized entities can’t view or modify the application code and data while it’s in use. This protects the confidentiality and integrity of data and applications while they access the unprecedented acceleration of H200 and H100 GPUs for AI training, AI inference, and HPC workloads.

Second-Generation MIG

With Multi-Instance GPU (MIG), a GPU can be partitioned into several smaller, fully isolated instances with their own memory, cache, and compute cores. The Hopper architecture further enhances MIG by supporting multi-tenant, multi-user configurations in virtualized environments across up to seven GPU instances, securely isolating each instance with confidential computing at the hardware and hypervisor level. Dedicated video decoders for each MIG instance deliver secure, high-throughput intelligent video analytics (IVA) on shared infrastructure. And with Hopper’s concurrent MIG profiling, administrators can monitor right-sized GPU acceleration and optimize resource allocation for users.
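For illustration, here is a minimal sketch that enumerates MIG instances with the pynvml bindings; instance creation itself is typically done separately (for example, via nvidia-smi), and the device index here is an assumption:

```python
from pynvml import (
    NVMLError, nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetMigMode, nvmlDeviceGetMaxMigDeviceCount,
    nvmlDeviceGetMigDeviceHandleByIndex, nvmlDeviceGetName,
)

nvmlInit()
try:
    gpu = nvmlDeviceGetHandleByIndex(0)  # assumed: first GPU is MIG-capable
    current_mode, _pending_mode = nvmlDeviceGetMigMode(gpu)
    if current_mode:  # 1 == MIG enabled
        for slot in range(nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = nvmlDeviceGetMigDeviceHandleByIndex(gpu, slot)
                print(slot, nvmlDeviceGetName(mig))
            except NVMLError:
                continue  # slot not populated with an instance
finally:
    nvmlShutdown()
```

Each enumerated handle refers to a fully isolated instance with its own memory, cache, and compute cores, so schedulers can treat it like a standalone GPU.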

Researchers with smaller workloads can use MIG to securely isolate a portion of a GPU, rather than renting a full CSP instance, while being assured that their data is secure at rest, in transit, and in use.

DPX Instructions

Dynamic programming is an algorithmic technique for solving a complex recursive problem by breaking it down into simpler subproblems. By storing the results of subproblems so they don’t have to be recomputed later, it reduces the time and computational complexity of problems that would otherwise be exponential. Dynamic programming is used across a broad range of applications. For example, Floyd-Warshall is a route-optimization algorithm that can be used to map the shortest routes for shipping and delivery fleets. The Smith-Waterman algorithm is used for DNA sequence alignment and protein folding applications.
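As a plain illustration of the technique (not the DPX-accelerated implementation), here is the Floyd-Warshall recurrence in Python; the tiny graph is made up for the example:

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths. dist[i][j] is the edge weight, math.inf if absent."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):           # allow node k as an intermediate hop
        for i in range(n):
            for j in range(n):
                # DP step: reuse the stored subproblem results d[i][k] and
                # d[k][j] instead of re-exploring every path from scratch.
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

inf = math.inf
graph = [
    [0,   5,   inf, 10],
    [inf, 0,   3,   inf],
    [inf, inf, 0,   1],
    [inf, inf, inf, 0],
]
print(floyd_warshall(graph)[0][3])  # -> 9, via the path 0 -> 1 -> 2 -> 3
```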

Hopper’s DPX instructions accelerate dynamic programming algorithms by 40X compared to traditional dual-socket CPU-only servers and by 7X compared to NVIDIA Ampere architecture GPUs, leading to dramatically faster time to solution in disease diagnosis, routing optimization, and graph analytics.

Preliminary specifications; may be subject to change. DPX instruction speedups compare an HGX H100 4-GPU system against a dual-socket 32-core Ice Lake CPU server.

Ready for Enterprise AI?

Enterprise adoption of AI is now mainstream, and organizations need end-to-end, AI-ready infrastructure that will accelerate them into this new era.

NVIDIA H100 Tensor Core GPUs for mainstream servers come with the NVIDIA AI Enterprise software suite, simplifying AI adoption while delivering the highest performance.

Take a deep dive into the NVIDIA Hopper architecture.