NVIDIA ConnectX InfiniBand Adapters

Enhancing Top Supercomputers and Clouds

Leveraging faster speeds and innovative In-Network Computing, NVIDIA® ConnectX® InfiniBand smart adapters achieve extreme performance and scale. NVIDIA ConnectX lowers cost per operation, increasing ROI for high-performance computing (HPC), machine learning (ML), advanced storage, clustered databases, low-latency embedded I/O applications, and more.

Products

ConnectX-8

The ConnectX-8 InfiniBand SuperNIC provides 800 gigabits per second (Gb/s) of data throughput with support for NVIDIA In-Network Computing acceleration engines to deliver the performance and robust feature set needed to power trillion-parameter-scale AI factories and scientific computing workloads.


ConnectX-7

The ConnectX-7 smart host channel adapter (HCA), featuring the NVIDIA Quantum-2 InfiniBand architecture, provides the highest networking performance available. To take on the world’s most challenging workloads, ConnectX-7 provides ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing acceleration engines. ConnectX-7 delivers the scalability and feature-rich technology needed for supercomputers, artificial intelligence, and hyperscale cloud data centers.


ConnectX-6

The ConnectX-6 smart host channel adapter (HCA), featuring the NVIDIA Quantum InfiniBand architecture, delivers high performance and NVIDIA In-Network Computing acceleration engines for maximizing efficiency in HPC, artificial intelligence, cloud, hyperscale, and storage platforms.


ConnectX-5

The ConnectX-5 smart host channel adapter (HCA) with intelligent acceleration engines enhances HPC, ML, and data analytics, as well as cloud and storage platforms. With support for two ports of 100Gb/s InfiniBand and Ethernet network connectivity, PCIe Gen3 and Gen4 server connectivity, very high message rates, PCIe switches, and NVMe over Fabrics offloads, ConnectX-5 is a high-performance and cost-effective solution for a wide range of applications and markets.

ConnectX-4 VPI EDR/100GbE

ConnectX-4 Virtual Protocol Interconnect (VPI) smart adapters support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity. Providing data centers with high-performance, flexible solutions for HPC, cloud, database, and storage platforms, ConnectX-4 smart adapters combine 100Gb/s bandwidth in a single port with the lowest available latency, 150 million messages per second, and application hardware offloads.
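As a rough back-of-the-envelope sketch (not an official NVIDIA calculation), the two ConnectX-4 figures quoted above, 100Gb/s of bandwidth and 150 million messages per second, can be related to an implied average message size at line rate. This assumes decimal units (1 Gb = 10^9 bits) and ignores InfiniBand protocol overhead:

```python
# Relate the quoted ConnectX-4 figures to an average message size.
# Assumptions: decimal units (1 Gb = 1e9 bits), no protocol overhead.

LINE_RATE_GBPS = 100          # Gb/s per port (quoted above)
MSG_RATE = 150_000_000        # messages per second (quoted above)

bytes_per_sec = LINE_RATE_GBPS * 1e9 / 8       # 12.5 GB/s
avg_msg_bytes = bytes_per_sec / MSG_RATE       # bytes per message

print(f"~{avg_msg_bytes:.0f} bytes per message at line rate")  # → ~83 bytes
```

In other words, the adapter can sustain its full message rate even with very small payloads, which is why message rate, not just bandwidth, matters for latency-sensitive HPC and ML traffic.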

ConnectX-3 Pro VPI FDR and 40/56GbE

ConnectX-3 Pro smart adapters with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for Overlay Networks ("Tunneling"). ConnectX-3 Pro provides great performance and flexibility for PCI Express Gen3 servers deployed in public and private clouds, enterprise data centers, and high-performance computing.


OCP Adapters

Open Compute Project (OCP) defines a mezzanine form factor that features best-in-class efficiency to enable the highest data center performance.

Multi-Host Solutions

The innovative NVIDIA Multi-Host® technology allows multiple compute or storage hosts to connect to a single adapter.

Socket-Direct Adapters

NVIDIA Socket Direct® technology enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus.

Resources

Your Next Steps

Configure Your Cluster

Take Networking Courses

Ready to Purchase?

Talk to an NVIDIA product specialist about your professional needs.