ConnectX-6 Dx EN adapter card, 100GbE, Single-port QSFP56, PCIe 4.0 x16, No Crypto, Tall Bracket
As the world's most advanced cloud SmartNIC, ConnectX-6 Dx provides up to two ports of 25, 50, or 100Gb/s, or a single port of 200Gb/s, Ethernet connectivity, powered by 50Gb/s PAM4 SerDes technology and PCIe Gen 4.0 host connectivity.
ConnectX-6 Dx continues NVIDIA's path of innovation in scalable cloud fabrics, delivering unparalleled performance and efficiency at every scale. ConnectX-6 Dx's innovative hardware offload engines, including IPsec and TLS inline data-in-motion encryption, are ideally suited to deliver secure network connectivity in modern data center environments.
NVIDIA® ConnectX®-6 Dx is a highly secure and advanced smart network interface card (SmartNIC) that accelerates mission-critical cloud and data center applications, including security, virtualization, SDN/NFV, big data, machine learning, and storage. ConnectX-6 Dx provides up to two ports of 100Gb/s or a single port of 200Gb/s Ethernet connectivity and is powered by 50Gb/s (PAM4) or 25/10Gb/s (NRZ) SerDes technology.
ConnectX-6 Dx features virtual switch (vSwitch) and virtual router (vRouter) hardware accelerations delivering orders-of-magnitude higher performance than software-based solutions. ConnectX-6 Dx supports a choice of single-root I/O virtualization (SR-IOV) and VirtIO in hardware, enabling customers to best address their application needs. By offloading cloud networking workloads, ConnectX-6 Dx frees up CPU cores for business applications while reducing total cost of ownership.
In an era where data privacy is key, ConnectX-6 Dx provides built-in inline encryption/decryption, stateful packet filtering, and other capabilities, bringing advanced security down to every node with unprecedented performance and scalability.
Built on the solid foundation of NVIDIA’s ConnectX line of SmartNICs, ConnectX-6 Dx offers best-in-class RDMA over Converged Ethernet (RoCE) capabilities, enabling scalable, resilient, and easy-to-deploy RoCE solutions. For data storage, ConnectX-6 Dx optimizes a suite of storage accelerations, bringing NVMe-oF target and initiator offloads.
Network Interface
> Dual ports of 10/25/40/50/100GbE, or a single port of 200GbE
Host Interface
> 16 lanes of PCIe Gen4, compatible with PCIe Gen2/Gen3
> Integrated PCI switch
> NVIDIA Multi-Host and NVIDIA Socket Direct™
Virtualization/Cloud Native
> SR-IOV and VirtIO acceleration (see the SR-IOV sketch after this list)
> Up to 1K virtual functions per port
> 8 physical functions
> Support for tunneling
> Encap/decap of VXLAN, NVGRE, Geneve, and more
> Stateless offloads for overlay tunnels
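As a minimal illustration of the SR-IOV feature listed above, the sketch below creates virtual functions through the standard Linux sysfs interface. The interface name eth0 and the VF count of 8 are placeholders; the sriov_totalvfs and sriov_numvfs files are the generic Linux PCI SR-IOV ABI rather than anything ConnectX-specific.

```c
/* Sketch: create SR-IOV virtual functions via the standard Linux sysfs ABI.
 * "eth0" and the VF count of 8 are placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *ifname = "eth0";   /* placeholder interface name */
    char path[256];
    FILE *f;
    int total = 0;

    /* How many VFs does the device support? */
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_totalvfs", ifname);
    f = fopen(path, "r");
    if (!f || fscanf(f, "%d", &total) != 1) {
        perror("sriov_totalvfs");
        return EXIT_FAILURE;
    }
    fclose(f);
    printf("%s supports up to %d VFs\n", ifname, total);

    /* Create 8 VFs (must not exceed sriov_totalvfs). */
    snprintf(path, sizeof(path),
             "/sys/class/net/%s/device/sriov_numvfs", ifname);
    f = fopen(path, "w");
    if (!f || fprintf(f, "8") < 0) {
        perror("sriov_numvfs");
        return EXIT_FAILURE;
    }
    return fclose(f) ? EXIT_FAILURE : EXIT_SUCCESS;
}
```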
NVIDIA ASAP² Accelerated Switching & Packet Processing
> SDN acceleration for:
> Bare metal
> Virtualization
> Containers
> Full hardware offload for OVS data plane
> Flow update through RTE_Flow or TC_Flower (see the rte_flow sketch after this list)
> Flex-parser: user-defined classification
> Hardware offload for:
> Connection tracking (Layer 4 firewall)
> NAT
> Header rewrite
> Mirroring
> Sampling
> Flow aging
> Hierarchical QoS
> Flow-based statistics
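As a sketch of the RTE_Flow update path listed above, the snippet below uses DPDK's public rte_flow API to install a rule steering one exact-match destination IPv4 flow to a receive queue; with a capable PMD such as mlx5, such rules can be offloaded to NIC hardware. The port ID, queue index, and address are placeholders, and EAL/port initialization is assumed to have happened elsewhere.

```c
/* Sketch: install a hardware-offloadable steering rule with DPDK rte_flow.
 * Port ID, queue index, and destination address are placeholders. */
#include <rte_flow.h>
#include <rte_byteorder.h>

struct rte_flow *steer_dst_ip_to_queue(uint16_t port_id, uint16_t queue)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    /* Exact match on destination IPv4 198.51.100.7 (placeholder). */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = RTE_BE32(0xC6336407),
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.dst_addr = RTE_BE32(0xFFFFFFFF),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },                /* any Ethernet */
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue q = { .index = queue };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error err;

    /* Validate, then create; the PMD decides whether the rule runs
     * in NIC hardware or in a software fallback path. */
    if (rte_flow_validate(port_id, &attr, pattern, actions, &err))
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}
```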
Cybersecurity
> Inline hardware IPsec encryption and decryption
> AES-GCM 128/256-bit key
> RoCE over IPsec
> Inline hardware TLS encryption and decryption (see the kTLS sketch after this list)
> AES-GCM 128/256-bit key
> Data-at-rest AES-XTS encryption and decryption
> AES-XTS 256/512-bit key
> Platform security
> Hardware root-of-trust
> Secure firmware update
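One standard way to exercise inline TLS offload from Linux is kernel TLS (kTLS): user space performs the handshake, then installs the negotiated AES-GCM record keys into the socket so the kernel, and where supported the NIC, encrypts subsequent sends. A minimal sketch, assuming a connected TCP socket and handshake-derived key material (all key values are caller-supplied placeholders):

```c
/* Sketch: enable kernel TLS (kTLS) transmit crypto on a connected socket.
 * Key material comes from a TLS handshake completed in user space;
 * every value here is a caller-supplied placeholder. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/tls.h>

#ifndef SOL_TLS
#define SOL_TLS 282            /* socket level for kTLS options */
#endif
#ifndef TCP_ULP
#define TCP_ULP 31             /* attach an upper-layer protocol */
#endif

int enable_ktls_tx(int fd, const unsigned char key[16],
                   const unsigned char iv[8], const unsigned char salt[4],
                   const unsigned char rec_seq[8])
{
    struct tls12_crypto_info_aes_gcm_128 ci = { 0 };

    /* Attach the "tls" ULP, then install the TX key material. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
        return -1;

    ci.info.version = TLS_1_2_VERSION;
    ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
    memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
    memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
    memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
    memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

    /* From here on, plain send() data is encrypted as TLS records by
     * the kernel, which may offload the crypto to capable hardware. */
    return setsockopt(fd, SOL_TLS, TLS_TX, &ci, sizeof(ci));
}
```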
Stateless Offloads
> TCP/UDP/IP stateless offload
> LSO, LRO, checksum offload (see the ethtool sketch after this list)
> Receive-side scaling (RSS), including on encapsulated packets
> Transmit side scaling (TSS)
> VLAN and MPLS tag insertion/stripping
> Receive flow steering
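Stateless offloads such as TSO/LSO are toggled per interface from the host. A minimal sketch using the legacy ethtool ioctl to query and enable TSO; the interface name eth0 is a placeholder:

```c
/* Sketch: query and enable TCP segmentation offload with the legacy
 * ethtool ioctl. "eth0" is a placeholder interface name. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&ev;

    if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr)) {
        perror("ETHTOOL_GTSO");
        return 1;
    }
    printf("TSO is %s\n", ev.data ? "on" : "off");

    /* Try to enable it; the driver rejects unsupported requests. */
    ev.cmd = ETHTOOL_STSO;
    ev.data = 1;
    if (ioctl(fd, SIOCETHTOOL, &ifr))
        perror("ETHTOOL_STSO");
    close(fd);
    return 0;
}
```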
Storage Offloads
> Block-level encryption: XTS-AES 256/512-bit key
> NVMe over Fabrics offloads for target machine
> T10 DIF signature handover operation at wire speed, for ingress and egress traffic
> Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF
Advanced Timing and Synchronization
> Advanced PTP
> IEEE 1588v2 (any profile)
> PTP hardware clock (PHC) (UTC format); a PHC read sketch follows this list
> Nanosecond-level accuracy
> Line rate hardware timestamp (UTC format)
> PPS in and configurable PPS out
> Time-triggered scheduling
> PTP-based packet pacing
> Time-based SDN acceleration (ASAP²)
> Time-sensitive networking (TSN)
> Dedicated precision timing card option
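On Linux, the PTP hardware clock is exposed as a dynamic POSIX clock. A minimal sketch that reads the PHC directly; /dev/ptp0 is a placeholder, and the correct device index for a given interface is reported by `ethtool -T`:

```c
/* Sketch: read the NIC's PTP hardware clock (PHC) as a dynamic POSIX
 * clock. "/dev/ptp0" is a placeholder; find the right index with
 * `ethtool -T <ifname>`. */
#include <stdio.h>
#include <fcntl.h>
#include <time.h>
#include <unistd.h>

/* Standard dynamic-clock encoding (see the kernel PTP documentation). */
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | 3)

int main(void)
{
    struct timespec ts;
    int fd = open("/dev/ptp0", O_RDONLY);

    if (fd < 0 || clock_gettime(FD_TO_CLOCKID(fd), &ts)) {
        perror("phc");
        return 1;
    }
    printf("PHC time: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    close(fd);
    return 0;
}
```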
RDMA over Converged Ethernet (RoCE)
> RoCE v1/v2 (see the verbs sketch after this list)
> Zero-touch RoCE: no ECN, no PFC
> RoCE over overlay networks
> Selective repeat
> Programmable congestion control interface
> GPUDirect®
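RoCE endpoints are reached through the standard RDMA verbs API. A minimal sketch using libibverbs (link with -libverbs) to enumerate RDMA devices and print a GID; for RoCE v2 ports, GIDs correspond to the IP addresses used on the wire. Port number 1 and GID index 0 are placeholder choices:

```c
/* Sketch: enumerate RDMA devices and print a GID with libibverbs
 * (link with -libverbs). Port 1 / GID index 0 are placeholders. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n;
    struct ibv_device **list = ibv_get_device_list(&n);

    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        union ibv_gid gid;

        if (!ctx)
            continue;
        /* For RoCE v2 ports, GIDs correspond to IP addresses. */
        if (!ibv_query_gid(ctx, 1, 0, &gid)) {
            printf("%s: GID[0] =", ibv_get_device_name(list[i]));
            for (int b = 0; b < 16; b += 2)
                printf(" %02x%02x", gid.raw[b], gid.raw[b + 1]);
            printf("\n");
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```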
Management and Control
> NC-SI, MCTP over SMBus, and MCTP over PCIe—Baseboard Management Controller interface
> NC-SI over RBT in Open Compute Project (OCP) 2.0/3.0 cards
> PLDM for Monitor and Control DSP0248
> PLDM for Firmware Update DSP0267
> I2C interface for device control and configuration
Remote Boot
> Remote boot over Ethernet
> Remote boot over iSCSI
> UEFI and PXE support for x86 and Arm servers