MQM9700-NS2R 400G NDR InfiniBand Smart Switch – 64 Ports On-Board Subnet Manager C2P Airflow

Product Details:

Brand Name: Mellanox
Model Number: MQM9700-NS2R (920-9B210-00RN-0M2)
Documentation: MQM9700 series.pdf

Payment & Ordering:

Minimum Order Quantity: 1 piece
Price: Negotiable
Packaging Details: Outer carton
Delivery Time: Depends on stock availability
Payment Terms: T/T
Supply Ability: Supplied per project/batch
Contact us for the best price

Detailed Information

Model Number: MQM9700-NS2R (920-9B210-00RN-0M2)
Transmission Rate: 400G
Ports: 64
Technology: InfiniBand
Maximum Speed: NDR
Transport Package: Carton

รายละเอียดสินค้า

NVIDIA Quantum-2 MQM9700-NS2R 64-Port 400Gb/s Managed InfiniBand Switch with Reverse Airflow (C2P)
Fully Managed NDR Switch | On-Board Subnet Manager | C2P Reverse Airflow

Engineered for extreme-scale AI and HPC environments, the MQM9700-NS2R delivers 64 non-blocking ports of 400Gb/s InfiniBand in a compact 1U chassis. It features an integrated on-board Subnet Manager for up to 2,000 nodes, SHARPv3 in-network computing, and connector-to-power (C2P) reverse airflow for data centers with the cold aisle on the port side.

Product Overview

Hong Kong Starsurge Group presents the NVIDIA Quantum-2 MQM9700-NS2R — a high-performance, fully managed 400Gb/s InfiniBand switch with reverse airflow (connector-to-power). As part of the QM9700 series, this switch provides 32 OSFP ports supporting 64× 400Gb/s connections (or up to 128 ports at 200Gb/s via port-split technology). With a landmark 51.2 terabits per second bidirectional throughput and over 66.5 billion packets per second capacity, the MQM9700-NS2R includes an on-board Subnet Manager that enables simple out-of-the-box bring-up for up to 2,000 nodes, making it ideal for research institutions, finance, manufacturing, and enterprise AI clusters demanding low latency, lossless fabric, and simplified management.
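The headline throughput follows directly from the port count and line rate; a quick arithmetic check (an illustration, not vendor tooling):

```python
# Sanity check on the MQM9700-NS2R headline figures.
ports = 64            # NDR InfiniBand ports
port_speed_gbps = 400 # per-port line rate, Gb/s

unidirectional_gbps = ports * port_speed_gbps      # 25,600 Gb/s = 25.6 Tb/s one way
bidirectional_tbps = unidirectional_gbps * 2 / 1000

print(bidirectional_tbps)  # 51.2, matching the quoted aggregate throughput
```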

Key Features
  • Unprecedented Port Density: 64 ports of 400Gb/s NDR InfiniBand in 1U, non-blocking architecture.
  • Double-Density 200Gb/s Mode: Supports up to 128 ports of 200Gb/s using NVIDIA port-split technology, reducing TCO for leaf-spine topologies.
  • Integrated Subnet Management: On-board Subnet Manager manages up to 2,000 nodes out-of-the-box; accessible via CLI, WebUI, SNMP, or JSON interfaces.
  • In-Network Computing: SHARPv3 (Scalable Hierarchical Aggregation and Reduction Protocol) delivers 32x higher AI acceleration compared to prior generation.
  • Advanced Fabric Services: RDMA, adaptive routing, congestion control, enhanced VL mapping, and self-healing network capabilities.
  • Reverse Airflow (C2P): Connector-to-Power airflow direction ideal for data center layouts with cold aisle on the port side.
  • Redundancy & Reliability: 1+1 redundant hot-swappable power supplies, hot-swappable fan units.
  • Backward Compatible: Supports previous InfiniBand generations (HDR, EDR, FDR).
Technology: NVIDIA Quantum-2 Platform with On-Board Intelligence

Built on the NVIDIA Quantum-2 ASIC, the MQM9700-NS2R leverages state-of-the-art 400Gb/s SerDes technology. It incorporates Remote Direct Memory Access (RDMA) for low CPU overhead and high throughput, adaptive routing to avoid fabric hotspots, and NVIDIA SHARPv3 for in-network reductions, dramatically accelerating collective operations in MPI and AI frameworks. The integrated Subnet Manager runs MLNX-OS software, providing full chassis management through multiple interfaces (CLI, WebUI, SNMP, JSON), reducing operational complexity and enabling rapid deployment.

With support for multiple topologies — including Fat Tree, SlimFly, DragonFly+, and multi-dimensional Torus — the MQM9700-NS2R enables architects to build cost-effective, highly resilient networks for next-generation supercomputing while maintaining simple management.
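To make the topology point concrete, here is a rough sizing sketch for the most common case, a non-blocking two-tier fat tree built from radix-64 switches (a textbook illustration under standard fat-tree assumptions, not an NVIDIA sizing tool): each leaf devotes half its ports to nodes and half to spines, giving radix²/2 end nodes.

```python
def two_tier_fat_tree(radix):
    """Rough sizing of a non-blocking two-tier fat tree built
    from switches with the given port count (radix)."""
    spines = radix // 2            # each leaf needs radix/2 uplinks, one per spine
    leaves = radix                 # each spine has one port per leaf
    nodes = leaves * (radix // 2)  # radix/2 node-facing ports per leaf
    return leaves, spines, nodes

leaves, spines, nodes = two_tier_fat_tree(64)
print(leaves, spines, nodes)  # 64 32 2048
```

Note that 2,048 end nodes slightly exceeds the 2,000-node limit of the on-board Subnet Manager, so a fully built-out two-tier fabric of this class would call for external UFM management, consistent with the guidance elsewhere in this document.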

Typical Deployments
  • AI & Machine Learning Clusters: High-speed GPU interconnects for large language model training and inference with integrated subnet management.
  • High-Performance Computing (HPC): Research simulations, weather modeling, genomics, and quantum computing labs requiring self-healing fabrics.
  • Hyperscale Data Centers: Spine-leaf architectures with massive east-west traffic and reverse airflow compatibility.
  • Enterprise Cloud & Finance: Low-latency trading systems and real-time analytics where managed simplicity matters.
  • Government & Education: National labs, university supercomputing centers needing deterministic performance and easy out-of-box setup.
Compatibility

Fully interoperable with NVIDIA InfiniBand ecosystem: ConnectX-6/7 adapters, LinkX cables, and Quantum-2 series switches. Backwards compatible with HDR, EDR, and FDR devices. Supports major Linux distributions (RHEL, Ubuntu, Rocky Linux) and Windows Server with appropriate drivers. The on-board Subnet Manager is compatible with NVIDIA UFM for enhanced telemetry and larger fabrics.

Technical Specifications
Parameter Specification (MQM9700-NS2R)
Ports 32 OSFP connectors supporting 64 ports of 400Gb/s InfiniBand (NDR) or 128 ports of 200Gb/s
Aggregate Throughput 51.2 Tb/s bidirectional, non-blocking
Packet Forwarding Capacity Over 66.5 billion packets per second (BPPS)
Management Type Fully managed with on-board Subnet Manager (supports up to 2,000 nodes)
CPU & Memory x86 Coffee Lake i3, 8GB DDR4 SO-DIMM (2666 MT/s), 16GB M.2 SSD
Power Supply 1+1 redundant, hot-swappable, 200-240Vac, 80 Plus Gold, ENERGY STAR certified
Cooling / Airflow Reverse airflow: Connector-to-Power (C2P), hot-swappable fan units
Dimensions (HxWxD) 1.7 in (43.6mm) x 17.0 in (438mm) x 26.0 in (660.4mm)
Weight Approx. 14.5 kg
Operating Temperature 0°C to 40°C
Humidity (Operating) 10% to 85% non-condensing
Altitude Up to 3050m
Regulatory Compliance CE, FCC, VCCI, ICES, RCM, RoHS, CB, cTUVus, CU
Warranty 1 year standard (extended options available)
Selection Guide: QM9700 Series & Airflow Options
Orderable Part Number (OPN) Description Airflow Direction Management
MQM9700-NS2F 64 ports 400Gb/s, managed P2C (power-to-connector / forward) On-board Subnet Manager
MQM9700-NS2R 64 ports 400Gb/s, managed C2P (connector-to-power / reverse) On-board Subnet Manager
MQM9790-NS2F 64 ports 400Gb/s, unmanaged P2C (forward) External Subnet Manager required
MQM9790-NS2R 64 ports 400Gb/s, unmanaged C2P (reverse) External Subnet Manager required

For C2P airflow (connector-to-power), cold air enters from the OSFP connector side and exhausts through the power supply side. Verify your data center cooling layout before ordering. MQM9700-NS2R is the ideal managed choice for racks requiring reverse airflow.

Advantages of Choosing MQM9700-NS2R
  • Integrated Management: On-board Subnet Manager reduces external dependencies, lowers TCO for clusters up to 2,000 nodes.
  • Reverse Airflow Flexibility: C2P cooling matches specific data center hot/cold aisle configurations, improving thermal efficiency.
  • Highest Radix in 1U: 64× 400Gb/s ports minimize switch tiers, lower latency, and simplify cabling.
  • Energy Efficient: 80 Plus Gold power supplies and smart fan control reduce OPEX.
  • Future-Ready Scalability: Support for SHARPv3 and adaptive routing ensures investment protection for next-gen AI frameworks.
  • Easy Provisioning: WebUI, CLI, SNMP, and JSON interfaces enable automation and rapid deployment.
Service & Support

Hong Kong Starsurge Group provides end-to-end support including pre-sales consulting, integration services, and global logistics. Our technical team offers deployment assistance, RMA services, and extended warranty options. For volume orders, we deliver customized cabling and configuration validation. Multilingual support available for APAC, EMEA, and Americas regions.

Frequently Asked Questions (FAQ)
What is the difference between MQM9700-NS2R and MQM9700-NS2F?
The only difference is airflow direction: NS2R has C2P (connector-to-power, reverse airflow) while NS2F has P2C (power-to-connector, forward airflow). Both are fully managed with identical performance and port configurations.
How many nodes can the on-board Subnet Manager handle?
The integrated Subnet Manager supports up to 2,000 nodes out-of-the-box. For larger fabrics, you can integrate an external NVIDIA UFM (Unified Fabric Manager) to scale further while retaining the switch's managed capabilities.
Does it support 200Gb/s port splitting?
Yes, NVIDIA port-split technology allows each 400Gb/s port to operate as two 200Gb/s ports, effectively delivering up to 128 ports of 200Gb/s.
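The arithmetic behind the split, as a simple illustration of the figures above:

```python
osfp_cages = 32              # physical OSFP connectors on the front panel
ndr_ports = osfp_cages * 2   # each cage carries two 400Gb/s NDR ports
split_ports = ndr_ports * 2  # each 400G port can run as two 200G ports

# Splitting changes port count, not total fabric bandwidth (Gb/s, one direction).
assert ndr_ports * 400 == split_ports * 200 == 25600

print(ndr_ports, split_ports)  # 64 128
```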
Which cables are compatible with the reverse-airflow switch?
OSFP connectors: passive/active copper cables, active fiber cables, and optical modules. Airflow direction does not affect cable compatibility.
Is this switch backward compatible with HDR (200G) InfiniBand?
Yes, the Quantum-2 series supports HDR, EDR, and FDR speeds with appropriate cables and adapters, ensuring smooth migration.
What management interfaces are available?
CLI, web-based UI (WebUI), SNMP v1/v2/v3, and JSON-RPC interfaces. Full MLNX-OS software suite included.
Important Notices & Precautions
• Verify C2P airflow orientation matches your rack: cold aisle must be on the connector (OSFP) side.
• Use only NVIDIA-qualified optical modules and cables to maintain signal integrity and compliance.
• Operating altitude up to 3050m; temperature not to exceed 40°C.
• For initial setup, ensure management ports (RJ45/USB) are accessible.
• Some advanced features may require software license; consult Starsurge sales team.
• Specifications not publicly confirmed by NVIDIA are provided as supplied by the vendor; please confirm before ordering.
About Hong Kong Starsurge Group Co., Limited

Founded in 2008, Starsurge is a technology-driven provider of network hardware, IT services, and system integration solutions. We serve government, healthcare, manufacturing, education, finance, and enterprise clients worldwide. With an experienced technical and sales team, Starsurge delivers tailored networking infrastructure — including IoT solutions, custom software development, and global logistics. Our customer-first approach ensures reliable quality, responsive service, and scalable network designs. As an authorized channel partner for leading brands, we bridge cutting-edge technology with real-world deployment success.

Key Facts at a Glance
  • 400Gb/s ports: 64
  • Aggregate throughput: 51.2 Tb/s (bidirectional)
  • Rack height: 1U
  • In-network AI acceleration: SHARPv3
  • Airflow: C2P (reverse)
  • Nodes managed (on-board SM): 2,000
Compatibility Matrix (Verified)
• NVIDIA ConnectX-6 / ConnectX-7 InfiniBand adapters
• NVIDIA Quantum-2 QM9700/9790 series switches
• LinkX OSFP cables (passive copper up to 3m, active optical up to 500m)
• Software: MLNX-OS (on-board), UFM for enhanced management
• Operating systems: RHEL 8/9, Ubuntu 20.04/22.04, Windows Server 2022 with Mellanox WinOF-2
• Subnet Manager compatibility: OpenSM, UFM, native on-board SM
Buyer Checklist – MQM9700-NS2R (Managed, Reverse Airflow)
✓ Verify required airflow direction: C2P (connector-to-power) matches your rack's cold aisle placement.
✓ On-board Subnet Manager handles up to 2,000 nodes; for larger fabrics, plan for external UFM.
✓ Plan cabling: 400G OSFP to OSFP or split to 2x200G.
✓ Confirm power input: 200-240Vac with redundant feeds.
✓ For large scale deployments, request a topology design review from Starsurge engineers.
Related Guides & Resources
  • Whitepaper: "Scaling AI Fabrics with NVIDIA Quantum-2 Managed Switches" (available on request)
  • Deployment Guide: Fat Tree vs. DragonFly+ with QM9700 Series
  • Airflow Best Practices: Choosing P2C vs C2P for Data Center Cooling
  • Compatibility List: NVIDIA Certified OSFP Cables for NDR
