11 November 2016

Mellanox announces first 200Gb/s data-center interconnect solutions

Mellanox Technologies Ltd of Sunnyvale, CA, USA and Yokneam, Israel (a supplier of end-to-end InfiniBand and Ethernet interconnect solutions and services for data-center servers and storage systems) has announced what it says are the first 200Gb/s data-center interconnect solutions.

ConnectX-6 adapters, Quantum switches and LinkX cables and transceivers together provide a complete 200Gb/s HDR InfiniBand interconnect infrastructure for the next generation of high-performance computing, machine learning, big data, cloud, Web 2.0 and storage platforms.

The ConnectX-6 adapters offer single- and dual-port 200Gb/s Virtual Protocol Interconnect (VPI) options, doubling the data speed of the previous generation. The adapters also support both the InfiniBand and Ethernet standard protocols, and provide the flexibility to connect with any CPU architecture – x86, GPU, POWER, ARM, FPGA and more. With what is claimed to be unprecedented performance of 200 million messages per second, ultra-low latency of 0.6 microseconds, and in-network computing engines such as MPI-Direct, RDMA, GPU-Direct, SR-IOV and data encryption, as well as the innovative Mellanox Multi-Host technology, ConnectX-6 will enable the most efficient compute and storage platforms in the industry, claims Mellanox.

The Quantum 200Gb/s HDR InfiniBand switch is claimed to be the fastest switch available, supporting 40 ports of 200Gb/s HDR InfiniBand or 80 ports of 100Gb/s InfiniBand connectivity for a total of 16Tb/s of switching capacity, with an extremely low latency of 90ns. Quantum advances the support of in-network computing technology, delivers optimized and flexible routing engines, and is the most scalable switch IC available, adds the firm. The Quantum IC will be the building block for multiple switch systems, from top-of-rack solutions with 40 ports of 200Gb/s or 80 ports of 100Gb/s, to modular switch systems with 800 ports of 200Gb/s or 1600 ports of 100Gb/s.
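
The quoted 16Tb/s figure follows from simple arithmetic if, as is common for switch silicon, capacity is counted in both directions: 40 ports at 200Gb/s is 8Tb/s per direction, or 16Tb/s bidirectionally, and 80 ports at 100Gb/s gives the same aggregate. The short Python sketch below is an illustrative check of those numbers, not Mellanox code; the bidirectional-counting assumption is ours.

    # Back-of-the-envelope check of the Quantum switch capacity figures,
    # assuming the quoted 16Tb/s counts traffic in both directions.
    ports_200g, rate_200g_gbps = 40, 200   # 40 ports of 200Gb/s HDR InfiniBand
    ports_100g, rate_100g_gbps = 80, 100   # 80 ports of 100Gb/s InfiniBand

    one_way_tbps = ports_200g * rate_200g_gbps / 1000   # 8.0 Tb/s per direction
    bidirectional_tbps = 2 * one_way_tbps                # 16.0 Tb/s, matching the quoted total

    # Both port configurations consume the same aggregate bandwidth.
    assert ports_200g * rate_200g_gbps == ports_100g * rate_100g_gbps

    print(one_way_tbps, "Tb/s per direction;", bidirectional_tbps, "Tb/s bidirectional")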

To complete the end-to-end 200Gb/s InfiniBand infrastructure, Mellanox LinkX solutions will offer a family of 200Gb/s copper and silicon photonics fiber cables.

The 200Gb/s HDR InfiniBand solutions enable users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data-center total cost of ownership.

"The ability to effectively utilize the exponential growth of data and to leverage data insights to gain that competitive advantage in real time is key for business success, homeland security, technology innovation, new research capabilities and beyond," says president & CEO Eyal Waldman. "The network is a critical enabler in today's system designs that will propel the most demanding applications and drive the next life-changing discoveries," he adds.

"Ten years ago, when Intersect360 Research began its business tracking the HPC market, InfiniBand had just become the predominant high-performance interconnect option for clusters, with Mellanox as the leading provider," comments Intersect360 Research's CEO Addison Snell. "Over time, InfiniBand continued to grow, and today it is the leading high-performance storage interconnect for HPC systems as well. This is at a time when high-data-rate applications like analytics and machine learning are expanding rapidly, increasing the need for high-bandwidth, low-latency interconnects into even more markets. HDR InfiniBand is a big leap forward," he adds.

"The leadership scale science and data analytics problems we are working to solve today and in the near future require very high bandwidth, linking compute nodes, storage and analytics systems into a single problem-solving environment," says Arthur Bland, OLCF project director, Oak Ridge National Laboratory (ORNL). "With HDR InfiniBand technology, we will have an open solution that allows us to link all of our systems at very high bandwidth," he adds.

"Data movement throughout the system is a critical aspect of current and future systems. Open network technology will be a key consideration as we plan the next generation of large-scale systems, including ones that will achieve Exascale performance," notes Bronis de Supinski, chief technology officer in Livermore Computing. "HDR InfiniBand solutions represent an important development in this technology space."

"HDR InfiniBand will provide us with the performance capabilities needed for our applications," adds Parks Fields, SSI team lead HPC-design at the Los Alamos National Laboratory.

The full suite of Mellanox's end-to-end high-performance InfiniBand and Ethernet solutions (including the new 200Gb/s HDR InfiniBand solutions) is on display in booth #2631 at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC16) in Salt Lake City, UT, USA (14-17 November). Mellanox's 200Gb/s HDR solutions will become generally available in 2017.

Tags: Mellanox silicon photonics AOC

Visit: http://sc16.supercomputing.org

Visit: www.mellanox.com/sc16
