Technology Stocks : Mellanox Technologies, Ltd.

From: PaulAquino 11/1/2016 8:57:59 AM
 

Mellanox Multi-Host Technology Reshapes Data Center Economics


Last update: 01/11/2016 8:30:00 am

OEMs' Adoption of the New Technology for InfiniBand and Ethernet Platforms Accelerates
SUNNYVALE, Calif. & YOKNEAM, Israel--(BUSINESS WIRE)--November 01, 2016--

Mellanox(R) Technologies, Ltd. (NASDAQ:MLNX), a leading supplier of end-to-end interconnect solutions for data center servers and storage systems, announced today that its innovative Multi-Host interconnect technology has been adopted by multiple OEMs. Multi-Host technology improves data center economics by allowing OEMs to build scale-out heterogeneous compute and storage platforms with direct connectivity between multiple compute elements, storage elements and the network. According to OEM benchmark results, Mellanox Multi-Host improves data center performance by 150 percent while reducing data center costs by nearly 30 percent.

"We have adopted Mellanox's innovative Multi-Host EDR 100Gb/s InfiniBand technology within Sugon 'M-Pro' servers," said Cao Zhennan, General Manager of the High-Performance Computing Department at Sugon. "Multi-Host can balance the communication performance of processes on all CPUs and reduce network costs through network sharing. Together with our high-density blade technology, the system delivers higher application performance at lower cost."

Multi-Host technology enables multiple compute or storage hosts to direct-connect into a single interconnect adapter. This improves the data center total cost of ownership by reducing the number of cables, network adapters and switches by 4X. The technology is available today in the company's line of ConnectX-4 and ConnectX-5 10/25/40/50/56/100 Gigabit InfiniBand and Ethernet adapters.
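The 4X reduction claimed above can be made concrete with a small back-of-the-envelope sketch. The figures below (64 hosts per rack, 4 hosts sharing one adapter, 32 ports per switch) are illustrative assumptions, not numbers from the release:

```python
# Illustrative sketch (assumed figures, not from the press release):
# component counts for a 64-host rack, with and without Multi-Host,
# assuming 4 hosts share one adapter and adapters, cables, and switch
# ports scale down together.
HOSTS = 64
HOSTS_PER_ADAPTER = 4   # Multi-Host sharing factor (assumption)
SWITCH_PORTS = 32       # ports per top-of-rack switch (assumption)

def components(hosts, hosts_per_adapter):
    adapters = hosts // hosts_per_adapter    # one shared NIC per host group
    cables = adapters                        # one uplink cable per adapter
    switches = -(-adapters // SWITCH_PORTS)  # ceiling division
    return adapters, cables, switches

traditional = components(HOSTS, 1)                  # one adapter per host
multi_host = components(HOSTS, HOSTS_PER_ADAPTER)   # shared adapters
print(traditional)  # (64, 64, 2)
print(multi_host)   # (16, 16, 1)
```

With a sharing factor of 4, adapter and cable counts drop 4X, which is where the stated reduction in interconnect hardware comes from.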

ASRock Rack, a specialist in cloud computing server hardware, designed its 3U16N blade server with four built-in Mellanox ConnectX(R)-4 Lx Multi-Host adapters, enabling 16 nodes to share up to 200GbE of total network bandwidth. The 3U16N achieves the highest micro-server density while maintaining high performance with lower power consumption.
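The bandwidth-sharing arithmetic in the 3U16N design can be sketched as follows. The 50GbE per-adapter figure is an assumption chosen so that four adapters sum to the 200GbE total quoted above:

```python
# Illustrative sketch (assumed per-adapter rate): how 16 nodes in a
# 3U16N chassis share bandwidth through four Multi-Host adapters.
ADAPTERS = 4
NODES_PER_ADAPTER = 4
ADAPTER_GBE = 50  # assumed line rate; 4 x 50GbE = 200GbE chassis total

total_gbe = ADAPTERS * ADAPTER_GBE
nodes = ADAPTERS * NODES_PER_ADAPTER
fair_share = ADAPTER_GBE / NODES_PER_ADAPTER  # when all nodes are busy
print(total_gbe, nodes, fair_share)  # 200 16 12.5
```

When only one node on an adapter is active it can burst toward the full adapter rate, which is the efficiency argument for sharing rather than provisioning a dedicated NIC per node.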

"For applications with high performance requirements such as in high-performance Cloud, and Big Data, the data center infrastructure needs to eliminate performance bottlenecks by reducing communication overhead, while maintaining a high cost/performance ratio," said Weishi Sa, General Manager of ASRock Rack. "Mellanox Technologies' Multi-Host technology provides us with the ideal solution for our Ethernet connectivity need. Due to the lower total cost of ownership and high performance network solution enabled by this technology, we'll be able to provide world leading solutions for our customers."

In addition, the Multi-Host technology has been accepted as a server network standard by Facebook's Open Compute Project (OCP), to enable the delivery of the most efficient server, storage and data center hardware platforms.

"Data center technology is evolving into a new age, in which the network interconnect technology is playing an increasingly important role," said Gilad Shainer, vice president of marketing, Mellanox Technologies. "We are pleased to see our innovative Multi-Host technology has helped Sugon and ASRock Rack establish unparalleled high-performance server solutions. The solutions based on Multi-Host establish a foundation for building next-generation computing and storage data centers."