2017 in the eyes of Mellanox: wherever the data is, that's where the action is
Wang Keyue, 2017/1/6
2016 has just come to an end, and many vendors providing IT services to enterprises posted good results, as seen among the storage companies on the 2016 DOSTOR Billboard. Among them, Mellanox became an internet celebrity of 2016. And Mellanox really is "red" all across the network: in the world's Top500 supercomputers, in data center storage networks... Late last year, Gilad Shainer, Mellanox's Vice President of Global Marketing, made a special trip to China and talked about how this rising star will keep moving forward. What can we expect from Mellanox in 2017?

Gilad Shainer, Vice President, Global Marketing, Mellanox
Showing its strength in the global Top500
When it comes to the global Top500 supercomputer list, Mellanox is an unavoidable topic, and it is also where Mellanox can best show off. Gilad Shainer raised the subject before I could even ask.
"The TOP500 is the ranking of the world's supercomputers. Our analysis of the latest list shows that Mellanox networks connect 39% of all Top500 systems, which is a high share: 194 systems use our company's networks. It is worth pointing out that the Top500 does not contain only high-performance computing systems; many Web 2.0 and cloud computing companies are ranked as well. If we look only at the genuine high-performance computing systems, then 65% of all HPC systems in the Top500 use our InfiniBand," said Gilad Shainer.
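The quoted shares are easy to sanity-check. The script below is only illustrative arithmetic on the figures given in the interview (194 systems, ~39% overall, 65% of HPC-only systems); the count of HPC-only systems is an inference, not a figure from the article.

```python
# Sanity check of the Top500 shares quoted above.
top500_total = 500
mellanox_systems = 194  # systems using Mellanox networks, per the interview

overall_share = mellanox_systems / top500_total
print(f"Overall share: {overall_share:.1%}")  # 38.8%, quoted as ~39%

# If roughly the same set of systems accounts for 65% of the "real"
# HPC machines on the list (an assumption, not stated in the article),
# the implied number of HPC-only systems would be about:
implied_hpc_systems = mellanox_systems / 0.65
print(f"Implied HPC-only systems: {implied_hpc_systems:.0f}")
```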
As supercomputing applications spread, more and more people have come to realize that Internet+, big data, artificial intelligence, and the like all have supercomputing behind them. This has made supercomputing an increasingly popular topic, and Mellanox increasingly familiar to everyone. The deeper reason is that high-speed networks can extend, through machine learning and artificial intelligence, into the Internet of Things, autonomous driving, healthcare, manufacturing, retail, and many other industries, all of which will benefit from Mellanox's high-speed interconnects.
High-performance computing, once specialized in scientific computing, is now expanding into a broader market. Growing data volumes put pressure on the network first, and faster networks have become what more and more people pursue.
From "CPU as the core" to "data as the core"
Projected onto the data center, this pursuit first affects cloud-based data centers.
Cloud computing pools computing, networking, storage, and security resources, which means that data built on these resources will need to flow more frequently in future data centers. "The modern data center is moving toward a new architecture centered on data: the data is analyzed where it resides, rather than flowing to the CPU to be analyzed there," explains Gilad Shainer.
Gilad Shainer also noted that the benefits are obvious: "First, it greatly improves speed, since there is no need to wait for the CPU to finish its analysis. Second, it eliminates a lot of unnecessary, repeated data transmission. With data processed in the network, communication latency can drop from the twenty to thirty microseconds of today's traditional model to three or four microseconds, which is almost a ten-fold performance improvement."
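A quick bit of arithmetic on the quoted latency figures shows what "almost ten times" means at the extremes: comparing the best and worst ends of each range gives a speedup between 5x and 10x.

```python
# The latency figures quoted above: traditional CPU-mediated path
# ~20-30 microseconds vs in-network processing ~3-4 microseconds.
traditional_us = (20, 30)  # (best, worst) in the traditional model
offloaded_us = (3, 4)      # (best, worst) with in-network processing

worst_to_best = traditional_us[1] / offloaded_us[0]  # 30 / 3 = 10x
best_to_worst = traditional_us[0] / offloaded_us[1]  # 20 / 4 = 5x
print(f"Speedup range: {best_to_worst:.0f}x to {worst_to_best:.0f}x")
```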
To this end, Mellanox has been pursuing offloading: using network hardware to perform computation within the network itself, in the hope of helping users realize this new "data as the core" data center architecture.
In this regard, Mellanox does not seem overly concerned about Intel's efforts around the CPU.
The latest InfiniBand 200Gb/s HDR solution
Perhaps, CPU capabilities aside, Mellanox need not worry about Intel's efforts in networking either, given how Gilad Shainer describes the InfiniBand 200Gb/s HDR solution Mellanox already has in hand.
"At Mellanox, we keep developing intelligent networks that enable the network itself to process data. The latest products in this direction are our HDR InfiniBand 200Gb/s networking products, including switches and network cards: Quantum, our switch family and the world's first 200Gb/s switch, and ConnectX-6, the world's first 200Gb/s NIC, together delivering end-to-end 200G networking," said Gilad Shainer.
Mellanox ConnectX-6 adapters, Quantum switches, and LinkX cables and transceivers together form a complete 200Gb/s HDR InfiniBand interconnect infrastructure for the next generation of high-performance computing, machine learning, big data, cloud, Web 2.0, and storage platforms. The solution lets customers and users take full advantage of open-standards-based technology to maximize application performance and scalability while significantly reducing the data center's total cost of ownership. The Mellanox 200Gb/s HDR solution is expected to ship at scale in 2017.
Now that InfiniBand has reached 200Gb/s, one wonders what Intel's OPA makes of it.
In fact, DOSTOR has already reported the news that "a group of IT giants have come up with a new bus architecture, Gen-Z," and those giants include Mellanox. So even leaving Intel's feelings aside, whether Gen-Z will one day change InfiniBand's fate remains an open question.
For now, though, under the "data as the core" cloud framework, the InfiniBand 200Gb/s HDR solution is indeed a good choice.
Silicon photonics may be the next killer technology to emerge
At the same time, growing data center traffic is driving a shift from copper to optics. Data centers are currently moving to 40G and 100G modules, while copper circuits are approaching their transmission limit at around 50Gb/s, so replacing copper with optics at the chip level is inevitable. Cisco data further show that data center traffic in 2020 will reach five times today's level. For this reason, since the beginning of the 21st century, companies led by Intel and IBM, along with academic institutions, have focused on developing silicon photonic signal transmission technology, hoping that one day optical paths can replace the data circuits between chips.
The dream of the Internet of Everything makes silicon photonic interconnects even more urgent. The stronger a single chip's performance and the greater the number of interconnected chips, the more easily interconnect bandwidth becomes a performance bottleneck. Copper circuits are not only hard to push to higher bandwidth, but their power consumption and heat are also significant, so industry demand for silicon photonics has become imminent.
Following Intel, Mellanox has also successfully entered this market. At OFC 2016, it demonstrated new 50Gb/s silicon photonic modulators and detectors, which will serve as key components of future Mellanox LinkX 200Gb/s and 400Gb/s cables and transceivers.
The silicon photonics industry has already achieved some good results. As Gilad Shainer put it: "Mellanox is the first company to ship mature silicon photonic products, the world's first to commercialize very stable silicon photonics-based products."
However, today's silicon photonic interconnects are designed to move information between servers in cloud data centers rather than between smaller chips. That may be the bigger challenge Mellanox must truly come to grips with if it is to stand out in the future "data"-centered landscape.
Still, it has been observed that Intel and IBM share this headache, so Mellanox actually has a great opportunity.
This article is copyrighted by DOSTOR and may not be reproduced without permission. The article represents only the author's view; if you hold different views, you are welcome to follow the DOSTOR WeChat public account (ID: doitmedia) to discuss.
http://www.dostor.com/p/43187.html