This post is an update of an article that originally appeared on the Dancing Dinosaur blog.
The next generation of mainframe computing will be more open, cooperative, and faster, judging from the remarkably similar makeup of three open consortia: CCIX, Gen-Z and OpenCAPI.
CCIX allows processors based on different instruction sets to extend their cache coherency to accelerators, interconnects and I/O.
OpenCAPI enables organizations to connect accelerators and I/O devices using virtual addressing, which eliminates the software overhead associated with conventional I/O, and to attach advanced memory technologies. The focus of OpenCAPI is on attached devices, primarily within a server.
Gen-Z, announced around the same time, is a new data access technology that primarily enables read and write operations among disaggregated memory and storage.
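The memory-semantic model behind Gen-Z can be sketched in ordinary code. The contrast below is a minimal illustration, not Gen-Z itself: a file-backed `mmap` stands in for a fabric-attached memory region, showing how load/store access replaces the syscall-per-access pattern of conventional I/O.

```python
# Hedged sketch: conventional syscall-mediated I/O vs. memory-semantic
# (load/store) access. A file-backed mmap is used purely as a stand-in
# for disaggregated, fabric-attached memory.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

# Conventional I/O: every access traverses the kernel I/O stack.
with open(path, "rb") as f:
    f.seek(128)
    block = f.read(8)  # one read() syscall per access

# Memory-semantic access: map once, then use plain loads and stores.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[128:136] = b"GENZDATA"  # store: an ordinary memory write
        data = mem[128:136]         # load: an ordinary memory read

os.unlink(path)
print(data)  # b'GENZDATA'
```

The point of the comparison is the access path: once a region is mapped, reads and writes are ordinary memory operations with no per-access software overhead, which is the property Gen-Z aims to extend across disaggregated memory and storage.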
Rethink the Datacenter
As mainframe computing evolves, it’s quite likely that your next data center will use technology from all three open consortia.
- The CCIX initial members include Amphenol Corp., Arteris Inc., Avery Design Systems, Atos, Cadence Design Systems, Inc., Cavium, Inc., Integrated Device Technology, Inc., Keysight Technologies, Inc., Micron Technology, Inc., NetSpeed Systems, Red Hat Inc., Synopsys, Inc., Teledyne LeCroy, Texas Instruments and TSMC.
- The OpenCAPI group includes AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx. Their new specification promises to enable up to 10X faster server performance with the first products expected in the second half of 2017.
- The Gen-Z consortium consists of Advanced Micro Devices, Broadcom, Huawei Technologies, Red Hat, Micron, Xilinx, Samsung, IBM, and Cray. Other founding members are Cavium, IDT, Mellanox Technologies, Microsemi, Seagate, SK Hynix and Western Digital. They plan to develop a scalable computing interconnect that will enable systems to keep pace with the rapidly rising tide of data that is being generated and needs to be analyzed. This will require the rapid movement of high volumes of data between memory and storage.
The problem each tries to solve is how to get the growing number and variety of new hardware devices to communicate quickly and work together. In effect, each group aims to boost the speed and interoperability of the servers, devices and components engaged in creating and managing data and tasked with analyzing it in large volumes.
To a large extent, this results from the faltering of Moore’s Law, which can no longer reliably double the number of transistors on a chip. Mainframe computing’s future data center will need different tweaks and approaches to deliver improved price and performance.
In August 2016 IBM announced a chip breakthrough, unveiling the industry’s first 7 nm chip that could hold more than 20 billion tiny switches or transistors for improved computing power. The new chips could help meet demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies, according to IBM.
Most microprocessors in today’s mainframe computing environment are built with feature sizes between 14 and 22 nanometers (nm). The 7 nm technology allows a substantial gain in power efficiency. IBM intends to apply the new chips to analyze DNA, viruses, cells and exosomes. IBM expects to test this lab-on-a-chip technology starting with prostate cancer.
The point of this digression into chips and Moore’s Law is to suggest the need for the tools and interfaces the three consortia offer. As demand for ultra-fast data analytics grows and the number of devices expands, delays increase. Ask yourself: how long do you want to wait for your doctor to analyze prostate or lymph cells? When a life that is dear to you rides on that analysis, every microsecond matters.
Download Syncsort’s 2018 State of the Mainframe survey report to see the 5 key trends to watch for in 2018.