
Mainframe: Still Not Crazy After All These Years

Warren Buffett’s Berkshire Hathaway started buying up IBM stock in 2011 and bought still more of IBM later. Despite its disappointing short-term valuation, Berkshire Hathaway is standing by its IBM investment, which is one of Berkshire’s top four plays.

You can ignore the advice of Warren Buffett if you dare, but that hasn’t always worked out well for those who have. Buffett has reportedly based his bullish outlook on IBM on its steady, “cloud-resistant” mainframe business. Yet for many firms that came of age during the internet era, mainframes rarely enter the architecture conversation.

That is an omission some may wish to rectify. A few statistics make the case:

  • The 2015 z13 system maxes out at 10TB of client memory.
  • The z13 can withstand an 8.0 earthquake.
  • z Systems enjoy the highest standardized security certification (FIPS 140-2, highest level 4 of 4).
  • 23 of the world’s top 25 retailers use a mainframe.
  • 92 of the top 100 banks are mainframe users.
  • All 10 of the top 10 insurers have commitments in mainframe technologies.
  • Around 80% of all corporate data is managed by mainframes.
  • The z13 can process 2.5 billion transactions daily (that’s 100 Cyber Mondays, as IBM’s Mark Anzani, VP of z Systems Strategy, Resilience and Ecosystems, observed).

Some of these statistics were provided by Steve Wexler at IT Trends & Analysis. He has still more statistics that might interest doubters.

Survey Says
In fact, and notwithstanding perceptions to the contrary, the mainframe’s center-stage position in large corporations around the world has not budged. That’s the conclusion of an industry survey sponsored by Syncsort Inc. and conducted in 2015 by Enterprise Systems Media, a publisher of magazines for IT managers and technical professionals. Seven out of ten respondents (IT planners, architects, and managers at global enterprises with $1 billion or more in annual revenues) ranked the use of the mainframe for large-scale transaction processing as very important.

Also in 2015, BMC released its tenth annual mainframe research report. The “comfort” experienced with a mainframe solution may not be mere sentiment: the report identified security as a central driver of mainframe growth, a topic explored further in Syncsort’s eBook, What is the Mainframe Really Doing?
“This year’s report confirmed the mainframe’s continued growth is being powered by today’s digital ‘Always On’ world and its demand for secure access to applications and data, at mobile speeds and scale. Security was the largest factor for continued investment in the mainframe with 56 percent responding that they see the security strengths of the platform as an advantage. This was followed closely by availability, as 55 percent of respondents leverage the mainframe’s availability benefits.”

In the latest quarter available (2015 Q3), IBM’s mainframe business grew 15%. With the sale of its x86 server business to Lenovo in 2014, IBM’s mainframe business (along with POWER) has come into sharper focus.

IBM z Series: Turning more heads toward the mainframe (Credit: IBM)

Costs More? Think TCO.
When asked whether they would contemplate using a mainframe for a green-field project, many architects could be expected to reply, “Too costly!”

True? It depends.

Analyst Eric Cothenet at Novipro works through a “non-fictional” example for an insurance implementation of Oracle Siebel CRM paired with Oracle’s DBMS. The infrastructure selected used a 204-processor Intel setup, which required the customer to purchase 204 Oracle licenses. Cothenet argues that the architects of this system could have chosen four IBM processors and saved enough on Oracle licenses to pay back the mainframe hardware premium in as little as three years.
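The shape of that calculation can be sketched in a few lines. All dollar figures below are illustrative assumptions (the article does not publish prices); only the processor counts come from Cothenet’s example:

```python
# Back-of-the-envelope payback sketch for the Siebel/Oracle scenario above.
# Prices and the support rate are assumptions, not figures from the article.

ORACLE_LICENSE_PER_PROC = 47_500   # assumed per-processor license price (USD)
X86_PROCS = 204                    # processors in the Intel configuration
MAINFRAME_PROCS = 4                # processors in the IBM alternative
MAINFRAME_PREMIUM = 12_000_000     # assumed extra hardware cost of the mainframe (USD)

def payback_years(premium: float, x86_procs: int, mf_procs: int,
                  license_cost: float, support_rate: float = 0.22) -> float:
    """Years until license savings repay the mainframe hardware premium."""
    # Up-front savings from buying far fewer processor licenses.
    upfront_savings = (x86_procs - mf_procs) * license_cost
    # Ongoing annual support is typically a percentage of license list price.
    annual_savings = upfront_savings * support_rate
    remaining = premium - upfront_savings
    if remaining <= 0:
        return 0.0  # license savings alone cover the premium immediately
    return remaining / annual_savings

years = payback_years(MAINFRAME_PREMIUM, X86_PROCS, MAINFRAME_PROCS,
                      ORACLE_LICENSE_PER_PROC)
print(f"Estimated payback: {years:.1f} years")
```

With different (but plausible) inputs the payback period shifts, which is precisely Cothenet’s point: the answer to “too costly?” depends on the license arithmetic, not the hardware sticker price alone.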

There may also be tangible differences in uptime, though metrics in this area are not readily available. Turn back the clock a few decades, to when it was customary for hardware builders to publish MTBF (mean time between failures) statistics along with other performance specifications. Nowadays you’ll see MTBF quoted for disk drives and other components, but not for entire systems, and especially not for distributed platforms. Cothenet suggests that a closer look at MTBF would often show mainframes exhibiting higher uptimes than systems built on commodity hardware.

Organizations would do well to estimate the cost of downtime as part of the Total Cost of Ownership (TCO). When those costs are added, the scales could tilt in the mainframe direction.
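A minimal sketch of that TCO adjustment, assuming hypothetical MTBF/MTTR figures and an assumed outage cost (none of these numbers are vendor-published):

```python
# Sketch: fold downtime cost into TCO using steady-state availability.
# All MTBF, MTTR, and dollar figures are illustrative assumptions.

HOURS_PER_YEAR = 8_760

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_cost(avail: float, cost_per_hour: float) -> float:
    """Expected yearly cost of unavailability at a given outage cost."""
    return (1 - avail) * HOURS_PER_YEAR * cost_per_hour

# Hypothetical platforms: a commodity cluster vs. a mainframe.
commodity = availability(mtbf_hours=2_000, mttr_hours=4)     # ~99.8%
mainframe = availability(mtbf_hours=100_000, mttr_hours=1)   # ~99.999%

COST_PER_HOUR = 100_000  # assumed revenue impact of one outage hour (USD)

print(f"Commodity downtime cost: ${annual_downtime_cost(commodity, COST_PER_HOUR):,.0f}/yr")
print(f"Mainframe downtime cost: ${annual_downtime_cost(mainframe, COST_PER_HOUR):,.0f}/yr")
```

Even a fraction of a percentage point of availability translates into many hours, and potentially millions of dollars, per year, which is why downtime belongs in the TCO comparison.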

Spark Speed: Analytics Integration for IoT
Mainframe shops overall may not be hotbeds of innovation, but there are outposts where two events in 2015 converged: First, the January announcement of the latest z13 Systems iteration, and, second, IBM’s announced Apache Spark initiatives.

  1. Cryptologic and Analytics Enhancements: The first of these was notable for the addition of new encryption and analytics capabilities in z13. New integrated cryptographic features make it easier to add mainframes to public cloud scenarios. For example, IBM has announced the Crypto Express5S, a tamper-sensing, tamper-responding programmable crypto feature that provides a tamper-resistant hardware security module. Encryption can be implemented at the transaction level, in near-real time, using a secure IBM CCA coprocessor. Analytics computations are facilitated by Single-Instruction Multiple Data (SIMD) instructions that can “decrease the amount of code and accelerate code that handles integer, string, character, and floating-point data types. The SIMD instructions improve performance of complex mathematical models and allow integration of business transactions and analytic workloads on z Systems.”
  2. IBM Turns up the Heat on Spark: IBM’s announced support for Spark meant it would be prodding 3,500 of its developers and researchers to find ways to integrate, extend, or contribute to the Spark ecosystem.

Tools like Syncsort’s Spark connector could be the yeast needed to ferment high-performance architectures designed to support the Internet of Things (IoT). Along with the obvious data sources — financial logs are typically cited as the low-hanging fruit — future transactions may be triggered by sensor events at the endpoint of public or private networks. The volume, variety, and especially velocity of IoT data may stretch “elastic cloud” solutions in ways that do not prove sustainable for some applications.
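To give a feel for the velocity problem, here is a toy sliding-window aggregator in pure Python. It is only a conceptual stand-in for the windowed aggregations a Spark streaming job would perform; it is not Syncsort’s connector or any Spark API:

```python
from collections import deque

class SlidingWindowCounter:
    """Count sensor events within a trailing time window (a toy stand-in
    for the windowed aggregations a Spark streaming job would perform)."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events = deque()  # event timestamps, oldest first

    def record(self, timestamp: float) -> None:
        """Ingest one sensor event and drop anything outside the window."""
        self.events.append(timestamp)
        self._evict(timestamp)

    def rate(self, now: float) -> float:
        """Events per second over the trailing window, as of `now`."""
        self._evict(now)
        return len(self.events) / self.window

    def _evict(self, now: float) -> None:
        cutoff = now - self.window
        while self.events and self.events[0] < cutoff:
            self.events.popleft()

# Simulated burst: 100 sensor readings spread over 10 "seconds"
counter = SlidingWindowCounter(window_seconds=10.0)
for t in range(100):
    counter.record(t * 0.1)
print(f"{counter.rate(now=10.0):.1f} events/sec")
```

At IoT scale the same idea runs distributed and fault-tolerant; the question the article raises is where that computation lands most economically: elastic cloud, mainframe, or a hybrid of the two.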

The approach Netflix is taking with Amazon Web Services might not only entail occasional catastrophic outages, but could also prove to have a high TCO for enterprises that lack a Netflix-class IT budget, or its tolerance for risk.

The history of “timesharing” from the ‘70s might be a lesson here. It was steadily increasing timesharing and network costs that spawned local computing: first with minicomputers, and then with the personal computer. Today, whenever cloud services exceed a threshold that a business must set for itself, further investments in mainframe technology may become comparatively more attractive.

Hybrid solutions that incorporate high-performance, on-premises mainframes and public cloud could well hold the key to how IoT will transform the very definition of what constitutes a transaction.

Postscript about Cattle-Raising
Ten or twenty years from now, Apple and Google may not be riding as high as they are today, while IBM’s steady investment in R&D could keep it in the game. In 2014, IBM was awarded more U.S. patents than any other company for the 22nd consecutive year; its 7,534 patents that year were the most ever awarded to a single company in a single year.

With 30% of its profits coming from mainframe products and services, IBM isn’t likely to risk having that revenue stream turn toward buggy-whip irrelevance.

No doubt, IBM has some serious calving to do to match the longevity of its mainframe cash cow. But no one is questioning its underlying fertility in the world of ideas.
