
Top 5 Blog Picks about Mainframe DevOps Best Practices for 2017

Here are some editor’s picks from industry experts on Mainframe DevOps best practices to use while you strategize for 2017:

#1:  Making Mainframes DevOps Friendly

Can you do DevOps on your mainframe? That might seem like a silly question. DevOps is among the newest trends in IT, and mainframes are an established, “legacy” technology. Pairing the two may appear to make little sense. Chris Tozzi explains why DevOps on the mainframe is advantageous, and how organizations that have migrated to a DevOps-based workflow can make mainframes part of it. Read on >

#2:  DevOps Test Data: Why Synthetic is Wrong and Policy-Based Masking is Right

To compete in digital markets, businesses must get new code into production quickly, frequently and with high confidence. As a result, DevOps has become an important part of IT — and test/QA an equally important part of DevOps. Compuware’s John Crossno discusses three key decision factors teams should weigh as they re-evaluate their test data strategies before pushing too hard on the gas pedal. Read on >

#3:  Continuous Delivery for Your Mainframe Data

When you hear the word “DevOps,” you probably first think of flashy new infrastructure technologies like Docker containers. But that doesn’t mean DevOps practices are incompatible with older infrastructures. As this article explains, there’s no reason you can’t do continuous delivery – one of the hallmarks of DevOps – on mainframe systems. Read on >


#4:  Microservices and API-First Design Stalking Mainframe Practices

While the rationale varies across software engineering specialties, the concept of brick-by-brick construction underlies most component-based approaches to building and maintaining software. Much less obvious – and the subject of debates spanning decades – is just what those components ought to be, who should write them, and how. Mark Underwood explores two concepts that are transforming software in the current era: API-first design and microservices. Read on >

#5:  The Self-Managing Mainframe and How Tipping Points Surprise Us

The concept of a “Tipping Point” was introduced by Malcolm Gladwell in his book of the same name. Gladwell defines a tipping point as “the moment of critical mass, the threshold, the boiling point.” The idea is that little things build up to a point of irreversible change, after which things happen more quickly and visibly. Similarly, the recent announcement from Compuware about their partnership with Syncsort is a small step that is part of a bigger trend. It takes us nearer to the tipping point, and the result may surprise us all. Let me explain. Read on >

Want to learn about another important part of mainframe strategic planning? Read the recent Enterprise Tech Journal cover story, Big Data from Big Iron: How Your Mainframe Data Complete the Puzzle, which highlights key use cases for mainframe data in Big Data analytics.


