In 1908 the Model T automobile made its debut. With a price tag of $825, it brought car ownership within economic reach of the American masses, in stark contrast to earlier automobiles, which had been exclusive playthings of the rich. A transformation had occurred: luxury toys were fast becoming a mainstay of society, and a year later the trend accelerated when the retail price dropped to $575.
So what had changed? How did Henry Ford achieve such a revolution?
It was indeed a revolution. With the Model T, Ford not only simplified vehicle design; he also refined the burgeoning assembly line. Costly skilled craftsmen no longer buzzed around a workshop, slowly constructing a car in place. Instead, the vehicle moved through a sequence of short stages where, step by step, it took shape. Workers repeated carefully honed procedures, and this led to lower costs and gains in productivity.
This new method of manufacturing would change how goods of all kinds were produced. The future would increasingly be driven by sequences of steps striving toward an end result in the most efficient way possible – “systems and processes” had arrived.
Ford’s rethinking of the complex – breaking it down into simple, repeatable components and steps – would become a blueprint for many things: planes, washing machines, phones… Today, we take complexity in our stride and have high expectations that it will just work. Our world has been simplified to an extent where the inner complexity is hidden and of little concern.
Cars are now commonplace and an example of something vastly more intricate than any predecessor. They have all rolled off a Ford-style assembly line, have wheels, an engine and so on, but this is where the similarities end. Contemporary vehicles are themselves assemblies of complex components.
Computing systems have followed a similar path. They have evolved from the early days of simple top-down data processing into complex, multi-platform, always-on, people-driven services whose unavailability now has an instant impact.
So, how do you ensure availability? How do you know your complex car or your complex computing system is running smoothly? How do you tell when a small, yet critical component is failing, and disaster is just around the corner?
It’s a cold, frosty morning. I layer up and head out into the crisp air to scrape ice from the windshield. Car now frost-free, I jump behind the wheel, insert the key into the ignition and turn. An array of colored indicators appears before me; each has a purpose; each instantly tells me the go/no-go status of the essential components running this complex machine. All green…except for the amber warning telling me a tail light has blown. That’s OK, I know I can still drive away. I’ll take a slight detour to pick up a new bulb en route.
Car dashboards make analyzing data on critical functions easy. What about intelligence on critical IT services?
It seems checking the complex can be relatively simple.
Can this “dashboard” analogy be applied to checking the “health” of an essential complex computing system? The ability to hide the complexity yet instantly know if you have a go/no-go situation from an array of colored indicators would be extremely beneficial.
Splunk® IT Service Intelligence (ITSI) does just this… with no ice scraping involved! It enables the constant monitoring of multiple, complex computing systems and processes with simple green/amber/red indicators in a dashboard.
ITSI gives an organization the ability to decompose a complex system or process into individual “simple” components, each of which has its own thresholds for normal status and indicators for acceptable performance. In turn, these individual components can form a hierarchy that builds into the bigger, complex structure, and their combined scores bubble up to give an overall dashboard indicator. As with the car, checking the complex becomes relatively simple.
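To make the idea concrete, here is a minimal sketch of that “bubble up” pattern in Python. This is an illustrative model only – not ITSI’s actual scoring algorithm – and the component names, scores and thresholds are invented for the example. It assumes each component reports a 0–100 health score, thresholds map scores to a green/amber/red status, and a parent takes the worst score among its children.

```python
# Illustrative model of hierarchical health scoring (NOT Splunk ITSI's
# actual algorithm): leaf components report their own 0-100 score, and
# a composite component "bubbles up" the worst score of its children.

def status(score):
    """Map a 0-100 health score to a dashboard color (example thresholds)."""
    if score >= 80:
        return "green"
    if score >= 50:
        return "amber"
    return "red"

class Component:
    def __init__(self, name, score=100, children=None):
        self.name = name
        self._score = score
        self.children = children or []

    def score(self):
        # A leaf reports its own score; a composite reports the
        # minimum (worst) score found among its children.
        if not self.children:
            return self._score
        return min(child.score() for child in self.children)

# A hypothetical "Online Banking" service built from simple components.
banking = Component("Online Banking", children=[
    Component("Firewalls", score=95),
    Component("Load balancers", score=90),
    Component("Web servers", score=60),       # degraded component
    Component("Back-end software", score=100),
])

print(status(banking.score()))  # amber – the web servers drag down the top line
```

The degraded web-server tier pulls the service’s top-line indicator to amber, which is exactly the cue to drill down into the hierarchy and find the component responsible.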
A key feature of ITSI is the ability to drill down through the component hierarchy: when a top-line indicator shows something is not working correctly, you can quickly locate the source of the trouble.
For example, if a modern, multi-layered, multi-platform system such as Online Banking were mapped into ITSI, it would cover many moving parts:
- Firewalls and other network hardware
- Load balancers
- Web servers
- Middle-tier and back-end software components
As long as ITSI is aware of all the elements within the online banking system, you can confidently monitor its health. You will know as soon as something becomes unavailable or steps out of line, and you will be able to quickly discover exactly where to focus efforts to regain stability.
But what happens if a key piece is not monitored? Can you know with confidence that all this complexity is running efficiently and that good service is being provided?
Going back to my icy car as an example, if the dashboard did not show the engine was overheating as I hurtled down the freeway (rocking the Bee Gees to the max), I would most likely be totally unaware of a problem until it was too late. I may get a sense that something was wrong, but I’d certainly prefer to know what was occurring before it became a real issue and left me stranded at the side of the road. The “system” could fail, and I’d have no real clue about what to look at first – I’m a software guy, and this is hardware!
Until recently, this situation applied to ITSI’s clever monitoring. One major component could not be monitored: the mainframe. Any system relying on services hosted on z/OS® could not be monitored end-to-end; you could not get the complete picture. The Operations Team had to hope nothing ever went wrong in that void – and, if it did, that the appropriate response details were on hand, because time had just run out.
Fortunately, this monitoring gap has been filled with the launch of the Syncsort Ironstream® Module for Splunk IT Service Intelligence. This drop-in module allows mainframe systems to benefit from ITSI’s “early warning” surveillance, in real time, with key metric information available from multiple levels:
- CPC (Central Processor Complex – the ‘server’)
- LPARs (Logical Partition – the ‘virtual machines’ running on the CPC)
- Software (Components running on the LPARs) *
Indicators for critical transactions, performance and bottlenecks allow the mainframe to join all the other mechanisms and components in the infrastructure on the same single pane of glass. The frontline team finally has full visibility across the whole enterprise. They can now hurtle down the virtual freeway (rocking the Bee Gees to the max) confident that a dashboard light will tell them when something starts to go wrong and, hopefully, before it becomes a major issue.
The complex is now simple(r) than it was before.
With mainframe data provided by Ironstream, the service analyzer in Splunk IT Service Intelligence (ITSI) displays an easy-to-read view of complex data – just like a car dashboard, only in this case about the health and performance of critical services.
Next week, at Guide Share UK Conference 2016 (November 1-2), Syncsort will be demonstrating Ironstream integration with Splunk IT Service Intelligence, along with its other key Big Iron to Big Data solutions, including its new mainframe performance optimization software for IBM DB2® and CA IDMS™ from the recent acquisition of Cogito. Demos will take place in booth 13. In addition, at Splunk’s recent annual user conference, Syncsort CEO Josh Rogers talked about the latest addition to Ironstream that brings mainframe data to Splunk’s IT Service Intelligence (ITSI) application, and about the Cogito acquisition and the growing need for products that solve Big Iron to Big Data challenges.
*v1.0 supports CICS® and DB2®