Big Iron and Big Data: Helping Mainframe Applications Play Nicely with Hadoop
One of the great benefits of developing software at Syncsort is having a team with a very rich skill set. It allows us to work on cutting-edge Big Data technologies such as Hadoop while also bridging the gap for enterprise customers whose business relies on the Mainframe.
Here are some of the ways we’re helping Mainframe users take advantage of the Big Data revolution:
In the latest release of Syncsort’s Hadoop ETL solution, DMX-h, we’ve added the ability to produce a Mainframe file on the Hadoop Distributed File System (HDFS). This allows businesses to take advantage of the cost savings of storing files on HDFS while preserving their data in EBCDIC encoding and the original Mainframe record format, with an associated COBOL copybook. By keeping the data in Mainframe format, the enterprise can comply with data quality and lineage requirements, preserving a single version of the truth.
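To make the idea concrete, here is a minimal sketch of how an application might read such a file back. The copybook layout, field names, and widths below are hypothetical examples invented for illustration; the only real ingredient is Python’s built-in code page 037 codec, one common EBCDIC encoding.

```python
# Hypothetical copybook for illustration:
#   01 CUSTOMER-REC.
#      05 CUST-ID    PIC X(6).
#      05 CUST-NAME  PIC X(20).
# This describes a 26-byte fixed-length EBCDIC record.
LAYOUT = [("cust_id", 6), ("cust_name", 20)]
RECORD_LEN = sum(width for _, width in LAYOUT)

def parse_record(raw: bytes) -> dict:
    """Split one fixed-length record into fields and decode each
    from EBCDIC (code page 037) into a Python string."""
    fields, offset = {}, 0
    for name, width in LAYOUT:
        fields[name] = raw[offset:offset + width].decode("cp037").rstrip()
        offset += width
    return fields

# Round-trip demo: build an EBCDIC record, then parse it back.
raw = ("C12345".ljust(6) + "ACME CORP".ljust(20)).encode("cp037")
print(parse_record(raw))  # {'cust_id': 'C12345', 'cust_name': 'ACME CORP'}
```

Because the bytes stay in EBCDIC on HDFS, the same file remains byte-identical to what the Mainframe produced; decoding happens only when a consumer needs to read it.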
These initiatives also help enterprises leverage Hadoop for processing their data, while still generating data that can be transported back to the Mainframe. And most importantly, they allow Mainframe data to be integrated with data coming from varied sources, such as Hive, Teradata, Salesforce, etc.
Another exciting development has been our partnership with Splunk and our recently announced product, Ironstream. By collecting critical Mainframe log data and forwarding it to Splunk for visualization, Ironstream helps the enterprise achieve a 360-degree view of all IT systems, including Mainframes.
It gives us great satisfaction to deliver practical and cost-effective solutions that enable enterprise customers to take advantage of the technological advances brought by the Big Data revolution.