Meeting the Challenge of Mainframe Data Access with Hadoop
This blog was originally published by Keylink Technology on their Big Data & Analytics Blog.
Our good friend Arnie Farrelly, VP of Global Support and Services at Syncsort, was recently interviewed over at the Syncsort Blog. Arnie has always been a great supporter of Keylink and our customers in Australia and New Zealand. He’s also in constant contact with customers and partners globally, so he has his finger on the pulse of Hadoop adoption use cases around the world.
Arnie gave a great answer when asked whether Syncsort DMX-h was being used to access mainframe data and bring it into Hadoop, and whether companies often struggle with this process:
Yes, we’re seeing a lot of customers wanting to do that, and we’re also seeing them struggle with it. There aren’t a lot of utilities out there that are easy to use. There are some good tools in the Hadoop stack itself, but they require Hadoop skills to use, and those can be hard to find. Companies don’t necessarily want to spend the money and time to train a lot of their people in Hadoop development. They look to us to provide an easy-to-use utility that can quickly and cost-effectively move data from the mainframe, and a variety of other data sources, into Hadoop.

We handle complex mainframe COBOL, VSAM and DB2 data head and shoulders better than anyone else. And what we’re hearing from customers is that compared to other products like Informatica, the learning curve on our product is far shorter. In a POC we were working on recently, a customer said, “It took me months to learn the Informatica interface.” He picked up our product and within a day he was using it effectively. The simplicity of our interface is unique in the market today. It lets you easily move data off the mainframe and into Hadoop without a lot of hard-to-find skills.
Want to learn how easy it is to access mainframe data with Hadoop? Take a look at the following video for a quick overview and demonstration of working with native mainframe data in Hadoop, just like any other data source.