Expert Interview with Alex Rosenthal from Guardian about Mainframe Offload to Hadoop

Alex Rosenthal, Assistant Vice President, Enterprise Data Office at Guardian Life Insurance

Can you tell us what drove Guardian’s recent changes in Enterprise Data Strategy?

Guardian has embarked on a journey of thoughtful and targeted technology modernization. This includes themes such as Big Data, Consumerization, Omni-Channel, and Cloud Technologies. The Big Data program is focused on building an enterprise data management strategy and creating a strong data culture. Data, both internal and external, must be managed in the same manner as other organizational assets. In order to accomplish this, Guardian must have a central Enterprise Data Office whose work is not centered on individual projects, but on the building of firm-wide data capabilities that can be leveraged by all projects and stakeholders.

The Enterprise Data Office manages activities that can be grouped within the following themes:

  • Data Architecture and Best Practices
  • Data as a Service
  • Master and Reference Data Management
  • Advanced Statistical Modeling and Predictive Analytics
  • Self-Service Data Analysis and Visualization
  • Rapid Development and Enhancement of Data Assets and Marts

We are instituting a data supply chain modeled on physical supply chain operations. The system will include data manufacturing, wholesale data distribution, and retail data distribution. We are also exploring the management of data movement from the outside in, where Guardian consumes external market data, and from the inside out, where Guardian is the source of curated data.

Can you tell us about the drivers behind your mainframe offload strategy?

Hadoop is a central component of our data lake acquisition program. Guardian has a large mainframe footprint, which includes technologies such as VSAM file systems and COBOL transformation logic. Data projects have historically been complicated and lengthy because they require coordination across multiple development teams and technologies.

In order to master enterprise data, it is important that we bring data in its raw form into a central repository where the cost of storage is not a limiting factor. We have been centralizing the transformations of our data so that we can consistently baseline and supply enriched data to downstream processes. These guiding principles allow us to effectively manage the documentation, structure, collection, transformation, and distribution of our data assets. As we redirect these workloads off the mainframe, we will see a corresponding reduction in mainframe MIPS.
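
As a purely illustrative sketch of this raw-then-curated pattern (assuming a Spark engine is available on the Hadoop cluster; the paths, column names, and job name below are hypothetical, not Guardian's actual pipeline), data is landed untouched in a raw zone and a single, centralized transformation produces the baselined view that downstream consumers share:

    # Hypothetical raw-zone to curated-zone job; illustration only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("raw-to-curated").getOrCreate()

    # Raw zone: files are landed as-is, schema is applied on read.
    raw_claims = spark.read.option("header", True).csv("/datalake/raw/claims/")

    # Curated zone: one shared transformation baselines the data, so every
    # downstream report, dashboard, and model consumes the same enriched view.
    curated = (
        raw_claims
        .withColumn("claim_amount", F.col("claim_amount").cast("decimal(12,2)"))
        .withColumn("ingest_date", F.current_date())
    )

    curated.write.mode("overwrite").parquet("/datalake/curated/claims/")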

What role is the Cognizant/Syncsort Mainframe offload playing in your strategy?

For Guardian to be equipped to implement an enterprise big data platform, it was important for us to select the right technologies and partners. Because we have a diverse set of data sources and a large mainframe footprint, we needed a tool that would simplify the conversion of VSAM data structures into usable ASCII delivered to our Hadoop environment. It was also important for us to be able to transform and enrich our data on Hadoop, centralizing business logic and reducing its sprawl. After careful evaluation, we found that Syncsort DMX-h had the capabilities we needed. We also found that it was designed to provide native integration with Hadoop, resulting in increased efficiencies.
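
As a rough illustration of what that conversion involves (this is not how DMX-h is implemented or configured; the record layout, field names, and code page below are assumptions for the example), a fixed-width EBCDIC record from a VSAM extract can be decoded into ASCII fields using a copybook-style layout:

    # Illustrative only: decode one fixed-width EBCDIC record into ASCII fields.
    import codecs

    # Hypothetical copybook-style layout: (field name, offset, length in bytes).
    LAYOUT = [
        ("policy_id", 0, 10),
        ("member_name", 10, 20),
        ("premium_cents", 30, 9),
    ]

    def decode_record(raw: bytes) -> dict:
        """Convert one EBCDIC (code page 037) record into a dict of ASCII strings."""
        record = {}
        for name, offset, length in LAYOUT:
            field = raw[offset:offset + length]
            # Real VSAM data often also holds packed-decimal (COMP-3) fields,
            # which must be unpacked rather than simply re-encoded.
            record[name] = codecs.decode(field, "cp037").strip()
        return record

    # Example: an EBCDIC-encoded sample record round-tripped to ASCII.
    sample = ("POL0000001" + "JANE DOE".ljust(20) + "000012345").encode("cp037")
    print(decode_record(sample))

At scale, the value of a tool that handles this natively on Hadoop is avoiding hand-written conversion code of this kind for every source file.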

With regard to technology partners, we wanted to make sure we found an organization that had a large pool of strong Hadoop and Syncsort DMX-h resources that could scale up and down as needed to support our data acquisition program. Cognizant was strategically aligned with Syncsort through their Big Frame Program, and was able to meet our needs.

Where are you in deployment and what are your plans going forward?

We have defined a comprehensive big data strategy that includes program and demand management processes. We have also centralized our enterprise data architects to focus on data capabilities and data management patterns. Along with this, we have selected and implemented a major distribution of Hadoop, a NoSQL option, and an ETL solution (DMX-h) that was built to work well in our environment. We have also implemented our custom-built Enterprise Data Marketplace to serve as the central hub for all certified and sharable company data assets, including reports, dashboards, data services, and extracts.

From a data acquisition standpoint, we have embarked on a handful of projects to allow us to master the data for members, products, policies, premiums, and claims. This will allow us to support many downstream use cases, including data distribution, reporting, visualization, and predictive analytics.

Authored by Christy Wilson

Syncsort contributor Christy Wilson began writing for the technology sector in 2011, and has published hundreds of articles related to cloud computing, big data analysis, and related tech topics. Her passion is seeing the fruits of big data analysis realized in practical solutions that benefit businesses, consumers, and society as a whole.