Exploring the Role of Data Integration in Business Intelligence

One of the most valuable assets companies have today is the wealth of data they generate and store in the course of doing business. Whether it’s customer records, marketing survey information, or reports derived from the organization’s own internal operations, when properly correlated and analyzed, such data can yield a treasure trove of business intelligence insights that have a direct bottom-line impact.

But in a time when many businesses operate on a global scale across a wide variety of IT systems, “properly correlating” information from so many sources can be a challenge. The first requirement is to provide a unified view of all of an organization’s data, whatever its source. This process of transforming disparate datasets into a standardized format and collecting that information into a single logical pool (a data warehouse, for example) is what data integration is all about.
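
To make the transformation step concrete, here is a minimal sketch in Python, assuming two hypothetical source feeds (a CRM export and a survey file, with purely illustrative field names) that record the same customers in different formats:

```python
from datetime import datetime

# Two disparate source datasets: the same customers, different schemas.
crm_records = [
    {"CustomerID": "1001", "FullName": "Ada Lopez", "SignupDate": "2023-04-01"},
]
survey_records = [
    {"cust_no": 1001, "name": "LOPEZ, ADA", "joined": "04/01/2023"},
]

def from_crm(rec: dict) -> dict:
    """Map a CRM record onto the unified schema."""
    return {
        "customer_id": int(rec["CustomerID"]),
        "name": rec["FullName"].title(),
        "signup_date": datetime.strptime(rec["SignupDate"], "%Y-%m-%d").date(),
    }

def from_survey(rec: dict) -> dict:
    """Map a survey record onto the same unified schema."""
    last, first = (p.strip() for p in rec["name"].split(","))
    return {
        "customer_id": rec["cust_no"],
        "name": f"{first} {last}".title(),
        "signup_date": datetime.strptime(rec["joined"], "%m/%d/%Y").date(),
    }

# One standardized pool, ready to land in a warehouse or data lake.
unified = [from_crm(r) for r in crm_records] + [from_survey(r) for r in survey_records]
print(unified)
```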

The second requirement is to provide a means of storing large amounts of data in a manner that allows specific portions to be quickly assembled for analytical processing in response to queries generated by users or applications. The industry-standard framework for accomplishing this is Hadoop. Yet when it comes to providing a comprehensive, unified view of all of an organization’s data resources, Hadoop has a major limitation: it doesn’t offer native support for the most valuable data reservoir many enterprises possess, the mainframe.
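
On the query side, a short PySpark sketch gives the flavor of pulling a specific slice out of a large pool for analysis; the HDFS path, table, and column names here are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bi-query").getOrCreate()

# Load one slice of the integrated pool from HDFS and expose it to SQL.
orders = spark.read.parquet("hdfs:///warehouse/orders")  # hypothetical path
orders.createOrReplaceTempView("orders")

# A typical analyst query: revenue per region for one quarter.
spark.sql("""
    SELECT region, SUM(amount) AS revenue
    FROM orders
    WHERE order_date BETWEEN '2024-01-01' AND '2024-03-31'
    GROUP BY region
    ORDER BY revenue DESC
""").show()
```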

Integrating Mainframe Data Into Hadoop Is a Business Intelligence Necessity

Among Fortune 500 companies, 71% depend on mainframes for their most mission-critical processing. Mainframes run 68% of global IT production workloads and handle 30 billion business transactions a day. Clearly, any business intelligence system that fails to include available mainframe data will fall far short of its full potential. So, integrating the mainframe into the Hadoop environment is vital. But doing so presents some major challenges.

For example, some of the most important mainframe information is carried in COBOL copybooks. These are metadata definitions that describe the physical layout of the primary data but are stored separately from that data. Hadoop has no native means of understanding such complex relationships, and reformatting the data into a form Hadoop can handle may result in the loss of crucial metadata. Moreover, governance or compliance mandates may require that data stored in Hadoop retain its original mainframe format.
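
To see why the copybook matters, consider a minimal, purely illustrative example: the copybook (shown in the comment below) is the only thing that says which bytes of a record mean what, and encodings such as packed decimal have no direct equivalent in Hadoop-native formats. The layout and decoding logic here are hypothetical:

```python
# An illustrative copybook describing a 31-byte fixed-width record:
#
#   01  CUSTOMER-REC.
#       05  CUST-ID       PIC 9(6).            <- bytes 0-5,   zoned numeric
#       05  CUST-NAME     PIC X(20).           <- bytes 6-25,  EBCDIC text
#       05  CUST-BALANCE  PIC S9(7)V99 COMP-3. <- bytes 26-30, packed decimal
#
# Without this metadata, the record is just an opaque run of bytes.

def unpack_comp3(raw: bytes, scale: int) -> float:
    """Decode an IBM packed-decimal (COMP-3) field."""
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in raw[:-1])
    digits += str(raw[-1] >> 4)                  # last byte: digit + sign nibble
    sign = -1 if (raw[-1] & 0x0F) == 0x0D else 1
    return sign * int(digits) / (10 ** scale)

def parse_customer_rec(record: bytes) -> dict:
    """Slice one fixed-width record exactly as the copybook dictates."""
    return {
        "cust_id": int(record[0:6].decode("cp037")),        # EBCDIC digits
        "cust_name": record[6:26].decode("cp037").strip(),  # EBCDIC text
        "cust_balance": unpack_comp3(record[26:31], scale=2),
    }

# A sample record as an EBCDIC system would produce it.
sample = "000042John Smith          ".encode("cp037") + bytes([0x00, 0x01, 0x23, 0x45, 0x7C])
print(parse_customer_rec(sample))
# {'cust_id': 42, 'cust_name': 'John Smith', 'cust_balance': 1234.57}
```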

Connect for Big Data Bridges the Mainframe-Hadoop Gap

Connect for Big Data is specifically designed to integrate mainframe data, including COBOL copybooks, into Hadoop quickly, easily, and without requiring any format changes. Once mainframe data is ingested, Connect for Big Data ensures it can be distributed across the Hadoop Distributed File System (HDFS) and processed using MapReduce or Spark™, with no need for special transformations or manipulations. This allows Hadoop to handle mainframe data just as it would data from any other source.
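
Connect for Big Data itself is driven through its graphical interface (described below), but the payoff is that downstream jobs need nothing special. As a rough illustration, once the ingested records are available in HDFS, a Spark job can combine them with any other source; the paths and column names here are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mainframe-bi").getOrCreate()

# Mainframe-sourced records after ingestion, next to an ordinary web feed.
customers = spark.read.parquet("hdfs:///ingested/mainframe/customers")
weblogs = spark.read.json("hdfs:///landing/weblogs")

# Join mainframe and non-mainframe data exactly as if both were native.
activity = customers.join(weblogs, on="customer_id", how="inner")
activity.groupBy("segment").count().show()
```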

Connect for Big Data features an intuitive graphical user interface that requires no specialized mainframe knowledge. In fact, with only a basic understanding of data integration principles, and with no expertise in COBOL, MapReduce or Spark, users can easily design tasks to incorporate mainframe data into their business intelligence applications.

By making the integration of mainframe data with Hadoop painless, Connect for Big Data enables organizations to extract comprehensive business intelligence insights far beyond what they could previously achieve.

If you’d like to know more about how Connect for Big Data can add value to your company’s business intelligence operation, please take a look at our product page. Also, make sure to check out our eBook “How to Build a Modern Data Architecture with Legacy Data.”
