
6 Reasons Your Data May Become Unavailable

What stands in the way of high availability and your data? Keep reading for a list of common reasons why data may become unavailable.

Highly available data means data that is accessible virtually without interruption.

To be fully accessible, your data needs to be reachable, meaning the infrastructure and software hosting it run constantly. Highly accessible data is also high-quality data that you can work with readily.

Any problem that disrupts either of these two pillars of data availability and accessibility undercuts your ability to use your data.


Data Availability Obstacles

The types of problems that might prevent data availability and accessibility include the following:

A failed host server

This is an obvious obstacle to data availability. If the server hosting your data goes down, so does your data, unless you have designed your data infrastructure with redundant host servers or automated failover to a backup server in the event that the main server fails.
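The failover idea can be sketched in a few lines of Python. This is a minimal illustration, not a production failover system; the host names and the health-check callable are hypothetical placeholders:

```python
def pick_available_host(hosts, is_up):
    """Return the first host that passes a health check.

    hosts: ordered list, primary first, backups after.
    is_up: callable(host) -> bool, e.g. a TCP probe or HTTP ping.
    """
    for host in hosts:
        if is_up(host):
            return host
    raise RuntimeError("no available host")

# Simulate the primary being down (hypothetical host names):
status = {"db-primary": False, "db-replica-1": True}
print(pick_available_host(["db-primary", "db-replica-1"], status.get))
# -> db-replica-1
```

In practice the probe would be a real network check and the switch-over would also update connection strings or DNS, but the ordering logic is the same.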

Failed storage

Hard disks last only so long before they give up the ghost. Even tape archives, which generally last much longer than hard disks, have their limits. Storage media failure means your data ceases to be available, and it could be lost permanently if you did not back it up. The best way to handle this risk is to build redundant storage arrays that are spread across multiple host servers, so that a single storage media failure does not take all of your data offline.
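The redundancy principle above can be illustrated with a small sketch: write each data block to several storage nodes and treat the write as failed unless a minimum number of copies land. The node names and write function are made up for the example:

```python
def replicate(block, nodes, write, min_copies=2):
    """Write a block to several storage nodes; succeed only if at
    least min_copies land, so one media failure can't lose data."""
    successes = 0
    for node in nodes:
        try:
            write(node, block)
            successes += 1
        except OSError:
            pass  # failed disk or node; keep trying the rest
    if successes < min_copies:
        raise RuntimeError(f"only {successes} copies written")
    return successes

store = {}
def write(node, block):
    if node == "disk-b":            # simulate a failed disk
        raise OSError("I/O error")
    store[node] = block

copies = replicate(b"payload", ["disk-a", "disk-b", "disk-c"], write)
print(copies)  # -> 2
```

Distributed file systems such as HDFS apply the same idea automatically by keeping multiple replicas of each block on different machines.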

Network crash

Data science and network administration are relatively distinct disciplines. It can therefore be easy for data scientists to overlook the central role that networks play in keeping data available, but they shouldn't. Gone are the days when all of your databases lived on the same computer you used to access them. Today, most IT resources are exposed over the network, and if the network goes down, data ceases to be available. This is why redundancy and backups are important when building your network infrastructure. Your internal network should have multiple switches and routers to protect against the failure of any single device. If you depend on the public Internet to access data, you might also consider maintaining connections through multiple Internet Service Providers, so that your connectivity survives the failure of any one provider.
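On the software side, a common complement to redundant hardware is retrying failed calls with exponential backoff, so that a brief network blip does not surface as unavailable data. A minimal sketch, where the flaky fetch below merely simulates a link that recovers:

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=0.1):
    """Retry a network call with exponential backoff so a transient
    outage doesn't make data appear permanently unavailable."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise          # give up after the last attempt
            time.sleep(backoff * (2 ** attempt))

calls = {"n": 0}
def flaky():
    """Hypothetical fetch that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("link down")
    return "rows"

print(fetch_with_retry(flaky))  # -> rows
```

Real clients would also cap total wait time and add jitter to the backoff, but the retry loop is the essential pattern.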

Poor data quality

Data sets that contain inconsistent data, missing data or redundant data are difficult to work with because they suffer from poor data quality. Even if you can access this data readily, your ability to put it to actual use is undercut by data quality issues. This is why building data quality into your IT operations is crucial.
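A basic form of "building data quality into your operations" is an automated check that flags the issues named above, missing values and duplicate records, before a data set reaches analysts. A minimal sketch over plain Python dicts (the sample rows are invented):

```python
def quality_report(rows, required):
    """Count rows with missing required fields and exact duplicates."""
    seen, missing, dupes = set(), 0, 0
    for row in rows:
        if any(row.get(f) in (None, "") for f in required):
            missing += 1
        key = tuple(sorted(row.items()))  # canonical form for dedup
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"rows": len(rows), "missing": missing, "duplicates": dupes}

rows = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": ""},        # missing value
    {"id": 1, "name": "Ada"},     # duplicate record
]
print(quality_report(rows, required=["id", "name"]))
# -> {'rows': 3, 'missing': 1, 'duplicates': 1}
```

Running a report like this on every ingest, and rejecting or quarantining batches that fail, keeps low-quality data from quietly becoming "unavailable in practice."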

Data compatibility challenges

Data that is readily usable on one type of platform or environment may not be compatible with another. For example, most mainframe data is not directly compatible with Hadoop. These issues also make data unavailable. The solution is automated data transformation tools that can convert data as needed between different formats.
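To make the mainframe example concrete: mainframe extracts are often fixed-width records, while Hadoop-era tools expect delimited or JSON data. A transformation step bridges the two. The field layout below is a made-up example, not a real copybook:

```python
import json

def parse_fixed_width(line, layout):
    """Convert one fixed-width record (common in mainframe extracts)
    into a dict that JSON-oriented tools can consume.

    layout: list of (field_name, width) pairs, in record order.
    """
    record, pos = {}, 0
    for name, width in layout:
        record[name] = line[pos:pos + width].strip()
        pos += width
    return record

layout = [("cust_id", 6), ("name", 10), ("balance", 8)]
line = "000042Ada Byron 00150.75"
print(json.dumps(parse_fixed_width(line, layout)))
# -> {"cust_id": "000042", "name": "Ada Byron", "balance": "00150.75"}
```

Real conversions also have to handle EBCDIC encoding and packed-decimal fields, which is why purpose-built transformation tools exist, but the shape of the job is the same.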

Slow data transfers

Depending on where your data is stored and where you work with it, it could take a long time to move data between locations. If your data lives in the public cloud and you need to download a large data set to a local computer, it may take a long time due to network bandwidth limitations. Slow data transformations could also get in the way of speedy data movement. Addressing these challenges requires building a data storage architecture that minimizes bottlenecks and friction when moving data. Tools that automate data conversions and transformations can be helpful, too.
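The bandwidth bottleneck is easy to quantify with back-of-the-envelope arithmetic. The efficiency factor below is an assumed allowance for protocol overhead, not a measured value:

```python
def transfer_hours(size_gb, bandwidth_mbps, efficiency=0.8):
    """Rough estimate of bulk-download time.

    size_gb: data set size in gigabytes (decimal units).
    bandwidth_mbps: link speed in megabits per second.
    efficiency: assumed fraction of the link usable for payload.
    """
    size_megabits = size_gb * 8 * 1000       # GB -> megabits
    seconds = size_megabits / (bandwidth_mbps * efficiency)
    return seconds / 3600

# A 500 GB data set over a 100 Mbps link:
print(round(transfer_hours(500, 100), 1))    # -> 13.9 (hours)
```

Estimates like this make the trade-off explicit: moving computation to the data (as Hadoop does) is often cheaper than moving the data to the computation.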

Legacy data

Even with data transformation tools at your disposal, some data is simply too outdated to be readily accessible with modern tooling. For tips on addressing this challenge, download our latest eBook, Bringing Big Data to Life, which explains how to bring legacy data into Hadoop. You can also check out our Building a Data Lake checklist report for tips on planning and launching successful data lake projects, even if you have legacy data to contend with.

To learn even more about the state of disaster recovery preparedness in organizations today, read Syncsort's full "State of Resilience" report.

Authored by Christopher Tozzi

Christopher Tozzi has written about emerging technologies for a decade. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, is forthcoming with MIT Press in July 2017.