
Data Availability 101: What Data Availability Means and How to Achieve It

What would happen if your business lost access to its mission-critical data? When required data is unavailable, IT operations effectively grind to a halt. According to IBM’s Global Study on the Economic Impact of IT Risk, an IT outage of even 20 minutes can cost a business more than $1 million. Even more important than the direct financial cost may be the loss of your company’s reputation and the trust of its customers. That’s why maintaining a high level of data availability is crucial to your organization’s continued success.

What Is Data Availability?

The term “data availability” refers to ensuring that required data is accessible when and where it is needed within an organization’s IT infrastructure, even when disruptions occur. Data that’s not accessible when needed is worthless. In fact, it’s worse than worthless: if systems have been built on the assumption that the data will be available, a catastrophic chain reaction of failures can occur when data that was counted on to be there is missing, or worse, present but outdated or otherwise compromised.

The Role of Data Durability

A critical but sometimes neglected aspect of data availability is not just accessibility but durability. Even when data remains accessible, its quality naturally degrades over time unless proactive corrective measures are taken.

Errors may be introduced if some disruption (or design flaw) causes a failure to update all distributed or archived copies of the data after a change. Also, any electronic data storage system is subject to what’s called bit rot. Dan Iacono, a research director at International Data Corporation, defines bit rot as “that one little piece of data, or bit, that goes bad after a period of time.” These uncorrectable bit errors may be caused by flaws that develop in storage media, or perhaps from electronic noise that generates a momentary glitch.

Because of the potential for data quality to deteriorate over time, maintaining high data availability requires taking steps to ensure that stored data is not only accessible but also valid at the time it is needed.
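One common way to check that stored data is still valid is to record a cryptographic checksum when the data is written and re-verify it on read. The sketch below illustrates the idea in Python using the standard library; the sample record and the simulated single-bit corruption are purely hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that can be stored alongside the data."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Re-hash the data and compare it against the stored digest."""
    return fingerprint(data) == expected_digest

# Record a checksum when the data is written...
original = b"mission-critical record"
stored_digest = fingerprint(original)

# ...then detect silent corruption (here, a single flipped bit) on read.
corrupted = bytes([original[0] ^ 0x01]) + original[1:]
assert verify(original, stored_digest)        # the intact copy passes
assert not verify(corrupted, stored_digest)   # the bit error is caught
```

A storage system that performs this kind of verification periodically (often called scrubbing) can detect bit rot early and repair the damaged copy from a redundant one before the data is needed.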

How To Ensure High Data Availability

Here are some keys to maintaining a high level of data availability: 

  • Have a plan – Maintaining data availability should be a central element in your company’s disaster recovery/business continuity plan. This should include RPO (recovery point objective) and RTO (recovery time objective) targets that define, respectively, exactly which data must be restored, and when it must be accessible, for operations to resume after a disruption.
  • Employ redundancy – Having backup copies of your data ensures that the failure of a storage component, or the deterioration of stored data over time, won’t result in permanent loss of the information.
  • Eliminate single points of failure – You should not only have multiple copies of your data, but also multiple access routes to it so that the failure of any one network component, storage device, or even server won’t make the data inaccessible.
  • Institute automatic failover – When an operational disruption occurs, automatic failover can ensure continuous data availability by instantly swapping in a backup to replace the affected component.
  • Take advantage of virtualization – The software-defined model for storage infrastructure helps maximize data availability. Because storage system functionality is accessed through software and is independent of the underlying hardware, you are less vulnerable to component failures or operational disruptions in a local facility.
  • Use the right tools – Rather than attempting to increase data availability in your IT infrastructure through home-grown ad hoc measures, employ tools specifically designed for that purpose. A good example is Syncsort’s MIMIX Availability, which delivers high availability and disaster recovery, including highly aggressive RPO and RTO targets, for IBM i servers. It replicates data changes on production servers to recovery servers in real time, with the ability to copy between different server models, storage types and OS versions.
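The redundancy and automatic-failover ideas above can be sketched in a few lines. This is a simplified illustration, not a production design: the endpoint names are hypothetical and the health check is stubbed in for demonstration.

```python
from typing import Callable, List

def pick_active(endpoints: List[str], is_healthy: Callable[[str], bool]) -> str:
    """Return the first healthy endpoint; callers fail over in list order."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    raise RuntimeError("no healthy endpoint available")

# Hypothetical primary and replica servers, in priority order.
endpoints = ["primary.example.com", "replica.example.com"]

# Simulate an outage of the primary; a real check would probe the server.
down = {"primary.example.com"}
active = pick_active(endpoints, lambda ep: ep not in down)
assert active == "replica.example.com"  # traffic fails over to the replica
```

Real high-availability products automate this decision continuously and also keep the replica's data synchronized, so the failover target is both reachable and current.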

If you’d like to learn more about how to maximize data availability in your company, please download our State of Resilience report.

Writer bio

Ron Franklin is a graduate of the University of Tennessee with a degree in Electrical Engineering. He is now retired from a career as an engineer and manager for IBM and several other high-tech companies. Ron has been a freelance IT writer since 2011.
