Techworld UK today published an industry insight article by yours truly on best practices for backing up and recovering data in the era of Big Data. Much as I discuss in my recent ‘Backup in the Era of Big Data’ video segment, the massive rates of data growth that organizations are experiencing today are impacting data protection on both the backup and the recovery side.
On the backup side, the amount of data you have to protect every day keeps growing, and it’s getting harder to keep up, especially if you are using older, outdated technologies. On the recovery side, single applications are growing to multiple terabytes, so recovering one can take hours if not days. An application may be offline for a significant stretch while it undergoes recovery, which makes it difficult to meet service level agreements for restoring mission-critical applications and data in a timely manner.
Block-level backup and snapshot technology could very well be the answer. Block-level backup tracks blocks of data beneath the file level. If files are too numerous or too large to move within the required backup window, the logical course of action is to investigate whether the changed data blocks can be moved instead, as the sketch below illustrates.
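To make the idea concrete, here is a minimal Python sketch of block-level change tracking. Everything in it is illustrative rather than any vendor’s implementation: the 4 KB block size, the per-block hash map, and the read_blocks and incremental_backup names are all my own assumptions, and real products track changes at the volume or LUN level rather than by rehashing a file.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative; real products track blocks at the volume level

    def read_blocks(path):
        """Yield (index, block) pairs for a file or raw device image."""
        with open(path, "rb") as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    return
                yield index, block
                index += 1

    def incremental_backup(path, previous_hashes):
        """Copy only the blocks whose contents changed since the last run.

        previous_hashes maps block index -> sha256 hex digest from the
        prior backup. Returns the changed blocks plus an updated hash map
        to feed into the next run.
        """
        changed, current_hashes = [], {}
        for index, block in read_blocks(path):
            digest = hashlib.sha256(block).hexdigest()
            current_hashes[index] = digest
            if previous_hashes.get(index) != digest:
                changed.append((index, block))  # new or modified block
        return changed, current_hashes

On the first run previous_hashes is empty, so every block counts as changed and you get a full backup; every run after that moves only the delta, which is what lets block-level backup fit inside a shrinking window.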
Snapshot technology works just like a camera: it captures data at a moment in time. That means data can be captured and recovered quickly, because each snapshot is a single, consistent, identifiable image. Snapshots can also use block-level data to re-create individual files or complete data volumes, giving users full-backup results at a fraction of the cost and effort of conventional backup. This approach is especially effective for recovery. But remember, a catalog of the data is essential to the snapshot approach; without one you won’t know where the data resides, and you’ll end up digging through hundreds of snapshots.
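The catalog requirement can also be sketched in a few lines of Python. The SnapshotCatalog class below and its record, find, and blocks_for methods are hypothetical names of my own, but they show the point: index which snapshot holds which files and blocks at backup time, and a restore starts with a lookup instead of a hunt.

    import time

    class SnapshotCatalog:
        """Index snapshots by the files they contain, so a restore begins
        with a lookup rather than a search through hundreds of snapshots."""

        def __init__(self):
            # snapshot_id -> (timestamp, {file_path: [block indexes]})
            self._snapshots = {}

        def record(self, snapshot_id, file_blocks):
            """Register a snapshot and the block map of every file it captured."""
            self._snapshots[snapshot_id] = (time.time(), dict(file_blocks))

        def find(self, file_path):
            """Return the IDs of snapshots holding file_path, newest first."""
            hits = [(ts, sid) for sid, (ts, files) in self._snapshots.items()
                    if file_path in files]
            return [sid for ts, sid in sorted(hits, reverse=True)]

        def blocks_for(self, snapshot_id, file_path):
            """Block indexes needed to re-create file_path from this snapshot."""
            return self._snapshots[snapshot_id][1][file_path]

The design choice is the same one the article argues for: the catalog is written when the snapshot is taken, so the knowledge of where the data resides never has to be reconstructed at recovery time, when every minute counts.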