One of the things about being in IT is that everyone has their own unique problems at the hands-on level, because no two organizations are identical. Each has its own unique combination of hardware, software, goals, mandates and personnel. Yet at a higher level, many challenges are common across organizations.
This is certainly the case when it comes to data protection. While certain issues would naturally seem to be widespread, it’s always good to have data. With that in mind, Syncsort commissioned UBM TechWeb to conduct a survey of IT and business professionals responsible for data protection. More than 300 were surveyed, and they were not told that Syncsort was the sponsor.
TechWeb then took the survey data and held discussions about it with users and analysts before ultimately producing a whitepaper. Since I didn’t write it, I can say without hesitation that the author, Karen Bannan, did a terrific job putting together one of the most easily digestible papers I’ve read in a long while. For those interested in obtaining a copy, you can do that right here with a simple registration. Karen and I also discussed the results in a recent webinar if you prefer to explore the information that way.
My plan is to discuss the results on this blog in three parts. This is Part 1, “Too Many Products.” Part 2 will be “Backup Takes Too Long” and Part 3 will be called “Recovery is at Risk.” Stay tuned and, as always, we want to hear what you think too. Please feel welcome to comment on the posts whether you agree or disagree with them!
We began the survey by asking the participants to check off any products they were using from a long list of data protection options. The results were as follows:
It is important to note that users were allowed to select more than one choice. In total, there were 693 selections from the 299 people that answered this question. That averages about 2.3 solutions per user, meaning that many respondents had two products while some had three or more.
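For the curious, the 2.3 figure is just total selections divided by respondents. A trivial sketch (numbers taken from the survey results above):

```python
# Multi-select survey math: respondents could pick more than one product.
total_selections = 693   # total product selections made
respondents = 299        # people who answered this question

avg_per_respondent = total_selections / respondents
print(f"Average solutions per respondent: {avg_per_respondent:.1f}")  # ~2.3
```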
The most widely used product by far was traditional file-based backup software, still used by 79 percent of respondents. What this means is that I clearly still have a lot of work to do to convince people to change their thinking and stop looking at the world as a bunch of files!
After that, the mix is quite varied. While we didn’t correlate the distribution of answers, it’s interesting that at the same time that 21 percent didn’t check “file-based backup software,” 25 percent did check “primary storage snapshots.” My guess would be that the small difference between these numbers is not a coincidence. As our partners at NetApp have been saying for years, snapshots are backups if you do them right (i.e., keep copies on more than one tier of storage).
Replication also showed interesting results, with 17 percent of respondents using disk-array based replication and 14 percent using a network-based replication device. That’s 31 percent using electronic replication for disaster recovery, meaning the remaining 69 percent are using off-site tape storage, right? This is actually not the case! However, we’ll save that discussion for Part 3 of this series.
I think it’s unfortunate that so many users are still not taking advantage of the tremendous benefits of electronic replication. Part of the difficulty is that array-based solutions are always silos of protection. Vendor A’s array can only replicate to another array from Vendor A (and sometimes not even across product lines within the same vendor portfolio). Vendor A cannot replicate to Vendor B. This siloed approach rapidly escalates costs and complexity.
The use of network-based replication avoids these silos because it works with different disk arrays. However, solutions like these tend to be very complicated and fragile. It’s never easy to have a single box interface universally with whatever disk arrays you happen to have. This is where NSB really can make a difference. Rather than replicate from all your primary disk arrays, NSB first centralizes all your backups on NetApp and then uses SnapMirror to replicate the data. Sure, it’s NetApp-to-NetApp, but you’ve captured all your primary data (both SAN and DAS) so the job gets done with only one replication solution for your whole shop. It also takes the load off your primary environment at the same time. This delivers all the benefits of network-based replication with none of the fragility or messy support matrices.
So what’s driving all these different solutions? It is clear to me that a number of factors are at work.
For starters, virtualization certainly plays a role. In our survey, 25 percent of respondents said they are using different products for physical and virtual backup. While significant, I think that number may have peaked and will trend downward, because I believe we are starting to see organizations consolidate physical and virtual backup. Early-phase virtualization projects started small, and data protection was handled by the virtualization admin, who was naturally inclined to use VM-specific tools. However, as the number of virtualized workloads in the organization increases and moves beyond test/dev and Tier 3 apps into Tiers 2 and 1, the data protection team pulls backup away from the virtualization admin and returns it to the larger, centralized backup process.
SharePoint is also an interesting phenomenon. Our survey showed that only 57 percent of SharePoint users are protecting it as part of their general backup environment, 28 percent are using a SharePoint-specific tool, and 15 percent aren’t protecting SharePoint at all! SharePoint is a unique beast when it comes to backup, and it’s clearly creating a lot of issues. While we didn’t ask the 57 percent if they were happy with their SharePoint backup, my guess, based on experience, is that many would say they are not.
Finally, I think data growth rates are driving the move to non-traditional technologies like block-level backup. But that’s a topic for Part 2, and this part is long enough already!