Hidden costs in virtualisation backup solutions
Despite this rapid growth, and even though virtual machines are now being deployed faster than physical servers, most virtual environments are still protected by backup systems designed for physical servers, and that protection is a poor match for the virtual infrastructure it is asked to cover.
Every production environment has requirements that cannot realistically be identified during a demo or a proof of concept, yet failing to account for them can lead to hefty costs in the months after deployment. Understanding your requirements, and these hidden costs, will equip you to make a better appraisal of any virtualisation-focused data protection system you are considering, and will lead to a far more favourable long-term outcome.
Growing hardware and network requirements
Backup storage often contains many copies of identical data, and deduplication technology can detect that redundant data and reduce the amount of disk space required. In practice, though, dedicated deduplication appliances are not as beneficial as they first seem. They cost more than commodity disk storage, and they only achieve their best efficiency within a single appliance. The minute an additional storage appliance is added to your network, duplication starts to creep back in, the benefit shrinks, and the problem gets worse as your storage requirements grow.
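To make the mechanism concrete, here is a minimal sketch of block-level deduplication in Python: data is split into fixed-size chunks, each chunk is identified by a content hash, and only previously unseen chunks are stored. The chunk size, the in-memory store and the function names are illustrative rather than taken from any particular product.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB chunks; real products tune this or use variable-size chunking

def deduplicate(stream, store):
    """Split a byte stream into chunks and keep only chunks the store has not seen.

    `store` is a dict mapping SHA-256 digest -> chunk bytes; a real system would
    persist this index. Returns the ordered list of digests (a "recipe") needed
    to reconstruct the stream later.
    """
    recipe = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # new data: store the chunk once
            store[digest] = chunk
        recipe.append(digest)            # duplicate data: keep only a reference
    return recipe

def restore(recipe, store):
    """Reassemble the original stream from its recipe and the chunk store."""
    return b"".join(store[digest] for digest in recipe)
```

The important detail is the scope of the `store` index: deduplication is only as good as the index behind it. When that index lives inside one appliance, a second appliance starts with an empty index and stores the same chunks all over again, which is exactly how duplication creeps back in.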
Similarly, when you start moving data around your network, WAN acceleration technology can dramatically improve transfer speeds and reduce bandwidth costs. WAN accelerators look for and eliminate redundancies in the data before transmission. Here again, though, the external appliances are pricey, which pushes up the overall cost of the solution.
Data protection software, by contrast, has a comprehensive central view of all data and is aware of versioning and retention policies, so a mature product can handle deduplication for both storage and network transfers without external appliances, and at higher efficiency. Advanced solutions can perform source-side deduplication, eliminating redundant copies of data before they are ever transmitted over the network.
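As a rough illustration of source-side deduplication, the sketch below reuses the chunk-hashing idea from the previous example to work out which chunks the backup server already holds before anything is sent. Representing the server-side catalogue as a simple set of digests is an assumption made for brevity.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative chunk size

def chunk_digests(data):
    """Map content hash -> chunk for the data about to be backed up."""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest(): data[i:i + CHUNK_SIZE]
        for i in range(0, len(data), CHUNK_SIZE)
    }

def plan_transfer(local_chunks, server_has):
    """Split chunks into those that must cross the network and those the backup
    server already holds, which the new backup can simply reference.

    `server_has` stands in for a catalogue lookup on the backup server.
    """
    to_send = {d: c for d, c in local_chunks.items() if d not in server_has}
    to_reference = [d for d in local_chunks if d in server_has]
    return to_send, to_reference
```

Because the negotiation happens before transmission, redundant data never touches the WAN. It is the same saving a WAN accelerator provides, but driven by the backup catalogue rather than by a separate appliance.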
Integrated database shocks
Like most applications that manage large amounts of data, backup and recovery solutions need a database. Conveniently, one is often bundled with the software, which lowers the cost and simplifies deployment. However, many point solutions ship with Microsoft’s SQL Server Express, which works perfectly well for small deployments. Once your deployment grows to support many production systems, many months of retained backups and possibly multiple locations, you will need an enterprise-class database, and that more robust functionality comes at a higher cost.
Microsoft’s SQL Server Enterprise can quickly change the economics of your software purchase, so knowing which edition your chosen backup solution uses, and how it will support the long-term needs of your environment, is important from a cost perspective.
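As a practical starting point, a short script along these lines can report which SQL Server edition the backup catalogue runs on and how large the catalogue database has grown; SQL Server Express caps each database’s data files at 10 GB (from 2008 R2 onwards). The connection string, the catalogue database name and the use of the pyodbc driver are assumptions made for the example.

```python
import pyodbc

# Placeholder connection details; substitute your backup server's catalogue instance.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=backupsrv\\SQLEXPRESS;Trusted_Connection=yes;"
)
CATALOG_DB = "BackupCatalog"   # hypothetical catalogue database name
EXPRESS_LIMIT_GB = 10          # per-database data-file cap for SQL Server Express

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    edition = cur.execute(
        "SELECT CAST(SERVERPROPERTY('Edition') AS nvarchar(128))"
    ).fetchval()
    # size is reported in 8 KB pages; type = 0 restricts the sum to data files,
    # which is what the Express limit counts.
    size_gb = cur.execute(
        "SELECT SUM(size) * 8.0 / 1024 / 1024 FROM sys.master_files "
        "WHERE DB_NAME(database_id) = ? AND type = 0",
        CATALOG_DB,
    ).fetchval() or 0.0

print(f"Edition: {edition}")
print(f"Catalogue size: {size_gb:.1f} GB")
if edition and "Express" in edition:
    print(f"Headroom before the {EXPRESS_LIMIT_GB} GB Express limit: {EXPRESS_LIMIT_GB - size_gb:.1f} GB")
```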
Slowing efficiency at scale
Efficiency can also be lost as you scale the number of servers you protect. Most backup systems use dedicated backup servers that connect to your storage devices, letting you scale up performance as your backup storage pool grows. A poorly written backup server module, however, wastes CPU cycles and disk I/O bandwidth, forcing you to deploy more backup servers earlier than necessary. That drives cost, both in extra software licences and hardware and in the additional human effort needed to manage those extra servers.
A well-written backup server module, on the other hand, will use resources efficiently and support multiple simultaneous read and write threads, getting the most value out of your hardware investment and delaying future purchases.
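To show what “multiple simultaneous read and write threads” means in practice, here is a minimal sketch in which several backup streams are copied concurrently through a thread pool rather than one after another; the file paths and worker count are purely illustrative.

```python
import shutil
from concurrent.futures import ThreadPoolExecutor

def back_up(job):
    """Copy one source stream to backup storage; stands in for a real backup task."""
    source_path, target_path = job
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB buffered copy
    return target_path

# Illustrative job list: (source, destination) pairs for several VM disk images.
jobs = [
    ("/vmfs/vm1-flat.vmdk", "/backup/vm1-flat.vmdk"),
    ("/vmfs/vm2-flat.vmdk", "/backup/vm2-flat.vmdk"),
    ("/vmfs/vm3-flat.vmdk", "/backup/vm3-flat.vmdk"),
]

# Running several streams in parallel keeps the CPU and the disk I/O channels busy
# rather than idle while a single stream runs; the worker count would be tuned to
# the backup server's cores and storage bandwidth.
with ThreadPoolExecutor(max_workers=4) as pool:
    for finished in pool.map(back_up, jobs):
        print(f"completed {finished}")
```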
Multiples mean doubling costs and effort
It’s wise to consider solutions that support multiple platforms – your physical, virtual and even cloud-based applications – all from a single management console. It’s tempting to go with a virtualisation-focused point backup product, but that choice comes at a cost: if the new backup system cannot support physical servers, you end up with a fragmented backup estate and double the costs in backup licences, maintenance contracts, management consoles, administrator skills, processes and policies.
The high cost of vendor lock-in without workload portability
Virtualisation is now mainstream architecture in our data centres. Its core component, the hypervisor, is beginning to commoditise, and there is far more choice than there used to be. Financial and operational pressures may push you to run multiple hypervisors, or to move some or all of your workloads to a different hypervisor.
This means that the data protection solution you choose must cover all the hypervisors you might consider using, not just the one you run today, and must be able to move backups from one to another quickly and without manual intervention. In other words, it must offer cross-platform recovery if it is to protect you from vendor lock-in.
True workload portability is the ability to recover a backup taken on any platform onto any other platform. It lets you experiment with very little risk and build a culture of operational agility, because new technology can be adopted quickly when the cost of a mistake is low. True workload portability is what will help us realise the dream: an IT department that delivers data, applications and services reliably while adopting beneficial new technologies just as quickly and seamlessly.