Companies facing IT budgetary pressures should not overlook testing their disaster recovery plans, especially those with virtualised environments, warned Glasshouse Technologies.
Jim Spooner, UK strategy services manager at the consultancy, which specialises in transforming a company’s IT infrastructure, was speaking at last month’s Data Centre World exhibition in London.
He said that while most companies had some element of disaster recovery (DR) in place to cover events such as terrorism, internal mistakes, staff sabotage, power blackouts and geographic risks such as flooding, many companies are not properly testing their DR plans, especially at a time when IT budgets are increasingly being constrained or trimmed.
“Testing DR is often overlooked, but it is a key issue,” said Spooner. “If you have invested £2 million ($2.7 million) in your DR environment, it makes no sense not to spend £50,000 ($69,500) testing your plan. Labs can be rented for DR testing purposes, and this should take place at the weekends in order to ensure minimal disruption to existing systems.”
“It is also worth remembering that often it is the non-technical issue that can be a show-stopper,” he continued. “For example, the primary site loses power but will eventually get power back online. When is that declared a disaster, and who decides?”
And when IT staff are suddenly confronted with a system collapse, “people can run around like headless chickens, deciding which is the best backup to use,” he said.
Spooner advised that companies should set recovery tiers. This means that companies must prioritise their most important assets and assign appropriate recovery times. “Not everything needs to be recovered in four hours,” he said. “So decide what applications and systems can take longer to recover.”
“Classify what data is important,” he added. “Some datasets don’t require the highest level of recoverability. You must understand different classes of service and how to manage those configurations.”
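Spooner’s advice on recovery tiers can be sketched in code. The following is a minimal, hypothetical illustration (the tier names, applications and RTO values are invented for the example, not taken from his talk): each tier carries a recovery time objective (RTO), and recovery proceeds in order of urgency.

```python
# Hypothetical sketch of recovery tiers: applications are grouped by business
# priority, and each tier carries a recovery time objective (RTO) in hours.
# Tier names, application names and RTO values are illustrative only.
RECOVERY_TIERS = {
    "tier-1": {"rto_hours": 4,  "apps": ["payments", "order-entry"]},
    "tier-2": {"rto_hours": 24, "apps": ["reporting", "intranet"]},
    "tier-3": {"rto_hours": 72, "apps": ["archive-search"]},
}

def recovery_order(tiers):
    """Return (application, rto_hours) pairs sorted by urgency, shortest RTO first."""
    ordered = []
    for _name, tier in sorted(tiers.items(), key=lambda kv: kv[1]["rto_hours"]):
        for app in tier["apps"]:
            ordered.append((app, tier["rto_hours"]))
    return ordered

print(recovery_order(RECOVERY_TIERS))
```

The point of the structure is exactly Spooner’s: not everything needs a four-hour recovery, so the plan makes the trade-off explicit per application rather than treating all systems alike.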
Spooner also warned against what he calls ‘rolling disasters’, where a company is replicating data off site, as is typical in a virtualised or cloud-based infrastructure.
“What happens if the recovery data itself is corrupted?” he asked. “It is vital these issues are considered.”
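One common guard against trusting a corrupted replica is to verify it against the source before failing over. The sketch below is an assumed illustration of that idea using checksums; the data values are invented and real replication products handle this internally:

```python
# Hypothetical sketch: before trusting a replica for recovery, compare its
# checksum against the source copy to detect silent corruption.
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the data, used as a cheap integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def replica_is_consistent(source: bytes, replica: bytes) -> bool:
    """True only if the replicated copy matches the source byte-for-byte."""
    return sha256_of(source) == sha256_of(replica)

primary = b"invoice #1001: paid"
good_copy = b"invoice #1001: paid"
corrupt_copy = b"invoice #1001: pai\x00"  # one corrupted byte

print(replica_is_consistent(primary, good_copy))     # True
print(replica_is_consistent(primary, corrupt_copy))  # False
```

The limitation, which is Spooner’s rolling-disaster point, is that replication faithfully copies corruption too: if the source is already damaged, the checksums still match, which is why older, independently verified backups matter.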
“In large organisations, not having data in synch is often worse than not having any data at all,” said Spooner. “For example, an invoice has been generated and the amount has been paid, but what department was it for?”
He also questioned how a company goes about protecting itself and its data when it is actually in disaster recovery mode. “How do they protect the new data that is being generated, when it can often take three weeks or more to get back to normal operations?” he said.
Spooner urged IT to get management support (or buy-in) for their DR plans. “DR is a business issue, not an IT issue,” he said.
Organisations must understand their business drivers and realise that technology is a secondary issue, he advised. He also urged companies to test the DR plan, benchmark their SLAs against reality, and get outside help from experts. And perhaps most important of all, he advised firms to stay ‘flexible’ – “your plan and tactics will change over time.”
This story, “The Disaster Recovery Imperative”, was originally published by Techworld.com.
“The Disaster Recovery Imperative” was posted on www.cio.com on March 10, 2009.