Legacy Migration
Migrating from legacy storage with the minimum of disruption requires a good strategy, support from the business and the right technology, such as storage virtualisation or migration appliances.

Before carrying out any migration, it is wise to analyse the data itself. Web documents, web pages and e-mails can constitute 80% of a company's data, and 50-60% of the documents stored can be duplicates.

"By analysing your content - its current state and its quality - in advance, you can reduce the challenge of migration". Data de-duplication can also bring significant cost savings.

As well as analysing the content, organisations can ease migration by carefully examining how applications access the storage (for example, as files, databases or raw volumes), and by looking at the connectivity in use (direct-attached, NAS, Fibre Channel SAN or IP SAN).
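How far this examination can be automated depends on the platform, but on Linux a first pass can come from the mount table. The rough sketch below buckets each mount by how the storage is reached; note it is a simplification, since Fibre Channel, iSCSI and direct-attached disks all appear as local block devices in /proc/mounts and cannot be told apart from there alone.

```python
# Rough first-pass inventory of how storage is attached, from /proc/mounts.
# NFS/CIFS imply NAS; FC, iSCSI and local disks all look like /dev/* here.

NETWORK_FS = {"nfs": "NAS (NFS)", "nfs4": "NAS (NFS)", "cifs": "NAS (CIFS/SMB)"}

def classify_mounts(mounts_path="/proc/mounts"):
    inventory = []
    with open(mounts_path) as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if fstype in NETWORK_FS:
                access = NETWORK_FS[fstype]
            elif device.startswith("/dev/"):
                access = "block (DAS or SAN)"  # FC/iSCSI/local look alike here
            else:
                continue  # skip pseudo-filesystems (proc, sysfs, tmpfs, ...)
            inventory.append((mountpoint, fstype, access))
    return inventory

if __name__ == "__main__":
    for mountpoint, fstype, access in classify_mounts():
        print(f"{mountpoint:30} {fstype:8} {access}")
```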

Migration Events
The secret to minimising disruption when migrating storage is to build the plan around migration events, each covering the storage linked to a particular server or application.

"A good plan will give a single application outage, keeping the rest online."

Organisations looking to perform data migration face a Catch-22. A "big bang" migration, in which all data is migrated in a single action, requires rigorous testing and auditing, which can mean lengthy downtime of 10 days or more.

The alternative, migrating data piecemeal over a longer period, means that because systems are constantly changing and growing, the migration must continually adapt to those changes, taking longer and longer.

One way to reduce the testing time in big bang migrations is to use analysis and testing tools that employ in-memory and massively parallel data processing techniques, so tests can be conducted quickly and migrations shortened to less than 48 hours in some cases.
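The same idea applies to post-migration verification: checksumming source and target copies in parallel rather than serially. Below is a minimal sketch using Python's standard multiprocessing pool; the file-pair list is a placeholder, and real migration tools compare database rows and block devices as well as files.

```python
import hashlib
from multiprocessing import Pool

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_pair(pair):
    """Compare one (source, target) file pair by content hash."""
    src, dst = pair
    return (src, dst, sha256_of(src) == sha256_of(dst))

def verify_all(pairs, workers=8):
    """Check many pairs concurrently; return the ones that do not match."""
    with Pool(workers) as pool:
        return [(s, d) for s, d, ok in pool.map(verify_pair, pairs) if not ok]

if __name__ == "__main__":
    # Hypothetical source/target pairs produced by the migration run.
    pairs = [("/old/data/a.dat", "/new/data/a.dat"),
             ("/old/data/b.dat", "/new/data/b.dat")]
    mismatches = verify_all(pairs)
    print(f"{len(mismatches)} mismatching pair(s): {mismatches}")
```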