December 9, 2009 4:52 PM GMT
Deploying a new system usually means a data migration from a legacy system whose data quality is poorer than the new system's data model allows. Obviously the ideal is to correct the data prior to migration, but what if that can't be done?
Small-volume fallout is typically acceptable and can be resolved manually, or by other means, after the production migration. The migration approach, whether big bang or phased, is also a factor that influences the strategy for resolving data quality issues.
Where high volumes of data are involved, however, you may face a dilemma: accept a large volume of fallout that can only be corrected over a long period of time, or delay the production migration until an acceptable level of data quality is achieved.
Another consideration: if the low-quality data can be identified as rarely used in business processes, or as not impacting critical functionality, it may be better not to spend budget correcting it at all until the point of use.
Alternatively, it may be an option to migrate the data to the new system as-is, flagging it to be fixed over time or marking it with a confidence-level indicator, as in the sketch below.
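To make the flagging idea concrete, here is a minimal Python sketch. The record fields, validation rules, and confidence penalties are hypothetical assumptions for illustration, not taken from any particular migration toolset.

from dataclasses import dataclass, field

@dataclass
class MigratedRecord:
    legacy_id: str
    phone: str
    email: str
    # Quality metadata attached during migration (hypothetical fields):
    needs_fix: bool = False    # flags the record for a post-migration fix queue
    confidence: float = 1.0    # 1.0 means the record passed every check
    issues: list = field(default_factory=list)

def assess(record: MigratedRecord) -> MigratedRecord:
    """Apply simple validation rules, downgrading confidence per failure
    rather than rejecting the record as fallout."""
    if "@" not in record.email:
        record.issues.append("invalid email")
        record.confidence -= 0.4
    if not record.phone.strip():
        record.issues.append("missing phone")
        record.confidence -= 0.2
    record.needs_fix = bool(record.issues)
    return record

# Every record migrates; low-confidence ones simply join the fix-later queue.
migrated = [assess(r) for r in (
    MigratedRecord("L001", "555-0100", "a@example.com"),
    MigratedRecord("L002", "", "not-an-email"),
)]
fix_queue = [r for r in migrated if r.needs_fix]

The appeal of this design is that data quality becomes a queryable attribute of the migrated data rather than a blocker to go-live: the fix queue can be worked through over time, prioritised by confidence or by how often the records are actually used.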