When it comes to migrating your OSS data to a new platform, architects are often faced with the difficult task of deciding on the best approach for loading data into the new OSS platform. Most OSS software packages have some sort of standardized API that can be used to load data into the application. It typically applies business rules and ensures that the integrity of the data being loaded is maintained. In many cases, leveraging these APIs for data load is the right way to go in an OSS data migration. It is often the safest approach, and for most OSS architects the safest approach often translates to the BEST approach. After all, playing the percentages hardly ever gets questioned.
However, there are many examples where the API approach can actually make a migration riskier, or even cause it to fail completely. Don't get me wrong: there are a number of valid and compelling arguments for leveraging prepackaged APIs to load data into OSS platforms. I would go as far as saying that in most cases it's the approach that should be taken. However, in my experience as an OSS professional, there is definitely no "one size fits all" approach to OSS data migration. Every OSS migration project needs to analyze the target application, the data requirements and the deployment approach to determine what is best for it. In some cases the API simply gets in the way of meeting your objectives.
On a recent OSS migration project, I determined that direct data load into the application tables and data files was the most appropriate way to migrate our client's legacy OSS data, given the project requirements. This decision was not taken lightly, however. A proof of concept and a deep analysis of the legacy data and application configuration were completed before the approach was adopted. The proof of concept helped us conclude that the prepackaged API had a number of shortcomings when analyzed purely in the context of our OSS data load objectives. I concluded that using the API might actually prevent us from achieving our goal of migrating our client's data over a weekend outage. In essence, we had all the evidence we needed to show that the richness of the API was overkill for what our migration solution required.
Essentially, the OSS platform had an API that was designed and built for OLTP-style functions and offered limited bulk load capability. Given that ours was a very large migration (50 million+ network inventory elements and all of their relationships), it was projected that loading through the API would take weeks to complete. Obviously, this created an impossible deployment situation and actually made the migration more complex and risky.
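To see why a per-record API becomes a deployment blocker at this scale, a back-of-envelope projection helps. The throughput figure below is an illustrative assumption, not a measurement from the project:

```python
# Rough projection for loading 50M records one API call at a time.
# The ~25 records/sec rate is a hypothetical single-threaded figure
# chosen for illustration; real API throughput varies widely.

RECORDS = 50_000_000        # network inventory elements to migrate
API_RATE_PER_SEC = 25       # assumed per-record API throughput

seconds = RECORDS / API_RATE_PER_SEC
days = seconds / 86_400     # seconds per day
print(f"Projected load time: {days:.1f} days")  # ~23 days
```

Even generous parallelism rarely closes a gap that large, which is why a weeks-long projection against a weekend outage window forces a rethink of the load mechanism itself.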
Through detailed API tracing and profiling, we determined that the migration team could create custom database scripts and packages that loaded network inventory data directly into the application tables and data files in a matter of a couple of days. By cutting out unnecessary validation in the API layer, we were in a much stronger position to meet our deployment requirements.
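The core idea can be sketched as replacing one validated API call per element with set-based inserts straight into the application tables. This is a minimal illustration using sqlite3 so it is self-contained; the table and column names are hypothetical, and a real migration would target the OSS application's own schema, typically with the database vendor's bulk-load tooling:

```python
# Minimal sketch of a direct bulk load, assuming data was already
# cleansed and validated upstream (the precondition that makes it
# safe to bypass the API's per-record checks).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE network_inventory (id INTEGER, name TEXT)")

# Batch rows into the table with a single set-based statement,
# instead of issuing one API call per inventory element.
legacy_rows = [(i, f"element-{i}") for i in range(10_000)]
conn.executemany("INSERT INTO network_inventory VALUES (?, ?)", legacy_rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM network_inventory").fetchone()[0]
print(count)  # 10000
```

The trade-off is explicit: the load is fast because validation moved upstream into the data preparation phase, which is exactly why the proof of concept and deep legacy-data analysis had to come first.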
I guess the lesson here is this: don't assume that your OSS data migrations need to be conventional, and don't be afraid to consider a non-API approach. It might just be the RIGHT one.