We can do all or part of it, depending on your own resources. The price is based strictly on the number of hours the work will take, so the condition of your existing data is the largest cost factor. If there's a lot of cleanup to be done, you may want to weigh the cost against the value of bringing the data over. But if you do decide to go ahead, we'll supply experienced professionals to make sure you get all of the data you require, cleanly and as efficiently as possible.
Data Migration Considerations
Data that have been around a while in a legacy system tend to have been tailored, over time if not from the beginning, to work in ways that suit the existing informatics infrastructure. An individual record, for instance, may only make sense when combined with a particular software application and/or other pieces of data. The convoluted ways in which data have been managed may be a nightmare, but in many cases it's a nightmare that has been in use for a long time and that many users are familiar with. This concept is known as "data gravity".
Data migration is typically done using a three-step process called "ETL", which stands for Extract, Transform, and Load.
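The three stages can be sketched end to end in a few lines of code. This is a minimal, hypothetical illustration: the sample records, field names, and the in-memory SQLite database standing in for the target cloud database are all assumptions, not part of any real LabLynx schema.

```python
import sqlite3

def extract():
    """Extract: pull raw records from the legacy source (hard-coded here)."""
    return [
        {"sample_id": "S-001", "received": "03/14/22"},
        {"sample_id": "S-002", "received": "03/15/22"},
    ]

def transform(rows):
    """Transform: widen two-digit years to four digits (mm/dd/yy -> mm/dd/yyyy)."""
    out = []
    for row in rows:
        mm, dd, yy = row["received"].split("/")
        out.append({"sample_id": row["sample_id"],
                    "received": f"{mm}/{dd}/20{yy}"})
    return out

def load(rows):
    """Load: insert the cleaned records into the target database."""
    db = sqlite3.connect(":memory:")  # stand-in for the new cloud database
    db.execute("CREATE TABLE samples (sample_id TEXT, received TEXT)")
    db.executemany("INSERT INTO samples VALUES (:sample_id, :received)", rows)
    db.commit()
    return db

db = load(transform(extract()))
print(db.execute("SELECT sample_id, received FROM samples").fetchall())
```

Real migrations replace each of these stubs with tooling, but the shape of the pipeline is the same: each stage's output is the next stage's input.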
The first step of the ETL process is to extract structured and unstructured data into a single staging repository. There are ETL tools that make this step fairly quick and easy. Once done, a boundary has been drawn around the data being migrated. However, the data must still be scrutinized and cleaned up before they are ready to go into your new cloud SQL Server or PostgreSQL database.
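To make the "single repository" idea concrete, here is a small sketch that pulls one structured source (CSV) and one unstructured source (free text) into a single staging list. Both sources are hypothetical examples, not real legacy data.

```python
import csv
import io

# Structured legacy export (CSV) and an unstructured free-text note,
# both invented for illustration.
structured = io.StringIO("sample_id,result\nS-001,7.2\nS-002,6.8\n")
unstructured = "Note 2022-03-14: S-001 re-tested after instrument fault."

staging = []
for row in csv.DictReader(structured):
    staging.append({"type": "record", "data": row})
staging.append({"type": "note", "data": unstructured})

# The boundary is now drawn: everything to be migrated sits in one place.
print(len(staging), "items staged")
```

Dedicated ETL tools do the same thing at scale, but the end state is identical: one staging area containing everything in scope for the migration.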
If your old data are already in a compatible format (e.g., SQL Server database records), then they may not need re-formatting, only cleanup, and can be migrated as a "pass-through". Otherwise, several processes may need to take place during the transformation step:
- Data discovery. The first step in data transformation is to identify the meaning and projected use of the data in their source format. A data profiling tool helps accomplish this. Assessing the data in this way helps define the re-formatting that needs to occur.
- Data mapping. This is the actual mapping of which source fields and records go where in the new schema.
- Generating code. To complete the transformation process, code must be created to run the transformation job. This code is generated using a data transformation tool or platform.
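The mapping step above often boils down to a simple lookup from legacy column names to new-schema names. The following sketch shows that idea; every column name here is a hypothetical example, not a real LIMS field.

```python
# Illustrative mapping of legacy column names to new-schema names.
COLUMN_MAP = {
    "SAMP_NO": "sample_id",
    "RCV_DT": "date_received",
    "TST_CD": "test_code",
}

def apply_mapping(record):
    """Rename legacy fields per the mapping; unmapped fields are dropped."""
    return {new: record[old] for old, new in COLUMN_MAP.items() if old in record}

legacy = {"SAMP_NO": "S-001", "RCV_DT": "03/14/2022",
          "TST_CD": "PH", "OBSOLETE_FLAG": 1}
print(apply_mapping(legacy))
```

In practice a transformation platform generates this kind of code from the mapping you define, but the output is the same: each legacy record re-keyed to the destination schema, with out-of-scope fields left behind.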
In addition to these basic steps, or instead of re-formatting, the data may need to be prepared in one or more of the following ways:
- Filter. This involves selecting the fields or columns to be migrated.
- Enrich. Format data according to planned usage. For example, dates stored as mm/dd/yy may be changed to mm/dd/yyyy.
- Split. A single column may need to be split into multiple columns, for example separating a full-name field into first-name and last-name fields.
- Join. Data from multiple sources may need to be combined into a single table or field.
- De-duplicate. Duplicate records need to be identified and removed.
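The five preparation operations above can each be expressed in a few lines. This sketch runs them in sequence over invented sample records; the fields, dates, and the assumption that all two-digit years fall in the 1900s are illustrative only.

```python
# Hypothetical legacy records, including one exact duplicate.
rows = [
    {"name": "Ada Lovelace", "dob": "12/10/15", "dept": "QC"},
    {"name": "Ada Lovelace", "dob": "12/10/15", "dept": "QC"},
    {"name": "Alan Turing", "dob": "06/23/12", "dept": "R&D"},
]

# Filter: keep only the columns being migrated (drop "dept").
rows = [{k: r[k] for k in ("name", "dob")} for r in rows]

# Enrich: widen two-digit years (assuming all dates fall in 1900-1999).
for r in rows:
    mm, dd, yy = r["dob"].split("/")
    r["dob"] = f"{mm}/{dd}/19{yy}"

# Split: break the full-name column into first- and last-name columns.
for r in rows:
    first, last = r.pop("name").split(" ", 1)
    r["first_name"], r["last_name"] = first, last

# De-duplicate: drop rows whose field values repeat exactly.
seen, unique = set(), []
for r in rows:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        unique.append(r)

# Join: merge in data from a second (invented) source, keyed on last name.
emails = {"Lovelace": "ada@example.com", "Turing": "alan@example.com"}
for r in unique:
    r["email"] = emails.get(r["last_name"], "")

print(unique)
```

Each operation is independent, so a real migration applies only the subset the source data actually needs, usually via an ETL tool rather than hand-written loops.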
After all necessary transformation operations have been performed, the data are ready to be loaded into the new database so they can be accessed in the new system.
The load step gets underway as the old records are loaded into the data warehouse of the new LIMS database. This includes:
- Execute code. The data transformation process that has been planned and coded is now put into action, and the data are converted to the correct output. They now exist in the LabLynx LIMS data warehouse of the cloud-hosted database (or on your own servers, if that is the option you have chosen).
- Review. Transformed data are checked to make sure the process has produced the desired results.
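A minimal load-and-review sketch might look like the following. The schema and records are invented stand-ins, and an in-memory SQLite database substitutes for the hosted data warehouse; only the pattern (bulk insert, then automated checks) is the point.

```python
import sqlite3

# Transformed records ready for loading (illustrative values).
transformed = [
    ("S-001", "03/14/2022"),
    ("S-002", "03/15/2022"),
]

db = sqlite3.connect(":memory:")  # stand-in for the hosted database
db.execute(
    "CREATE TABLE samples (sample_id TEXT PRIMARY KEY, date_received TEXT)"
)
db.executemany("INSERT INTO samples VALUES (?, ?)", transformed)
db.commit()

# Review: confirm the load produced the expected record count and that
# all dates now carry four-digit years.
count = db.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
assert count == len(transformed), "row count mismatch after load"
assert all(len(d.split("/")[2]) == 4
           for (d,) in db.execute("SELECT date_received FROM samples"))
print(f"Loaded and verified {count} records")
```

Spot checks by people who know the legacy data remain important, but scripted checks like these catch count mismatches and formatting regressions immediately.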
Once the process is complete and the data have been reviewed, the migration is accomplished. The legacy data are accessible in the LIMS data warehouse and can be searched at any time using multiple search criteria.
Pricing for these services can be found at Services Pricing.