Imagine the scenario: you have been brought into your organisation to sort out long-term data management across the enterprise. You could be working in a museum, an educational institution, a financial services company or a pharmaceutical company – you name it, the same scenario is likely to apply.
You are happily working out your strategy for storing all of the data that the enterprise creates daily, currently held on your shared drive, and perhaps researching suppliers – and then it gets sprung on you: the warehouse full of servers containing years' worth of abandoned data. No one quite knows what is on these servers, or indeed whether the data on them is important, and there is no metadata to look to for clues. Perhaps the data relates to expired clinical trials or mortgage applications. Perhaps there is a plethora of files from a previous client, or video files from a performance art collection that were filed away because nobody was sure what to do with them.
Whatever the situation, the issue is the same:
You need to manage your data properly today so that it is easy to find and use in the future. If you do not preserve your data adequately when you store it, ensuring that the appropriate metadata and context are stored alongside it, it will be much more difficult to find that data again when you need it.
Let’s imagine what your approach might be here. You know that you need to deal with the data sitting on these tapes or servers, and you are under pressure to find out what is on them. Do you say that it is not possible and throw out all of those years’ worth of data?
The typical response to such a problem is to deal with the immediate and visible problem of the racks of servers. But how do you ensure that you are not metaphorically painting the Golden Gate Bridge – where, by the time you’ve finished, you have to start all over again?
Stem the tide of data loss
It is often said that the data you hold today is the least data you will ever hold – data volumes only grow from here. So how do you avoid getting yourself into a sticky situation? The answer is that you need to stem the tide of data loss now and ensure that the problem of tapes in a box or servers in a warehouse does not grow exponentially. The longer you put off dealing with the problem, the bigger it will become and the harder it will be to tackle.
The key to managing and safeguarding your data for the long term is to ensure that the data you create today is managed for the future: with 100% data integrity, easy searchability and added context, so that the data is still meaningful when you refer back to it.
At Arkivum, we can help you with everything from pulling together your business case for why this matters now, through to delivering a well-oiled long-term data management machine that puts you ahead of the competition.