Authored by Daniel Hickmore, VP of Health & Life Sciences @ Arkivum
In previous blogs we talked about data appreciating in value over time, and about data being not a durable asset but a fragile one. Data is not durable like a house; it is more like a very fragile oil painting that must be carefully looked after and maintained, with significant resources involved. Also like an oil painting, data gets more valuable over time. Of course, that's not true of all data, but it certainly holds for pharmaceutical research data and regulated data.
Today we will be talking about the fallacy that data is non-rival in consumption, and the free rider potential that this belief creates.
We all use data, nearly all day, every day. We spend increasing amounts of time on our phones, on Google, on Facebook and so on, and all of this data costs us, though we often don't realise, or don't want to worry about, how much we pay for it. Most of us consider data like this, whether useful or entertaining, to be free; I would argue it is never free.
The problem is that we have come to believe the urban myth that once a data repository or store has been created, and the process of data preservation has been set up, expanding the user base costs nothing. In economics we describe this as data being non-rival in consumption, and it often creates a free rider potential.
I would argue that we can present the case for long-term data management, archiving and preservation in a different way, and persuade our business users of the same: data is not free just because it is there.
Are all users equal?
First, let's explore some questions. Are all users equal in a pharmaceutical or research-led organisation? I would suggest there are primary, secondary and even tertiary users. Primary users may be: records or document management; legal; archiving; regulatory; and even trial master file management.
All of these users are mostly considering the risks associated with the data rather than its benefits.
Secondary users, such as research and development groups, target discovery, preclinical trials and business users, are looking at the benefits of sharing and using interoperable data, rather than the risks associated with not managing the data. These benefits may include significant strategic competitive advantage for the organisation itself.
Tertiary users may be future potential users, for example: investigators; eDiscovery; acquiring companies; compliance. It is likely that each business user has a different use case for the data, and a different way of wanting to view, consider and access it. It is therefore essential to make the data interoperable and available across the organisation in a way that is flexible enough for this variety of use cases. Each use case has a cost associated with it, and each user adds incremental cost to the management of the data.
As we discussed in an earlier blog, it would be better to consider the archiving, preservation, usability and accessibility of critical data as an overlay to the business, not just a departmental cost.
In terms of free rider potential, if the data is taken as a whole and its criticality considered strategically, then there are no free riders, because sharing and collaboration are essential to the success of a research-led pharmaceutical company today.
A better approach is to incrementally increase the value of data through accessibility, searchability and preservation, in terms of both time and people. If that data is also authentic and credible, the value goes up even further. Value will increase with volume, variety, users, governance and time, but so will cost. Please refer to my previous blog for a breakdown of all the functions and jobs required to look after data.
Measuring the value of data is not constrained by typical transactional limitations. In fact, data as a currency exhibits a network effect: the same data can be used simultaneously across multiple use cases, thereby increasing its value to the organisation. This makes data a powerful currency in which to invest. A prudent valuation classification for data allows you to express its intangible but quantifiable value.
Data economic valuation process
We start the data economic valuation process by focusing on an organisation's key business initiative. Once we have identified a key business initiative upon which to focus, we then triage that initiative to identify 1) the business decisions that need to be made to support the business initiative, and 2) the data that might be useful in enabling "better" or improved decisions.
- Step 1: Determine financial value of the targeted business initiative. The first step should identify the targeted business initiative, and then capture the key financial metrics in order to create a rough estimate of the financial impact of the targeted business initiative.
- Step 2: Identify business decisions that support targeted business initiative. The second step combines business stakeholder interviews with a facilitated workshop to identify / brainstorm the decisions that the stakeholders need to make in support of the targeted business initiative.
- Step 3: Quantify value of individual decisions. The next step determines a rough order of magnitude financial value for each of the decisions. Note: the financial value is likely to be a range, but we will pick the most likely value (mean, mode, median) in order to keep the exercise manageable.
- Step 4: Assess value of each data source for each decision. Next, we need to determine a rough order of magnitude value for each data source with respect to how important that data source is in supporting the respective decisions.
- Step 5: Aggregate economic value for each data source. The final step aggregates the financial value of each data source across all the different business decisions to come up with a rough order of magnitude value for each data source. While this may not be a hard and fast number, it will provide the basis against which to make data acquisition, enhancement and enrichment decisions.
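The five steps above can be sketched as a simple aggregation. All of the initiative, decision and data-source names and the financial figures below are hypothetical, chosen purely for illustration; in practice they would come out of the stakeholder interviews and workshops described in Step 2.

```python
# A minimal sketch of the data economic valuation process.
# All names and figures are hypothetical, for illustration only.

# Step 1: rough financial value of the targeted business initiative
# (kept here as a sanity-check anchor for the scale of the numbers below).
initiative_value = 10_000_000

# Steps 2-3: decisions supporting the initiative, each assigned a single
# rough-order-of-magnitude value (the most likely point in its range).
decision_values = {
    "which trials to prioritise": 4_000_000,
    "which sites to target": 2_500_000,
    "which compounds to progress": 3_500_000,
}

# Step 4: relative importance (0 to 1) of each data source to each decision.
source_weights = {
    "trial master file": {
        "which trials to prioritise": 0.6,
        "which sites to target": 0.2,
        "which compounds to progress": 0.1,
    },
    "preclinical results": {
        "which trials to prioritise": 0.3,
        "which sites to target": 0.1,
        "which compounds to progress": 0.7,
    },
}

# Step 5: aggregate each source's weighted contribution across all decisions.
def source_value(weights, values):
    return {
        src: sum(values[decision] * w for decision, w in ws.items())
        for src, ws in weights.items()
    }

for src, val in source_value(source_weights, decision_values).items():
    print(f"{src}: ~£{val:,.0f}")
```

The output is deliberately a rough order of magnitude, not a precise figure: it simply ranks the data sources by how much decision value they support, which is the basis for acquisition, enhancement and enrichment choices in Step 5.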
A further advantage to the business is that this process can help answer the critical challenges faced by the industry and the company, such as:
- Improve cross-sell profiling
- Improve customer segmentation
- Improve targeting and prioritisation
- Improve offer effectiveness
- Improve re-targeting effectiveness
- Improve close effectiveness
- Reduce time-to-close
- Improve customer satisfaction
The advantages of this data economic valuation process include:
- By starting with a key business initiative, you have established the financial basis for "prudent value" that can then be used to ascertain the economic value of the supporting data sources
- You are forced through a process of identifying the different decisions necessary to support the targeted business initiative, and of associating a rough order of magnitude value with improving the effectiveness or outcomes of those decisions
- It forces the business users to contemplate and rank the perceived value of each data source vis-à-vis the decision that they are trying to optimise
- Finally, the valuation formula puts you in a position to attach a reasonable financial value to the different data sources, which can ultimately prioritise data acquisition, cleansing, transformation and enrichment activities
Ideally, one would want to take this exercise to the next level and add a process for determining the cost of acquiring each of the data sources. The cost would need to consider not only the cost to acquire the data, but also the cost to clean it up, align it, transform it and enrich it. Maybe that’s a topic for my Big Data MBA class to explore.
Not all data is created equal, that is, some data is more important than other data in informing the decisions that support the organisation’s key business initiative. Consequently, it’s important to have a rough estimate as to what data is most important to you in order to guide your “data as an asset” management strategy.