Data Value-Creation Loop
Thrive in the open data economy by closing the loop towards speed and value
The core infrastructure is in place for an open data economy. Dozens of teams are building on it. But it’s not obvious to these teams how to make $.
We ask:
How do people sustain and thrive in the emerging open data economy?
Our answer is simple: ensure that they can make money!
However, this isn’t enough. We need to dive deeper.
The next question is:
How do people make money in the open data economy?
Our answer is: create value from data, make money from that value, then loop back and reinvest the earnings into further growth.
We call this the Data Value-Creation Loop; the figure above illustrates it.
Let’s go through the steps of the loop.
At the top, the user gets data by buying it or spending $ to create it.
Then, they build an AI model from the data.
Then, they make predictions. E.g. “ETH will rise in the next 5 minutes”.
Then, they choose actions. E.g. “buy ETH”.
In executing these actions, the data scientist (or org) will make $ on average.
The $ earned is put back into buying more data, and other activities. And the loop repeats.
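To make the steps concrete, below is a minimal, self-contained Python sketch of a few trips around the loop. It assumes nothing beyond the standard library; the simulated price series, the toy mean-of-returns “model”, the $100 notional position, and all function names are hypothetical stand-ins for a real data market, AI model, and trading venue.

```python
# Toy sketch of the data value-creation loop:
# get data -> build a model -> predict -> act -> earn -> reinvest.
# The "market" is a simulated random walk and the "model" is a trivial
# mean-of-returns heuristic; nothing here is a real strategy or API.
import random

def buy_data(spend: float, n_points: int = 50) -> list[float]:
    """Step 1: get data. Spending more $ buys a longer (simulated) price series."""
    n = n_points + int(spend)
    prices, p = [], 100.0
    for _ in range(n):
        p *= 1.0 + random.gauss(0.0, 0.01)
        prices.append(p)
    return prices

def build_model(prices: list[float]) -> float:
    """Step 2: build an 'AI model', here just the mean of recent returns."""
    returns = [b / a - 1.0 for a, b in zip(prices, prices[1:])]
    return sum(returns) / len(returns)

def predict_up(model: float) -> bool:
    """Step 3: make a prediction, e.g. 'ETH will rise in the next 5 minutes'."""
    return model > 0

def act(prediction_up: bool) -> float:
    """Step 4: choose an action, e.g. buy ETH. Returns simulated profit/loss in $."""
    market_move = random.gauss(0.0, 0.01)        # what the market actually does
    position = 1.0 if prediction_up else -1.0    # go long if predicting a rise
    return 100.0 * position * market_move        # notional $100 position

budget = 10.0
for i in range(5):                               # each pass = one trip around the loop
    data = buy_data(spend=budget)                # get data
    model = build_model(data)                    # build a model
    signal = predict_up(model)                   # predict
    pnl = act(signal)                            # act
    budget += pnl                                # Step 5: reinvest earnings; repeat
    print(f"pass {i}: predict {'up' if signal else 'down'}, pnl ${pnl:+.2f}, budget ${budget:.2f}")
```

Run as-is, the expected profit is zero (the toy model has no real edge); the point is only the shape of the loop, where each pass feeds its earnings into the next pass’s data budget.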
In this loop, dapp builders can help their users make money; data scientists can earn directly; and crypto enthusiasts can catalyze the first two if incentivized properly (e.g. to curate valuable data).
If we unroll the loop, we get a data value supply chain. In most supply chains, the most value is created at the last step, right before the action is taken. Would you rather be a farmer in Costa Rica selling a sack of coffee beans for $5, or Starbucks selling 5 beans’ worth of coffee for $5?
Therefore, for data value supply chains, the most value creation is in the prediction step.
To the question “How do people make money in the open data economy?”, the answer “create value from data!” may seem like a truism. Don’t fool yourself: it’s highly useful in practice, because it says to focus only on activities that fully close the data value-creation loop.
However, this is still too open-ended. We need to dive deeper.
There are dozens of verticals and hundreds of possible opportunities for creating and closing data value-creation loops. How do you select among them? We’ve found that two measuring sticks help the most.
Key criteria:
1. How quickly can one go through the data value-creation loop?
2. What’s the $ size of the opportunity?
For (2), it’s not just “what’s the size of the market”, it’s also “can the product make an impact in the market and capture enough value to be meaningful”.
We analyzed dozens of possible verticals according to these criteria. For any given data application, the loop should be fast, with a serious $ opportunity.
Here are some examples.
Small $, slow. Traditional music is small $ and slow, because incumbents like Universal dominate by controlling the back catalogue.
Large $, slow. Medicine is large $ but slow, due to the approval process.
Small $, fast. Decentralized music is fast but small $ (for now! Fingers crossed).
We want: large $, fast. Here are the standouts.
Decentralized Finance (DeFi) is a great fit. One can loop at the speed of blocks (or faster), and trade volumes have serious $.
LLMs and modern AI are close: one can loop quickly and, with the right application, make $. The challenge: what’s the right application?
We encourage you - as a builder - to choose projects that close the data value-creation loop, especially loops with maximum $ and speed.
We follow our advice for internal projects too. Predictoor, Data Farming, and DeFi-oriented data challenges are standout examples.
To sustain and thrive in the open data economy: make money!
Do this by closing the data value-creation loop, in a vertical / opportunity where you can loop quickly and the $ opportunity is large.