Provider encrypts the URL and metadata during publishing and decrypts the URL when a dataset is downloaded or a compute job is started. It grants access to data assets by streaming the data itself (never exposing the URL). It performs on-chain checks for buyer permissions and payments, and it also provides compute services by connecting to a C2D (Compute-to-Data) environment.
Provider is a multichain component, meaning that it can handle these tasks on multiple chains with the proper configurations. The source code of Provider can be accessed from here.
As mentioned in the Setup a Server document, all Ocean components can be deployed in two types of configurations: simple, based on Docker Engine and Docker Compose, and complex, based on Kubernetes with Docker Engine. In this document, we will present how to deploy Provider in each of these configurations.
Deploying Provider using Docker Engine and Docker Compose
In this guide, we will deploy Provider for Sepolia (Eth test network). Therefore, please note that in the following configuration files, "11155111" is the chain ID for Sepolia.
Prerequisites
A server for hosting Provider. See this guide for how to create a server;
Docker Compose and Docker Engine are installed and configured on the server. See this guide for how to install these products.
The RPC URLs and API keys for each of the networks to which the Provider will be connected. See this guide for how to obtain the URL and the API key.
The private key which will be used by Provider to encrypt/decrypt URLs.
Steps
The steps to deploy the Provider using Docker Engine and Docker Compose are:
1. Create the /etc/docker/compose/provider/docker-compose.yml file
From a terminal console, create /etc/docker/compose/provider/docker-compose.yml file, then copy and paste the following content to it. Check the comments in the file and replace the fields with the specific values of your implementation.
```yaml
version: '3'
services:
  provider:
    image: oceanprotocol/provider-py:latest   # check https://hub.docker.com/r/oceanprotocol/provider-py for a specific tag
    container_name: provider
    restart: on-failure
    ports:
      - 8030:8030
    networks:
      backend:
    environment:
      ARTIFACTS_PATH: "/ocean-contracts/artifacts"
      NETWORK_URL: '{"11155111":"https://sepolia.infura.io/v3/<your INFURA project id>"}'
      PROVIDER_PRIVATE_KEY: '{"11155111":"<your private key>"}'
      LOG_LEVEL: DEBUG
      OCEAN_PROVIDER_URL: 'http://0.0.0.0:8030'
      OCEAN_PROVIDER_WORKERS: "1"
      IPFS_GATEWAY: "<your IPFS gateway>"
      OCEAN_PROVIDER_TIMEOUT: "9000"
      OPERATOR_SERVICE_URL: "https://stagev4.c2d.oceanprotocol.com"   # use a custom value for the Operator Service URL
      AQUARIUS_URL: "http://localhost:5000"   # use a custom value for the Aquarius URL
      REQUEST_TIMEOUT: "10"
networks:
  backend:
    driver: bridge
```
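Because Provider is multichain, the `NETWORK_URL` and `PROVIDER_PRIVATE_KEY` environment variables are JSON objects keyed by chain ID, so you can list several networks in one deployment. As an illustration, the hypothetical helper below (not part of Provider itself) shows how such a value can be validated before it is placed in the compose file:

```python
import json

def parse_chain_config(raw: str) -> dict:
    """Validate a NETWORK_URL/PROVIDER_PRIVATE_KEY-style JSON string.

    Both variables map a chain ID (as a decimal string) to a per-chain
    value: an RPC URL or a private key, respectively.
    """
    config = json.loads(raw)
    if not isinstance(config, dict):
        raise ValueError("chain config must be a JSON object")
    for chain_id in config:
        if not chain_id.isdigit():
            raise ValueError(f"chain ID {chain_id!r} is not numeric")
    return config

# "11155111" is the chain ID for Sepolia.
network_url = '{"11155111": "https://sepolia.infura.io/v3/<your INFURA project id>"}'
urls = parse_chain_config(network_url)
print(urls["11155111"])
```

To serve additional networks, add more chain-ID entries to the same JSON objects.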
2. Create the /etc/systemd/system/[email protected] file
From a terminal console, create the /etc/systemd/system/[email protected] file, then copy and paste the following content into it. This example file can be customized if needed.
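The unit file's content is not reproduced here. A minimal template along these lines should work, assuming the Docker Compose v2 plugin is available as `docker compose` and the compose file lives under /etc/docker/compose/<service name>/ as created in step 1:

```ini
[Unit]
Description=%i service with docker compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker/compose/%i
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Because this is a template unit (`@` in the filename), the same file can start any compose project: `systemctl enable --now docker-compose@provider` starts the Provider stack.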
Once started, the Provider service is accessible on localhost, port 8030/tcp. Run the following command to query the Provider. The output should be similar to the one displayed here.
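The command itself is not shown above; querying the root endpoint with curl is one way to do it (the response is a JSON document describing the service and its endpoints):

```shell
# Query the Provider root endpoint on the local host.
curl http://localhost:8030
```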
If needed, use the docker CLI to check the Provider service's logs.
First, identify the container id:
```bash
$ docker ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS              PORTS                                       NAMES
594415b13f8c   oceanprotocol/provider-py:v2.0.2   "/ocean-provider/doc…"   12 minutes ago   Up About a minute   0.0.0.0:8030->8030/tcp, :::8030->8030/tcp   provider
```
Then, check the logs from the Provider's docker container:
```bash
$ docker logs --follow provider
[2023-06-14 09:31:02 +0000] [8] [INFO] Starting gunicorn 20.0.4
[2023-06-14 09:31:02 +0000] [8] [INFO] Listening at: http://0.0.0.0:8030 (8)
[2023-06-14 09:31:02 +0000] [8] [INFO] Using worker: sync
[2023-06-14 09:31:02 +0000] [10] [INFO] Booting worker with pid: 10
2023-06-14 09:31:02 594415b13f8c rlp.codec[10] DEBUG Consider installing rusty-rlp to improve pyrlp performance with a rust based backend
2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO incoming request = http, GET, 172.18.0.1, /?
2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO root endpoint called
2023-06-14 09:31:12 594415b13f8c ocean_provider.run[10] INFO root endpoint response = <Response 1031 bytes [200 OK]>
[2023-06-14 09:41:53 +0000] [8] [INFO] Starting gunicorn 20.0.4
[2023-06-14 09:41:53 +0000] [8] [INFO] Listening at: http://0.0.0.0:8030 (8)
[2023-06-14 09:41:53 +0000] [8] [INFO] Using worker: sync
[2023-06-14 09:41:53 +0000] [10] [INFO] Booting worker with pid: 10
2023-06-14 09:41:54 594415b13f8c rlp.codec[10] DEBUG Consider installing rusty-rlp to improve pyrlp performance with a rust based backend
2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO incoming request = http, GET, 172.18.0.1, /?
2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO root endpoint called
2023-06-14 09:42:40 594415b13f8c ocean_provider.run[10] INFO root endpoint response = <Response 1031 bytes [200 OK]>
```
Deploying Provider using Kubernetes with Docker Engine
In this example, we will run Provider as a Kubernetes deployment resource. We will deploy Provider for Sepolia (Eth test network). Therefore, please note that in the following configuration files, "11155111" is the chain ID for Sepolia.
Prerequisites
A server for hosting Provider. See this guide for how to create a server;
Kubernetes with Docker Engine is installed and configured on the server. See this chapter for information on installing Kubernetes.
The RPC URLs and API keys for each of the networks to which the Provider will be connected. See this guide for how to obtain the URL and the API key.
The private key that will be used by Provider to encrypt/decrypt URLs.
Aquarius is up and running
Steps
The steps to deploy the Provider in Kubernetes are:
From a terminal window, create a YAML file (in our example the file is named provider-deploy.yaml) then copy and paste the following content. Check the comments in the file and replace the fields with the specific values of your implementation (RPC URLs, the private key etc.).
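The deployment manifest's content is not reproduced here. A minimal sketch, assuming the oceanprotocol/provider-py image and a Sepolia-only configuration mirroring the Docker Compose example, could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider
  labels:
    app: provider
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider
  template:
    metadata:
      labels:
        app: provider
    spec:
      containers:
        - name: provider
          image: oceanprotocol/provider-py:v2.0.2   # check Docker Hub for the tag you want to run
          ports:
            - containerPort: 8030
          env:
            - name: ARTIFACTS_PATH
              value: "/ocean-contracts/artifacts"
            - name: NETWORK_URL
              value: '{"11155111":"https://sepolia.infura.io/v3/<your INFURA project id>"}'
            - name: PROVIDER_PRIVATE_KEY
              value: '{"11155111":"<your private key>"}'
            - name: LOG_LEVEL
              value: DEBUG
            - name: OCEAN_PROVIDER_URL
              value: "http://0.0.0.0:8030"
            - name: OCEAN_PROVIDER_WORKERS
              value: "1"
            - name: OCEAN_PROVIDER_TIMEOUT
              value: "9000"
            - name: OPERATOR_SERVICE_URL
              value: "https://stagev4.c2d.oceanprotocol.com"   # use your own Operator Service URL
            - name: AQUARIUS_URL
              value: "http://<your Aquarius URL>:5000"   # use your own Aquarius URL
```

Apply it with `kubectl apply -f provider-deploy.yaml`. In production, the private key should be sourced from a Kubernetes Secret rather than a plain `value` field.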
The next step is to create a Kubernetes service (e.g. ClusterIP, NodePort, LoadBalancer, ExternalName) for this deployment, depending on the environment's specifications. Follow this link for details on how to create a Kubernetes service.
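As an illustration, a minimal ClusterIP service exposing the deployment on port 8030 inside the cluster could look like this (the `app: provider` selector assumes the labels used in your deployment manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: provider
spec:
  type: ClusterIP
  selector:
    app: provider
  ports:
    - protocol: TCP
      port: 8030
      targetPort: 8030
```

Switch `type` to NodePort or LoadBalancer if the Provider must be reachable from outside the cluster.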