Install

To get started with the Ocean CLI, follow these steps for a seamless setup:

Clone the Repository

Begin by cloning the repository. You can achieve this by executing the following command in your terminal:

$ git clone https://github.com/oceanprotocol/ocean-cli.git

Cloning the repository will create a local copy on your machine, allowing you to access and work with its contents.
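Next, change into the newly created project directory (assuming the default folder name produced by git clone):

cd ocean-cli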

Install NPM Dependencies

After successfully cloning the repository, install the necessary npm dependencies to ensure that the project functions correctly. This can be done with the following command:

npm install

Build the TypeScript Code

To compile the TypeScript code and prepare the CLI for use, execute the following command:

npm run build

Now, let's configure the environment variables required for the CLI to function effectively. 🚀

Setting Environment Variables 🌐

To configure the CLI tool, two essential steps are required: setting the account's private key and defining the desired RPC endpoint. Both are needed for the CLI tool to function.

Private Key Configuration

The CLI tool requires either the account's private key (exported as the PRIVATE_KEY environment variable) or a mnemonic (exported as the MNEMONIC environment variable). Either one serves as the means by which the CLI tool connects to the associated wallet, and it plays a crucial role in authenticating and authorizing the operations performed by the tool. You must choose one option or the other; the tool will not use both simultaneously.

export PRIVATE_KEY="XXXX"

or

export MNEMONIC="XXXX"

RPC Endpoint Specification

Additionally, you must specify the RPC endpoint corresponding to the network on which you want to execute operations. The CLI tool relies on this user-provided RPC endpoint to connect to that network, which is vital as it enables the tool to interact with the blockchain and execute operations.

export RPC='XXXX'
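For example, to operate on the Polygon network you could point the CLI at a public endpoint. The URL below is just one well-known public Polygon RPC endpoint, used here as an illustration; any endpoint for your target network works:

export RPC='https://polygon-rpc.com'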

Furthermore, there are additional environment variables that can be configured to enhance the flexibility and customization of the environment. These include the metadata cache (Aquarius) URL and the Provider URL, which can be specified if you prefer to use a custom deployment of Aquarius or Provider instead of the default settings. You also have the option to provide a custom address file path if you wish to use customized smart contracts or deployments for your specific use case. Remember, setting the following environment variables is optional.

export AQUARIUS_URL='XXXX'

export PROVIDER_URL='XXXX'

export ADDRESS_FILE='../path/to/your/address-file'
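As an illustration, a local development setup might look like the sketch below. The localhost ports are only assumptions based on common local Ocean deployments; substitute the URLs of your own Aquarius and Provider instances:

export AQUARIUS_URL='http://localhost:5000'
export PROVIDER_URL='http://localhost:8030'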

Usage

To explore the commands and option flags available in the Ocean CLI, simply run the following command:

npm run cli h

With the Ocean CLI successfully installed and configured, you're ready to dive into its capabilities and unlock the full potential of Ocean Protocol. If you encounter any issues during the setup process or have questions, feel free to seek assistance from the support team. 🌊


Ocean CLI

CLI tool to interact with Ocean Protocol's JavaScript library to privately & securely publish, consume, and run compute on data.

Welcome to the Ocean CLI, your powerful command-line tool for seamless interaction with Ocean Protocol's data-sharing capabilities. 🚀

The Ocean CLI offers a wide range of functionalities, enabling you to:

  • Publish 📤 data services: downloadable files or compute-to-data.

  • Edit ✏️ existing assets.

  • Consume 📥 data services, ordering datatokens and downloading data.

  • Run compute jobs 💻 on publicly available datasets using a published algorithm. A free version of the compute-to-data feature is available.

Key Information

The Ocean CLI is powered by the ocean.js JavaScript library, an integral part of the Ocean Protocol toolset. 🌐

Let's dive into the CLI's capabilities and unlock the full potential of Ocean Protocol together! If you're ready to explore each functionality in detail, simply go through the next pages.


Publish

Once you've configured the RPC environment variable, you're ready to publish a new dataset on the connected network. The flexible setup allows you to switch to a different network simply by substituting the RPC endpoint with one corresponding to another network. 🌐

For the Ocean CLI setup and configuration, please consult the Install section first.

To initiate the dataset publishing process, we'll start by updating the helper DDO (Decentralized Data Object) example named "simpleDownloadDataset.json". This example can be found in the ./metadata folder, located at the root directory of the cloned Ocean CLI project.

The provided example creates a consumable asset with a predetermined price of 2 OCEAN. If you wish to create an asset that is freely accessible instead, replace the value of "stats.price.value" with 0 in the JSON example shown below.
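For instance, with a free asset the stats section of the example would then read as follows (a minimal sketch; the rest of the DDO stays unchanged):

	"stats": {
		"allocated": 0,
		"orders": 0,
		"price": {
			"value": 0
		}
	}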


{
	"@context": ["https://w3id.org/did/v1"],
	"id": "",
	"nftAddress": "",
	"version": "4.1.0",
	"chainId": 80001,
	"metadata": {
		"created": "2021-12-20T14:35:20Z",
		"updated": "2021-12-20T14:35:20Z",
		"type": "dataset",
		"name": "ocean-cli demo asset",
		"description": "asset published using ocean cli tool",
		"tags": ["test"],
		"author": "oceanprotocol",
		"license": "https://market.oceanprotocol.com/terms",
		"additionalInformation": {
			"termsAndConditions": true
		}
	},
	"services": [
		{
			"id": "ccb398c50d6abd5b456e8d7242bd856a1767a890b537c2f8c10ba8b8a10e6025",
			"type": "access",
			"files": {
				"datatokenAddress": "0x0",
				"nftAddress": "0x0",
				"files": [
					{
						"type": "url",
						"url": "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-abstract10.xml.gz-rss.xml",
						"method": "GET"
					}
				]
			},
			"datatokenAddress": "",
			"serviceEndpoint": "https://v4.provider.oceanprotocol.com",
			"timeout": 86400
		}
	],
	"event": {},
	"nft": {
		"address": "",
		"name": "Ocean Data NFT",
		"symbol": "OCEAN-NFT",
		"state": 5,
		"tokenURI": "",
		"owner": "",
		"created": ""
	},
	"purgatory": {
		"state": false
	},
	"datatokens": [],
	"stats": {
		"allocated": 0,
		"orders": 0,
		"price": {
			"value": "2"
		}
	}
}
Now, let's run the command to publish the dataset:

npm run cli publish metadata/simpleDownloadDataset.json

Executing this command will initiate the dataset publishing process, making your dataset accessible and discoverable on the Ocean Protocol network. 🌊

Edit

To make changes to a dataset, you'll need to start by retrieving the asset's Decentralized Data Object (DDO).

Retrieve DDO

Obtaining the DDO of an asset is a straightforward process. You can accomplish this task by executing the following command:

npm run cli getDDO 'assetDID'

Edit the Dataset

After retrieving the asset's DDO and saving it as a JSON file, you can proceed to edit the metadata as needed. Once you've made the necessary changes, use the following command to apply the updated metadata:

npm run cli editAsset 'DATASET_DID' 'PATH_TO_UPDATED_FILE'
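Putting the two steps together, a typical edit session might look like the following sketch. It assumes the getDDO output can be redirected into a file (you may need to strip any extra npm output), and the DID is a placeholder:

# 1. Retrieve the asset's DDO and save it locally
npm run cli getDDO 'did:op:xxxx' > updatedDDO.json
# 2. Edit updatedDDO.json as needed, e.g. metadata.name or metadata.description
# 3. Apply the updated metadata
npm run cli editAsset 'did:op:xxxx' 'updatedDDO.json'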

Consume

The process of consuming an asset is straightforward. To achieve this, you only need to execute a single command:

npm run cli download 'assetDID' 'download-location-path'

In this command, replace assetDID with the specific DID of the asset you want to consume, and download-location-path with the path where you wish to store the downloaded asset content.

Once executed, this command orchestrates both the ordering of a datatoken and the subsequent download operation. The asset's content will be automatically retrieved and saved at the specified location, simplifying the consumption process for users.
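As a concrete usage example, downloading an asset into a local ./downloads folder would look like this (the DID is a placeholder):

npm run cli download 'did:op:xxxx' './downloads'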


Run C2D Jobs

Get Compute Environments

To proceed with compute-to-data job creation, the prerequisite is to select the preferred environment to run the algorithm on. This can be accomplished by running the CLI command getComputeEnvironments, like so:

npm run cli getComputeEnvironments

Start a Compute Job 🎯

Initiating a compute job can be accomplished through two primary methods.

  1. The first approach involves publishing both the dataset and the algorithm, as explained in the previous section. Once that's completed, you can proceed to initiate the compute job.

  2. Alternatively, you have the option to explore available datasets and algorithms and kickstart a compute-to-data job by combining your preferred choices.

To illustrate the latter option, you can use the following command:

npm run cli startCompute 'DATASET_DID' 'ALGO_DID'

In this command, replace DATASET_DID with the specific DID of the dataset you intend to utilize and ALGO_DID with the DID of the algorithm you want to apply. By executing this command, you'll trigger the initiation of a compute-to-data job that harnesses the selected dataset and algorithm for processing.

Start a Free Compute Job 🎯

To run an algorithm for free by starting a compute job, follow these steps. Note: only for free compute jobs, the user is not required to provide a dataset on the command line. The required command-line parameters are the algorithm DID and the environment ID, retrieved from the getComputeEnvironments command.

  1. The first step involves publishing the algorithm, as explained in the previous section. Once that's completed, you can proceed to initiate the compute job.

  2. Alternatively, you have the option to explore available algorithms and kickstart a free compute-to-data job by combining your preferred choices.

To illustrate the latter option, you can use the following command for running a free compute job with additional datasets:

npm run cli freeStartCompute ['DATASET_DID1','DATASET_DID2'] 'ALGO_DID' 'ENV_ID'

In this command, replace DATASET_DID1 and DATASET_DID2 with the specific DIDs of the datasets you intend to utilize, ALGO_DID with the DID of the algorithm you want to apply, and ENV_ID with the environment for free compute returned by npm run cli getComputeEnvironments. By executing this command, you'll trigger a free compute-to-data job with the provided algorithm. A free compute job can also be run without any published datasets; only the algorithm and the environment are required:

npm run cli freeStartCompute [] 'ALGO_DID' 'ENV_ID'

NOTE: For the zsh shell, please surround [] with quotes, like this: "[]".
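Applied to the command above, the zsh-safe form would be:

npm run cli freeStartCompute "['DATASET_DID1','DATASET_DID2']" 'ALGO_DID' 'ENV_ID'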

Download Compute Results 🧮

To obtain the compute results, we'll follow a two-step process. First, we'll employ the getJobStatus command, patiently monitoring its status until it signals the job's completion. Afterward, we'll utilize the downloadJobResults command to acquire the actual results.
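Put together, the two steps might look like the sketch below (the DIDs and job ID are placeholders, and result index 0 is just an assumed choice among the job's available results):

# Step 1: poll the job status until it reports completion
npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'
# Step 2: once finished, download one of the results to a local folder
npm run cli downloadJobResults 'JOB_ID' 0 './results'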

Retrieving Algorithm Logs

To monitor the algorithm's execution logs and its setup configuration, this command does the trick:

npm run cli computeStreamableLogs

Monitor Job Status

To track the status of a job, you'll require both the dataset DID and the compute job DID. You can initiate this process by executing the following command:

npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'

Executing this command will allow you to observe the job's status and verify its successful completion.

Download C2D Results

For the second step, the dataset DID is no longer required. Instead, you'll need to specify the job ID, the index of the result you wish to download from the available results for that job, and the destination folder where you want to save the downloaded content. The corresponding command is as follows:

npm run cli downloadJobResults 'JOB_ID' 'RESULT_INDEX' 'DESTINATION_FOLDER'
