Run C2D Jobs

Get Compute Environments

To create a compute-to-data job, you first need to select the environment your algorithm will run in. You can list the available environments with the getComputeEnvironments CLI command:

npm run cli getComputeEnvironments
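The command prints the compute environments exposed by the node you're connected to. The exact fields vary by node and CLI version (everything here other than the id field is an assumption), but each entry carries an id, which is the ENV_ID you'll pass to freeStartCompute below. An illustrative, not verbatim, output sketch:

# Illustrative output shape; only the "id" field is relied on below:
# [ { "id": "0x4e4c...", ... } ]
# The "id" value is the ENV_ID used by freeStartCompute.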

Start a Compute Job 🎯

Initiating a compute job can be accomplished through two primary methods.

  1. The first approach involves publishing both the dataset and algorithm, as explained in the previous section. Once that's completed, you can proceed to initiate the compute job.

  2. Alternatively, you have the option to explore available datasets and algorithms and kickstart a compute-to-data job by combining your preferred choices.

To illustrate the latter option, you can use the following command:

npm run cli startCompute 'DATASET_DID' 'ALGO_DID'

In this command, replace DATASET_DID with the DID of the dataset you intend to use and ALGO_DID with the DID of the algorithm you want to apply. Executing this command starts a compute-to-data job that runs the selected algorithm against the selected dataset.
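For example, a concrete invocation might look like the following; the DIDs are hypothetical placeholders (Ocean DIDs take the form did:op:<hash>):

# Hypothetical DIDs, for illustration only
npm run cli startCompute 'did:op:a5b2...' 'did:op:c9d4...'

Keep the job ID reported when the job is created; you'll need it later to monitor the job and download its results.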

Start a Free Compute Job 🎯

You can also run an algorithm for free by starting a free compute job. Note that for free compute only, the dataset is optional on the command line; the required parameters are the algorithm DID and the environment ID retrieved from the getComputeEnvironments command.

As with paid jobs, you can explore the available algorithms (and, optionally, datasets) and kickstart a free compute-to-data job by combining your preferred choices. To run a free compute job with additional datasets, use the following command:

npm run cli freeStartCompute ['DATASET_DID1','DATASET_DID2'] 'ALGO_DID' 'ENV_ID'

In this command, replace DATASET_DID1 and DATASET_DID2 with the DIDs of the datasets you intend to use, ALGO_DID with the DID of the algorithm you want to apply, and ENV_ID with the environment ID returned by npm run cli getComputeEnvironments. Executing this command starts a free compute-to-data job with the provided algorithm. A free compute job can also run without any published datasets; only the algorithm and environment are required:

npm run cli freeStartCompute [] 'ALGO_DID' 'ENV_ID'

NOTE: In the zsh shell, surround [] with quotes, like this: "[]".
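Putting it together, a free run without datasets might look like this, with a hypothetical algorithm DID and an environment ID taken from the getComputeEnvironments output (the quoted "[]" form works in both bash and zsh):

# Hypothetical values, for illustration only
npm run cli freeStartCompute "[]" 'did:op:c9d4...' '0x4e4c...'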

Download Compute Results 🧮

To obtain the compute results, we'll follow a two-step process. First, we'll use the getJobStatus command, monitoring the job's status until it signals completion. Afterward, we'll use downloadJobResults to acquire the actual results.
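Here is that flow as a minimal sketch, using only the two commands detailed in the sections below (replace the placeholders with your own values):

# Step 1: check the job status; re-run until it reports completion
npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'

# Step 2: once the job has finished, download a result into a local folder
npm run cli downloadJobResults 'JOB_ID' 'RESULT_INDEX' 'DESTINATION_FOLDER'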

Retrieving Algorithm Logs

To monitor the algorithm's execution logs, along with its setup and configuration output, this command does the trick:

npm run cli computeStreamableLogs

Monitor Job Status

To track the status of a job, you'll need both the dataset DID and the compute job ID. You can initiate this process by executing the following command:

npm run cli getJobStatus 'DATASET_DID' 'JOB_ID'

Executing this command will allow you to observe the job's status and verify its successful completion.
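For example, with hypothetical values, where the job ID is the one reported when the job was started (its exact format depends on the node, so use it verbatim as printed):

# Hypothetical values, for illustration only
npm run cli getJobStatus 'did:op:a5b2...' 'b1c2d3...'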

Download C2D Results

For this second step, the dataset DID is no longer required. Instead, you'll need to specify the job ID, the index of the result you wish to download from the results available for that job, and the destination folder where you want to save the downloaded content. The corresponding command is as follows:

 npm run cli downloadJobResults 'JOB_ID' 'RESULT_INDEX' 'DESTINATION_FOLDER'
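For example, to save one result of a finished job into a local ./results folder (the job ID is hypothetical, and this assumes result indices start at 0):

# Hypothetical job ID; result indexing assumed zero-based
npm run cli downloadJobResults 'b1c2d3...' 0 ./results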

