
AI for Earth Challenge 2024

Earth observation is having a moment. There’s tremendous energy and opportunity for impact. We invite engineers, scientists, researchers, and enthusiasts to build with us.

Sign Up Now

About the Challenge

The team at Clay, in collaboration with Clark University and Development Seed, created the #AIForEarthChallenge2024 to unlock new insights about the Earth, develop powerful tools to better understand and protect our planet, and create a benchmark that rewards AI satellite models for operational excellence.

This challenge represents our commitment toward collaborating in the open to scale solutions for climate, nature, and people. It is supported through Clay's Social Impact Partnership with Amazon Web Services.

About Clay

Clay is a mission-driven 501(c)(3) nonprofit project. Our team is focused on supporting the emergent field of AI development for Earth observation through common infrastructure, benchmarks, and applications to concentrate effort and accelerate impact.

Overview

We have chosen seven tasks intended to highlight the operational value of foundation models and the types of signal and function they need to handle, as proposed here.

Types of signal: Spatial, Spectral, and Temporal
Types of function: Object Detection, Change Detection, Classification, Regression, Generative

  • Landcover classification [Spectral, Classification]: Classify yearly land cover over an AoI into the following classes: No Data, Water, Trees, Flooded vegetation, Crops, Built area, Bare ground, Snow/ice, Clouds, and Rangeland. The scoring and time reference will be the Impact Observatory 10 m land cover product.

  • Aquaculture detection [Spatial, Object Detection]: Identify the presence and location of aquaculture facilities in a given year. The scoring and time reference is the open aquaculture databases from Canada, California, and Norway. This will be assessed with a simulation of human-in-the-loop learning (a minimal sketch follows this list), where 1) the first 10 similarity-search results are returned, 2) positive and negative results are labeled, and 3) the search is performed once again, with these second-pass results used for the evaluation metric.

  • Above-ground carbon stock [Spectral, Regression]: Estimate the above-ground carbon stock, with the HGB estimates at 300 m resolution as the scoring and time reference.

  • Crop yield estimation [Temporal, Regression]: Estimate maize and wheat crop yields at 1 km² resolution, using this open dataset for China as the scoring and time reference.

  • Disaster response floods [Spectral, Change Detection]: Map the extent of a flood, with NRT as the scoring and time reference. [Note: this dataset's license is NC-SA.]

  • Disaster fire severity [Temporal, Change Detection]: Map the severity of a fire, with MTBS as the scoring and time reference.

  • Cloud Gap Imputation [Temporal, Generative]: Fill in cloudy pixels in a time series (3 scenes) of Sentinel-2 imagery. The time difference between the scenes varies from a couple of days to a couple of months. The scoring and time reference will be this HLS multi-temporal cloud gap imputation dataset.
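To make the human-in-the-loop protocol for the aquaculture task concrete, here is a minimal sketch of the three-step loop, assuming cosine similarity over precomputed chip embeddings; the function names, the query-refinement step, and the embedding store are illustrative, not the official harness.

```python
# Illustrative sketch of the simulated human-in-the-loop search (not the official harness).
# Assumes `chip_embeddings` are precomputed foundation-model embeddings of image chips
# and `reference_labels` come from the open aquaculture databases.
import numpy as np

def cosine_similarity(query, embeddings):
    """Cosine similarity between one query vector and a matrix of embeddings."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return e @ q

def human_in_the_loop_search(query_emb, chip_embeddings, reference_labels, k=10):
    # 1) Return the first k similarity-search results.
    first_pass = np.argsort(-cosine_similarity(query_emb, chip_embeddings))[:k]
    # 2) "Label" them against the reference data (the simulated human).
    positives = [i for i in first_pass if reference_labels[i] == 1]
    # 3) Search again, refining the query with the labeled positives;
    #    the second-pass results feed the evaluation metric.
    if positives:
        query_emb = chip_embeddings[positives].mean(axis=0)
    second_pass = np.argsort(-cosine_similarity(query_emb, chip_embeddings))[:k]
    return second_pass
```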

Challenge Tasks

Participants are required to develop a single Apache-licensed Jupyter Notebook in Python that performs all of the tasks above, subject to the following requirements:

  • Only one foundation model across all tasks. [See Foundation Model Training Requirements below]

  • For inference: only fully open data (including for commercial use) that is also openly available (publicly accessible on AWS, the Microsoft Planetary Computer, or Google; e.g., Sentinel, Landsat).

  • You can use fine-tuning, LoRA, embeddings, or any other technique you see fit to augment the foundation model’s performance.

  • Code should scale to produce results over any given region smaller than 2,000 km² (see the tiling sketch after this list).

  • Your whole notebook must run in less than 2h on a `g5.xlarge` AWS EC2 instance.
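One common way to meet the scaling and runtime constraints above is to tile the AoI into fixed-size chips and batch them through the model. The sketch below shows such tiling with geopandas; the chip size (`chip_m`) and the UTM reprojection are illustrative assumptions, not challenge requirements.

```python
# Minimal tiling sketch (assumptions: ~2.5 km square chips, UTM projection for metric units).
import numpy as np
import geopandas as gpd
from shapely.geometry import box

def tile_aoi(aoi: gpd.GeoDataFrame, chip_m: float = 2560.0) -> gpd.GeoDataFrame:
    """Split the AoI (< 2,000 km²) into square chips `chip_m` metres on a side."""
    aoi_m = aoi.to_crs(aoi.estimate_utm_crs())      # reproject so distances are in metres
    minx, miny, maxx, maxy = aoi_m.total_bounds
    chips = [
        box(x, y, x + chip_m, y + chip_m)
        for x in np.arange(minx, maxx, chip_m)
        for y in np.arange(miny, maxy, chip_m)
    ]
    grid = gpd.GeoDataFrame(geometry=chips, crs=aoi_m.crs)
    return grid[grid.intersects(aoi_m.unary_union)]  # keep only chips touching the AoI
```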

Foundation Model Training Requirements

When submitting an entry, participants can choose from the following options:

  • Prize eligible: The model must be fully open source and trained only on fully open data (including for commercial use). You must release the code under the Apache license or an equivalent, and you must specify your data sources. You may use the referenced scoring datasets where their licenses allow. Entries in this category are eligible for the award.

  • Ranking eligible: If your model uses any commercial or proprietary datasets, we will still rank your score, but you won’t be eligible for the compute prize. You don’t need to specify sources, but we encourage it, especially if you use Clay as a base model.

Evaluation and Scoring

The notebook should take as inputs a GeoJSON (the region of interest) and a year.
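For illustration only, a notebook entry point compatible with this contract could look like the sketch below; the variable names (`AOI_GEOJSON`, `YEAR`) and the use of geopandas are assumptions, not a required interface.

```python
# The two inputs the organizers will swap at evaluation time (names are illustrative).
import geopandas as gpd

AOI_GEOJSON = "aoi.geojson"   # region of interest, a polygon under 2,000 km²
YEAR = 2023                   # reference year for all seven tasks

aoi = gpd.read_file(AOI_GEOJSON)   # GeoDataFrame holding the AoI geometry
# ... fetch open imagery for `YEAR` over `aoi` and run the single foundation model ...
```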

We will only adjust the region and year. If the code fails to run or takes more than 10 hours to complete, the team will be given 24 hours to submit a corrected version of the code.

The notebook will be scored on a location and year disclosed 2 days after the submission deadline.

The evaluation metric will be the F1 score for classification and object-detection tasks, RMSE for regression tasks, and MAE for the generative task. The overall ranking will be determined by the average of each entrant's rank across the tasks.
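As a rough self-evaluation aid, these metrics can be computed with scikit-learn as sketched below; the macro averaging for F1 and the helper names are illustrative assumptions, and the official scoring harness may differ.

```python
# Illustrative metric helpers (the official scoring harness may differ).
import numpy as np
from sklearn.metrics import f1_score, mean_squared_error, mean_absolute_error

def score_classification(y_true, y_pred):
    # F1 for the classification and object-detection tasks (macro average assumed).
    return f1_score(y_true, y_pred, average="macro")

def score_regression(y_true, y_pred):
    # RMSE for the regression tasks.
    return float(np.sqrt(mean_squared_error(y_true, y_pred)))

def score_generative(y_true, y_pred):
    # MAE for the cloud gap imputation task.
    return float(mean_absolute_error(y_true, y_pred))

def overall_rank(per_task_ranks):
    # Overall ranking: the average of an entrant's rank on each task (lower is better).
    return float(np.mean(per_task_ranks))
```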

On the test set: we do not hold out hidden test sets for global historical datasets. We consider “overfitting” to the whole Earth across all tasks with a single model a valid strategy.

Note: We are considering expanding the criteria to a “few shot challenge” where we allow the model to iterate with (automated) feedback on the results, to mimic how a model would be used operationally.

Submission

Participants must submit their notebooks as a pull request to the designated open repository. The submitted code will be run using a continuous integration (CI) system that anyone can access at any time.

Award

We will announce the winning models at NYC Climate Week 2024. If those models use only open-source code and open data, they will receive up to $10,000 in compute resources on AWS, to be used before the end of the year to scale up and improve their solution.

Timeline

  • Challenge announcement and registration opens: May 17, 2024

  • Submission deadline: Sep 1, 2024, 23:59 UTC

  • Evaluation period: Sep 2-15, 2024

  • Winner announcement and showcase during NYC Climate Week

Participation

  • Participants can submit multiple entries individually or collaborate in teams or as a company.

  • Both open-source and commercial models can participate in the challenge. However, only models that exclusively use open data and open code for training will be eligible for the prize.

  • We encourage participants to showcase the value of their models, regardless of their licensing.

  • If financial resources for model training pose a barrier for you to join the challenge, please reach out to us for support in accessing available resources.

Join the challenge

Let’s push the boundaries of open-source AI for Earth observation. Together, we can unlock new insights and create powerful tools for climate, nature, and people.

Thank you!