Private Docs Introduction

API Endpoint

https://api.scale.com/v1/

Scale's Private Documentation provides early access to Scale features that are not yet ready to be publicly announced.

Changelog

Date Changes
2021-06-19 - Sensor Fusion is now documented publicly
2021-04-06 - Nucleus Product Docs have moved in-app
2021-03-10 - Added lidar polygons

Sensor Fusion / LIDAR Annotation

This is now publicly documented.

Sensor Fusion Debug Tool

This is now publicly documented.

Sensor Fusion Segmentation

This is now publicly documented.

2D/3D Linking

This is now publicly documented.

Dependent Tasks

Example Python Code

import requests

attachments = []
for frame in range(100):
  attachment = 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/kitti-road-2011_10_03_drive_0047/frames/frame%d.json' % frame
  attachments.append(attachment)

payload = {
  'callback_url': 'http://www.example.com/callback',
  'instruction': 'Please label all cars, pedestrians, and cyclists in each frame.',
  'attachment_type': 'json',
  'attachments': attachments,
  'labels': ['car', 'pedestrian', 'cyclist'],
  'max_distance_meters': 30,
  'dependents': {
    'defs': [
      {
        'type': 'lidarlinking',
        'annotation_type': 'annotation',
        'instruction': 'Adjust the annotations around the cars.',
        'callback_url': 'http://www.example.com/callback2',
      },
    ],
    'require_audit': False,
  }
}

headers = {"Content-Type": "application/json"}

task_request = requests.post("https://api.scale.com/v1/task/lidarannotation",
  json=payload,
  headers=headers,
  auth=('YOUR_SCALEAPI_KEY', ''))

print(task_request.json())

Some of our endpoints (e.g. lidarsegmentation, lidarannotation) can create dependent tasks once the root task is completed.
You can now declare dependent tasks when creating the root task, rather than waiting for the root task to complete before manually creating the dependent tasks.
To enable dependent tasks, you must include a dependents object when creating your task. The dependents object contains the sub-field defs, an array containing dependent task objects.

Sensor Fusion Tasks

Currently, we support creating lidarlinking and lidarsegmentation tasks based off a lidarannotation task.

Definition: dependents

Example dependents object

{
  "defs": [
    {
      "type": "lidarlinking",
      "annotation_type": "annotation",
      "instruction": "Adjust the annotations around the cars.",
      "callback_url": "http://www.example.com/callback2"
    }
  ],
  "require_audit": true
}

The dependents object contains the sub-fields defs and require_audit.
defs is an array of objects that contain dependent task params.

Parameter Optional? Type Description
defs no Array<DependentDef> Definitions of the tasks that will be created once this task is complete.
require_audit yes boolean Whether or not to wait for a customer audit to fix/approve a task, before creating the dependent tasks.

Definition: DependentDef

Example DependentDef object

{
  "type": "lidarlinking",
  "annotation_type": "annotation",
  "instruction": "Adjust the annotations around the cars.",
  "callback_url": "http://www.example.com/callback2",
}

DependentDef objects describe the dependent task to be created. Their properties are the same as the properties of a regular request to the corresponding endpoint, minus the params that refer to the base task (e.g. lidar_task for lidarlinking tasks); those params will be added to the dependent task automatically when it is created. In addition, a type parameter must be specified to identify the type of task to create. For example, if one were to create a dependent lidar linking task based off a lidar task, a possible DependentDef is provided to the right.

To learn more about lidarlinking params and tasks, see 2D/3D Linking.
To learn more about lidarsegmentation params and tasks, see Sensor Fusion Segmentation.

Parameter Optional? Type Description
type no string Type of task that will be created as the dependent task (lidarlinking or lidarsegmentation).
instruction no string A markdown-enabled string explaining how to draw the annotations. You can use markdown to show example images, give structure to your instructions, and more.
callback_url no string The full url (including the scheme http:// or https://) of the callback when the task is completed. See the Callback section for more details about callbacks.
annotation_type lidarlinking only string The 2D annotation type to return: one of imageannotation, annotation, cuboidannotation, or polygonannotation.
labels lidarsegmentation only array An array of strings describing the different types of objects you’d like to be used to segment the image. You may include at most 50 objects.

Dependent Tasks API Endpoints

Here are some additional API endpoints to help manage and edit dependent tasks.
Each of these endpoints will return the root task's object after the modifications have been applied.
These actions can only be performed before the dependent tasks have been created; they will fail once the dependent tasks exist.

Change Options

/v1/task/:taskId/dependents/options
Use this endpoint to change the options associated with dependent tasks. This can only be done while the root task is not yet complete (a stricter condition than merely requiring that dependent tasks have not yet been created).
POST to this endpoint with an object of options (e.g. {'require_audit': false} to allow dependent tasks to be created immediately upon completion).
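
A minimal sketch of such a request using the requests library, in the same style as the earlier Python examples (the task ID below is a placeholder):

import requests

task_id = 'YOUR_TASK_ID'

# Allow dependent tasks to be created immediately when the root task completes.
options = {'require_audit': False}

response = requests.post(
  'https://api.scale.com/v1/task/%s/dependents/options' % task_id,
  json=options,
  headers={'Content-Type': 'application/json'},
  auth=('YOUR_SCALEAPI_KEY', ''))

print(response.json())  # root task object with the updated options applied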

Force Dependent Tasks Creation

/v1/task/:taskId/dependents/force_creation
Use this endpoint if you would like to create dependent tasks and skip the audit (assuming require_audit is true on a particular task). This will fail if the task is not completed, or dependent tasks have already been created.
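
A minimal sketch of calling this endpoint with the requests library (the task ID is again a placeholder):

import requests

task_id = 'YOUR_TASK_ID'

# Skip the customer audit and create the dependent tasks right away.
response = requests.post(
  'https://api.scale.com/v1/task/%s/dependents/force_creation' % task_id,
  auth=('YOUR_SCALEAPI_KEY', ''))

print(response.json())  # root task object, with dependent task creation now triggered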

Sensor Fusion Labeling Menu

Scale supports a standard group of labels for sensor fusion annotation, but we are open to discussing further customization for your particular use case. If you have additional labels you'd like to discuss, please email us.

Note that we currently label tracks that are outdoors-only, i.e. outside of cars, trucks, buildings, etc. The labels we currently support are the following:

Stationary or Moving Tracks

Stationary-only Tracks

Additional Notes on General Labeling Guidelines:

AV Image Labeling Menu

Scale supports a standard group of labels for image annotation for autonomous driving use cases, but we are open to discussing further customization for your particular use case. If you have additional labels you'd like to discuss, please email us.

The labels we currently support are the following:

Vehicle Labels

Attributes Menu:

Pedestrian & Animal Labels

Attributes Menu:

Stationary Object Labels

Attributes Menu:

Lane Line Labels

Attributes Menu:

Additional Notes on General Labeling Guidelines:

Ouster Integrations

We have partnered with Ouster, Inc. to make it easy to label data provided by Ouster sensors. We currently have integrations with our Sensor Fusion / LIDAR Annotation and Semantic Segmentation endpoints.

Sensor Fusion

Example Python Code

import requests

compound_attachments = [{
    'png_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_%06d.png' % i,
    'pose_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_%06d.json' % i,
    'meta_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json',
} for i in range(100, 200)]

payload = {
  'callback_url': 'http://www.example.com/callback',
  'instruction': 'Please label all cars, pedestrians, and cyclists in each frame.',
  'attachment_type': 'json',
  'lidar_format': 'ouster',
  'compound_attachments': compound_attachments,
  'labels': ['car', 'pedestrian', 'cyclist'],
}

headers = {"Content-Type": "application/json"}

task_request = requests.post("https://api.scale.com/v1/task/lidarannotation",
  json=payload,
  headers=headers,
  auth=('YOUR_SCALEAPI_KEY', ''))

print(task_request.json())

Rather than generating point cloud JSON files manually, you can simply submit the PNG images, poses, and associated metadata from the Ouster sensor to create a fully-featured LIDAR annotation task.

HTTP Request

POST https://api.scale.com/v1/task/lidarannotation

Parameters

Parameters are the same as in our Sensor Fusion / LIDAR Annotation endpoint, except that instead of an attachments param, you send a compound_attachments param: a list of dictionaries, one per frame, each with three keys. png_url points to the PNG representing the output of the Ouster sensor for that frame, pose_url points to the JSON representing the pose of the vehicle during that frame, and meta_url points to the metadata JSON for your particular Ouster sensor. You must also send lidar_format="ouster" to indicate usage of the Ouster integration.

Semantic Segmentation

curl "https://api.scale.com/v1/task/segmentannotation" \
  -H "Content-Type: application/json" \
  -u "{{ApiKey}}:" \
  -d '
{
    "callback_url": "http://www.example.com/callback",
    "instruction": "Please segment the image using the given labels.",
    "attachment_type": "image",
    "compound_attachment": {
        "png_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png",
        "meta_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json"
    },
    "format": "ouster",
    "labels": ["background", "road", "vegetation", "lane marking"],
    "instance_labels": ["vehicle", "pedestrian"],
}'
import scaleapi

client = scaleapi.ScaleClient('{{ApiKey}}')

client.create_segmentannotation_task(
    callback_url='http://www.example.com/callback',
    instruction='Please segment the image using the given labels.',
    attachment_type='image',
    compound_attachment={
        "png_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png",
        "meta_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json"
    },
    format="ouster",
    labels=['background', 'road', 'vegetation', 'lane marking'],
    instance_labels=['vehicle', 'pedestrian'],
    allow_unlabeled=False
)
var scaleapi = require('scaleapi');

var client = scaleapi.ScaleClient('{{ApiKey}}');

client.createSegmentannotationTask({
  callback_url: 'http://www.example.com/callback',
  instruction: 'Please segment the image using the given labels.',
  attachment_type: 'image',
  compound_attachment: {
    png_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png',
    meta_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json'
  },
  format: 'ouster',
  labels: ['background', 'road', 'vegetation', 'lane marking'],
  instance_labels: ['vehicle', 'pedestrian'],
  allow_unlabeled: false
}, (err, task) => {
    // do something with task
});
require 'scale'
scale = Scale.new(api_key: '{{ApiKey}}')

scale.create_segmentannotation_task({
  callback_url: 'http://www.example.com/callback',
  instruction: 'Please segment the image using the given labels.',
  attachment_type: 'image',
  compound_attachment: {
    png_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png',
    meta_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json'
  },
  format: 'ouster',
  labels: ['background', 'road', 'vegetation', 'lane marking'],
  instance_labels: ['vehicle', 'pedestrian'],
  allow_unlabeled: false
})
=> #<Scale::Api::Tasks::Segmentannotation:0x007fcc11092f10 @task_id="58a6363baa9d139b20a4252f", @type="segmentannotation", @instruction="Please segment the image using the given labels.", @params={"allow_unlabeled"=>false, "labels"=>['background', 'road', 'vegetation', 'lane marking'], "instance_labels"=>['vehicle', 'pedestrian'], "attachment_type"=>"image", "attachment"=>"http://i.imgur.com/XOJbalC.jpg"}, @urgency="standard", @response=nil, @callback_url="http://www.example.com/callback", @created_at=2017-02-16 23:31:07 UTC, @status="pending", @completed_at=nil, @callback_succeeded_at=nil, @metadata={}>

The above command returns an object structured like this:

{
  "task_id": "5774cc78b01249ab09f089dd",
  "created_at": "2016-9-03T07:38:32.368Z",
  "callback_url": "http://www.example.com/callback",
  "type": "segmentannotation",
  "status": "pending",
  "instruction": "Please segment the image using the given labels.",
  "urgency": "standard",
  "params": {
    "allow_unlabeled": false,
    "labels": [
      "background",
      "road",
      "vegetation",
      "lane marking"
    ],
    "instance_labels": [
      "vehicle",
      "pedestrian"
    ],
    "attachment_type": "image",
    "attachment": "https://scaleapi-cust-lidar.s3.amazonaws.com/ouster-cust/segment/e163a3f9-34ea-42c0-badc-4da7411b8d6e" // automatically converted image
  },
  "metadata": {}
}

Our semantic segmentation integration extracts the intensity channel from your Ouster sensor data, rectifies it, and segments the resulting grayscale image.

HTTP Request

POST https://api.scale.com/v1/task/segmentannotation

Parameters

Parameters are the same as in our Semantic Segmentation endpoint, except that instead of an attachment param, you send a compound_attachment param: a dictionary with two keys. png_url points to the PNG representing the output of the Ouster sensor for a particular frame, and meta_url points to the metadata JSON for your particular Ouster sensor. You must also send format="ouster" to indicate usage of the Ouster integration.

Real-Time Validation

In order to ensure high quality and provide faster feedback to our labelers, we offer you the option to run your own custom validations on task responses before Scale completes tasks and sends the completion callbacks. Currently, we only support real-time validation for imageannotation and videoplaybackannotation tasks.

Integration Steps

First, you must create a self-hosted validation endpoint that Scale will call to validate responses. Then, please contact Scale to enable real-time validation.

Once real-time validation is enabled, Scale will send requests to your validation endpoint. We will set the scale-callback-auth HTTP header on each request so you can authenticate these requests, similar to the authentication scheme described at https://docs.scale.com/reference#authentication. These validation requests will have a JSON body with the following fields:

Validation Request Fields

Parameter Type Description
task Object Task object.
response Object Response object in task type-specific format, see docs for details about the format for each task type.

Note: If validation takes a long time to run, please let us know.

Your validation endpoint should return an object structured like this:

{
  "pass": false,
  "issues": {
    "uuid_1": [
      {
        "errorType": "extraneous_annotation",
        "locations": [{ "x": 100, "y": 200 }],
        "extra": "don't need this annotation"
      }
    ],
    "uuid_2": [
      { "errorType": "bad_point" },
      { "errorType": "extra_point" }
    ]
  },
  "globalIssues": [{ "errorType": "this error is global" }]
}

After you've run your custom validation logic on the task response, please return a response with the following fields in the response body:

Validation Response Format

Parameter Type Description
pass boolean Whether validation on the response succeeded. Required.
issues object If validation failed, a mapping from annotation ID to a list of Issue objects. Optional.
globalIssues Issue array Issue objects that aren't associated with any particular annotation. Optional.

Definition: Issue

Key Type Description
errorType string Type of error. We recommend that this is an enum. Required.
locations Point array Pixel location(s) in the image of the error. Point is in the format {x: number, y: number}. Optional.
frameNum int The frame number related to the issue. For video annotation tasks. Optional.
extra object Feedback about the error that will be displayed to labelers, which should be human-readable. Optional.
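
As an illustration only, here is a minimal sketch of a validation endpoint written with Flask. The framework choice, the shared-secret value, and the trivial "no annotations" check are assumptions for the example, not part of the Scale API; a real endpoint would run whatever checks make sense for your task responses.

import flask

app = flask.Flask(__name__)

# Hypothetical shared secret used to check the scale-callback-auth header Scale sets.
EXPECTED_AUTH = 'YOUR_SHARED_SECRET'

@app.route('/validate', methods=['POST'])
def validate():
    # Authenticate the incoming request using the scale-callback-auth header.
    if flask.request.headers.get('scale-callback-auth') != EXPECTED_AUTH:
        return flask.jsonify({'error': 'unauthorized'}), 401

    body = flask.request.get_json()
    task = body['task']          # the full Task object
    response = body['response']  # task type-specific response format (see the task type docs)

    # Placeholder check: flag responses with no annotations at all. The 'annotations'
    # key is an assumption about the response format, used here for illustration only.
    if not response.get('annotations'):
        return flask.jsonify({
            'pass': False,
            'globalIssues': [{'errorType': 'no_annotations'}]
        })

    return flask.jsonify({'pass': True})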

Testing

In order to facilitate testing of your validation endpoint, we provide a testing endpoint that you can call to manually trigger real-time validation. When you call the testing endpoint, Scale will send a validation request to the specified callbackURL. Once we receive a validation response, we will validate the response format and return a response to the original testing request.

HTTP Request

POST https://api.scale.com/v1/linting/task/<TASKID>/send-lint-callback

Parameters

Parameter Type Description
callbackURL String URL of your validation endpoint.

Response

Same as the validation response format. If there is a validation error, a response with status code 400 will be returned.
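
A minimal sketch of triggering a test validation run with the requests library (the task ID and validation endpoint URL below are placeholders):

import requests

task_id = 'YOUR_TASK_ID'

response = requests.post(
  'https://api.scale.com/v1/linting/task/%s/send-lint-callback' % task_id,
  json={'callbackURL': 'https://www.example.com/validate'},
  headers={'Content-Type': 'application/json'},
  auth=('YOUR_SCALEAPI_KEY', ''))

print(response.status_code, response.json())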

Mapping

Groups (beta)

In addition to the normal LabelDescription nesting that Scale has, you can now attach additional grouping info to each LabelDescription. Groups can be used with Rules.

New Parameter Type Description
groups Array<string> A list of groups that this label belongs to. If this choice has subchoices, those subchoices will also belong to these groups.

In the JSON example to the right, both "Curb" and "Lane Line" belong to the "Roundabout Edge" group. Because subchoices inherit their parent's groups, "Single Solid" and "Double Solid" also belong to "Roundabout Edge", and "Single Solid" additionally belongs to "Colored Line".

Example LabelDescription with groups

// lines:
[
  {
    "choice": "Curb",
    "groups": ["Roundabout Edge"]
  },
  {
    "choice": "Lane Line",
    "groups": ["Roundabout Edge"],
    "subchoices": [
      { "choice": "Single Solid", "groups": ["Colored Line"] },
      "Double Solid"
    ]
  }
]

Rules (beta)

Rules can be defined under Task API params to enforce certain annotation relationships.

must_derive_from

This rule enforces that if line annotations are used to form, or in other words "derive", a polygon annotation, then the labels of the involved annotations must come from a certain set.

Parameter Type Description
from Array<string> A list of line labels or group names
to Array<string> A list of polygon labels or group names whose edges must be from lines

In the JSON example to the right, the rules can be read as: polygons labeled "Roundabout Center" may only be derived from lines in the "Roundabout Edge" group, and polygons labeled "Shoulder Zone" may only be derived from lines labeled "Single Solid" or "Double Solid".

{
  "geometries": ...
  "base_annotations": ...
  "rules": {
    "must_derive_from": [
      { "from": ["Roundabout Edge"], "to": ["Roundabout Center"] },
      { "from": ["Single Solid", "Double Solid"], "to": ["Shoulder Zone"] }
    ]
  }
}

Lidar Preprocessing Additional Params

This section documents additional options that can be passed to the Create LidarTopdown Task API (similar to the ImageAnnotation API).

key type default description
shouldClipIntensity boolean true If true, uses colorIntensityMultiplier to tweak the ortho image contrast amount
colorIntensityMultiplier number 1 If > 1, further increases the ortho image contrast, but dim features may get dimmer.
deviceHeight number 1.2 The height of the lidar device relative to the ground in meters. If a point on the ground has height z in the device coordinate frame, then z + deviceHeight should be about 0. Used to filter out points that are too high/low more accurately.

Example process_attachments_options section. Other top-level keys will be the same, e.g. "attributes", "geometries":

{
  "process_attachments_options": {
    "shouldClipIntensity": true,
    "colorIntensityMultiplier": 1.5,
    "deviceHeight": 1.2
  }
}

Base Annotations Additional Params (Experimental)

This section documents additional options that can be passed to the Create LidarTopdown Task API (similar to the ImageAnnotation API).

For most cases, you can leave out the options section entirely.

key type default description
unlock_all boolean false If true, all base annotations will be unlocked for the labeler. If false, only base annotations inside the Annotatable Region will be unlocked.
remove_bordering_annotations boolean false If true, hide all base annotations that have any vertex outside the Annotatable Region. If false, all base annotations that touch the Annotatable Region will be visible to the labeler.
ignore_input_annotatable_regions boolean false If true, ignore any Annotatable Regions and No Data Zones in base_annotations.world, relabel them as "Previous", and use a new Annotatable Region. If false, the first input Annotatable Region will be chosen as the true AR.

Example base_annotations sections. Other top-level keys will be the same, e.g. "attributes", "geometries":

{
  "base_annotations": {
    "world": "https://<url_to_annotations>"
  }
}
{
  "base_annotations": {
    "world": "https://<url_to_annotations>",
    "options": {
      "unlock_all": true,
      "remove_bordering_annotations": true,
      "ignore_input_annotatable_regions": true
    }
  }
}