Private Docs Introduction
API Endpoint
https://api.scale.com/v1/
Scale's Private Documentation provides early access to Scale features that are not yet ready to be publicly announced.
Changelog
Date | Changes |
---|---|
2021-06-19 | Sensor Fusion is now documented publicly |
2021-04-06 | Nucleus Product Docs have moved in-app |
2021-03-10 | Added lidar polygons |
Sensor Fusion / LIDAR Annotation
This is now publicly documented.
Sensor Fusion Debug Tool
This is now publicly documented.
Sensor Fusion Segmentation
This is now publicly documented.
2D/3D Linking
This is now publicly documented.
Dependent Tasks
Example Python Code (click on the Python tab to see example code)
import requests

attachments = []
for frame in range(100):
    attachment = 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/kitti-road-2011_10_03_drive_0047/frames/frame%d.json' % frame
    attachments.append(attachment)

payload = {
    'callback_url': 'http://www.example.com/callback',
    'instruction': 'Please label all cars, pedestrians, and cyclists in each frame.',
    'attachment_type': 'json',
    'attachments': attachments,
    'labels': ['car', 'pedestrian', 'cyclist'],
    'max_distance_meters': 30,
    'dependents': {
        'defs': [
            {
                'type': 'lidarlinking',
                'annotation_type': 'annotation',
                'instruction': 'Adjust the annotations around the cars.',
                'callback_url': 'http://www.example.com/callback2',
            },
        ],
        'require_audit': False,
    },
}

headers = {"Content-Type": "application/json"}

task_request = requests.post("https://api.scale.com/v1/task/lidarannotation",
                             json=payload,
                             headers=headers,
                             auth=('YOUR_SCALEAPI_KEY', ''))

print(task_request.json())
Some of our endpoints (e.g. `lidarsegmentation`, `lidarannotation`) can create dependent tasks once the root task is completed. You can now declare dependent tasks when creating the root task, rather than waiting for the root task to complete before manually creating the dependent tasks.

To enable dependent tasks, you must include a `dependents` object when creating your task. The `dependents` object contains the sub-field `defs`, an array containing dependent task objects.
Sensor Fusion Tasks
Currently, we support creating `lidarlinking` and `lidarsegmentation` tasks based off a `lidarannotation` task.
Definition: dependents
Example `dependents` object
{
  "defs": [
    {
      "type": "lidarlinking",
      "annotation_type": "annotation",
      "instruction": "Adjust the annotations around the cars.",
      "callback_url": "http://www.example.com/callback2"
    }
  ],
  "require_audit": true
}
The `dependents` object contains the sub-fields `defs` and `require_audit`. `defs` is an array of objects that contain the dependent task params.
Parameter | Optional? | Type | Description |
---|---|---|---|
defs | no | Array<DependentDef> | Definitions of the tasks that will be created once this task is complete. |
require_audit | yes | boolean | Whether or not to wait for a customer audit to fix/approve the task before creating the dependent tasks. |
Definition: DependentDef
Example `DependentDef` object
{
  "type": "lidarlinking",
  "annotation_type": "annotation",
  "instruction": "Adjust the annotations around the cars.",
  "callback_url": "http://www.example.com/callback2"
}
`DependentDef` objects describe the dependent task to be created. Their properties are the same as the properties of a regular request to the corresponding endpoint, without the params that refer to the base task (e.g. `lidar_task` for `lidarlinking` tasks); those params will be added to the dependent task when it is created. In addition, a `type` parameter must be specified to identify the type of task to create. For example, if one were to create a dependent lidar linking task based off a lidar task, a possible `DependentDef` is provided to the right.

To learn more about `lidarlinking` params and tasks, see 2D/3D Linking.
To learn more about `lidarsegmentation` params and tasks, see Sensor Fusion Segmentation.
Parameter | Optional? | Type | Description |
---|---|---|---|
type | no | string | Type of task that will be created as the dependent task (lidarlinking or lidarsegmentation). |
instruction | no | string | A markdown-enabled string explaining how to draw the annotations. You can use markdown to show example images, give structure to your instructions, and more. |
callback_url | no | string | The full url (including the scheme http:// or https://) of the callback when the task is completed. See the Callback section for more details about callbacks. |
annotation_type | lidarlinking only | string | The 2D annotation type to return, either imageannotation, annotation, cuboidannotation, or polygonannotation. |
labels | lidarsegmentation only | array | An array of strings describing the different types of objects you'd like to be used to segment the image. You may include at most 50 objects. |
Dependent Tasks API Endpoints
Here are some additional API endpoints to help manage and edit dependent tasks.
Each of these endpoints returns the root task's object after the modifications have been applied.
These actions can only be performed before the dependent tasks are created; they will not work once a dependent task has already been created.
Change Options
/v1/task/:taskId/dependents/options
To change the options associated with dependent tasks (this can only be done while the task is not yet complete, not merely before the dependent tasks have been created), POST to this endpoint with an object of options (e.g. `{'require_audit': false}` to allow dependent tasks to be created immediately upon completion).
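As an illustration, here is a minimal sketch of calling this endpoint with Python's requests library; the task ID, API key, and option value are placeholders, not values from this document:

import requests

task_id = 'YOUR_TASK_ID'  # placeholder: the ID of the root task

# Turn off require_audit so dependent tasks are created as soon as the root task completes.
resp = requests.post(
    'https://api.scale.com/v1/task/%s/dependents/options' % task_id,
    json={'require_audit': False},
    headers={'Content-Type': 'application/json'},
    auth=('YOUR_SCALEAPI_KEY', ''))
print(resp.json())  # the root task object with the updated options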
Force Dependent Tasks Creation
/v1/task/:taskId/dependents/force_creation
Use this endpoint if you would like to create dependent tasks and skip the audit (assuming require_audit is true on a particular task). This will fail if the task is not completed, or dependent tasks have already been created.
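A minimal sketch of forcing dependent task creation, again using the requests library with a placeholder task ID and API key:

import requests

task_id = 'YOUR_TASK_ID'  # placeholder: the ID of the completed root task

# Skip the customer audit and create the dependent tasks immediately.
resp = requests.post(
    'https://api.scale.com/v1/task/%s/dependents/force_creation' % task_id,
    headers={'Content-Type': 'application/json'},
    auth=('YOUR_SCALEAPI_KEY', ''))
print(resp.json())  # the root task object after the dependent tasks have been created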
Sensor Fusion Labeling Menu
Scale supports a standard group of labels for sensor fusion annotation, but we are open to discussing further customization for your particular use case. If you have additional labels you'd like to discuss, please email us.
Note that we currently label tracks that are outdoors-only, i.e. outside of cars, trucks, buildings, etc. The labels we currently support are the following:
Stationary or Moving Tracks
- **Pedestrian**: The standard pedestrian boxing is spacious enough to contain all of the pedestrian's limbs and slight posture movements. This includes sitting and standing individual adults, children, and babies.
  - The pedestrian label can be broken down into **Adult Pedestrian** and **Child Pedestrian**.
- **Torso-Width Pedestrian**: The alternative to the standard pedestrian labeling. Constrains the width of the cuboid to the pedestrian's shoulder width. The cuboid should be extended to the top of the pedestrian's head and the bottom of their feet.
- **Pedestrian with Object**: Pedestrian holding a large object, like a baby or a sign, or a pedestrian pushing a bicycle, motorcycle, cart, stroller, etc. This category also includes pedestrians on scooters or skateboards. This cuboid includes both the pedestrian and the object. This does not include pedestrians carrying normal day-to-day items (e.g. backpack, tote bag, purse). This includes pedestrians in wheelchairs.
  - Pedestrians holding other adult pedestrians (or their hands) are labelled separately as two pedestrians, not as one "pedestrian with object".
- **Wheeled Pedestrian**: Pedestrian(s) on a non-motorized scooter, skateboard, roller skates, or segway. Bicycles, motor scooters, and motorcycles do not count. These should be labeled differently than pedestrian with object.
- **Cyclist**: Pedestrian(s) on a bicycle. This cuboid includes both the pedestrian(s) and the bike.
- **Bicycle**: Bike without a pedestrian.
- **Motorcyclist**: Pedestrian(s) on a motorcycle or ATV. This cuboid includes both the pedestrian(s) and the motorcycle/ATV.
- **Motorcycle**: Parked motorbike(s) or ATV(s) without a rider.
- **Car**: Standard car. Includes vans & SUVs.
  - Includes the car door if open. Does not include car exhaust or other nearby irrelevant LiDAR points.
  - Can include the parked car attribute: vehicles now have a parked car attribute. Mark as "yes" if parked.
- **Truck**: Pickup truck or freight truck.
- **Bus**: Any type of bus.
- **Other Vehicle**: Standalone motorcycles and bikes (without pedestrians on them). Also includes unusual vehicles such as trucks with cars on top, cars with trailers attached, standalone trailers, trams, boats on land, golf carts, tandem bicycles, tuk-tuks, and news vans.
- **Animals**: Dogs, cats, and other similarly sized animals. Does not include flying animals.
- **Vehicle towing**: A vehicle (such as a car, mini-van, pick-up truck, SUV, van) that is towing something. This is a vehicle that would normally be in the "Car" category, except that it is towing something.
- **Towed object**: The thing that is being towed (boat, car, flatbed trailer, etc.).
- **Trailer**: Any vehicle trailer, both for vehicles or large vehicles (regardless of whether it is currently being towed), as well as containers that are not being towed. Note that there is overlap with **Towed object**.
- **Train**: Any vehicle that travels on rails, e.g. light rail / tram / train. For trains that consist of several linked units, annotate each segment with a bounding box.
- **Construction Vehicle**: Vehicles primarily designed for construction. Typically very slow moving or stationary.
- **Stroller**: Any stroller. If a person is in the stroller, include them in the annotation. If a pedestrian is pushing the stroller, then they should be labeled separately.
- **Wheelchair**: Any type of wheelchair. If a pedestrian is pushing the wheelchair, then they should be labeled separately. This includes motorized/electric and non-motorized wheelchairs.
Stationary-only Tracks
- **Construction Cones and Poles**: Any temporary cone or short temporary pole (usually orange or striped and used in construction), placed to redirect traffic. Does not include permanent structures.
- **Construction Zone Signs and Construction Sign Boards**: Any construction-related signs or boards (including electronic signs) that are meant to direct traffic.
- **Garbage Bins and Dumpsters**: Any garbage bins and dumpsters.
- **Temporary Construction Barrier**: Temporary small walls used to block off construction zones. Typically plastic and brightly colored, about knee or waist height.
- **No Label Zone**: Apply this label to parking lots, highway lanes moving in the opposite direction from the robot vehicle, or other drivable space separated from the robot vehicle by a solid, continuous barrier. Create a single cuboid covering the entire area.
Additional Notes on General Labeling Guidelines:
- Stationary objects are defined as objects that do not appear to move substantially for the duration of the scene that they are visible in.
- Slight pedestrian posture movements are ignored.
- If an object's heading cannot be determined absolutely or guessed reasonably, we do not annotate the object.
- If an object has no inherent heading (e.g. a construction cone), we assign a heading in an arbitrary direction.
- Reflections or pictures of target objects are not annotated.
- We annotate target objects in any outdoors environment, including highways, driveways, parking lots, and fields with adequate LiDAR points.
- If an object labeled x becomes object(s) that should be labeled differently, we abandon the original cuboid trail and start new, differently labeled cuboid trail(s).
- For example, a cyclist who dismounts their bike to become a person walking their bike (“pedestrian with object”) is two differently labeled cuboid trails, one picking up in the frame where the other left off.
- If an object is occluded for part of the cuboid trail, we delete the cuboids in the frames in which the object is occluded. The frames of the object before and after occlusion are the same cuboid trail.
- Cuboid overlap is sometimes permitted, especially with boxing pedestrians in a crowd.
AV Image Labeling Menu
Scale supports a standard group of labels for image annotation for autonomous driving use cases, but we are open to discussing further customization for your particular use case. If you have additional labels you'd like to discuss, please email us.
The labels we currently support are the following:
Vehicle Labels
- **Ego Vehicle**: Vehicle that the camera is located on.
- **Car**: Standard car. Includes pickup trucks and vans. Includes the car door if open. Includes side mirrors.
- **Truck**: Freight trucks, semi-trucks, trucks with cars on top.
- **Motorcycle**: Motorcycles; does not include the pedestrian.
- **Bicycle**: Bicycles; does not include the pedestrian.
- **ATV**: All-terrain vehicle; does not include the pedestrian.
- **Bus**: Buses and shuttles. Can be split into 'Bendy Bus' and 'Rigid Bus'.
- **Trailer**: Any vehicle trailer or shipping container.
- **Other Towed Object**: Object that is towed by a vehicle, such as a boat. Does not include trailers or shipping containers.
- **Train**: Any vehicle that travels on rails, such as light rails, trams, or trains.
- **Construction Vehicle**: Vehicles primarily designed for construction, such as excavators, bulldozers, or trenchers.
- **Other Vehicle**: Unusual vehicles such as cars with trailers attached, standalone trailers, trams, boats on land, golf carts, tandem bicycles, or tuk-tuks.
- **Toll Booth**: A booth where drivers pay a toll. Each booth will be annotated individually.
Attributes Menu:
- Moving/Stationary
- Emergency Vehicle (includes ambulance, firetruck, police car)
- Emergency Lights (Flashing/Not Flashing)
- Police Vehicle
- Pivot (can the vehicle bend on a pivot)
- Parked/Not Parked
- Bendy/Rigid for Bus
- Visibility (0/25%/50%/75%) - Recommended
- Occlusion (0/25%/50%/75%)
- Truncation (0/25%/50%/75%)
Pedestrian & Animal Labels
- **Pedestrian**: The standard pedestrian boxing is spacious enough to contain all of the pedestrian's limbs and slight posture movements. This includes sitting and standing individual adults, children, and babies. If the pedestrian is occluded, the occluded portion will be estimated.
- **Pedestrian with Object**: Pedestrian holding a large object, like a baby or a sign, or a pedestrian pushing a bicycle, motorcycle, cart, stroller, etc. This category also includes pedestrians on scooters or skateboards and pedestrians in personal mobility vehicles. The box includes both the pedestrian and the object.
- **Construction Worker**: A person whose main purpose is construction work.
- **Police Officer**: A policeman or policewoman. Includes traffic controllers, traffic guards, and traffic flaggers.
- **Cyclist**: Pedestrian(s) on a bicycle. This box includes both the pedestrian(s) and the bike.
- **Motorcyclist**: Pedestrian(s) on a motorcycle or ATV. This box includes both the pedestrian(s) and the motorcycle/ATV.
- **Animal**: Dogs, cats, and other similarly sized animals. Does not include flying animals.
Attributes Menu:
- Standing/Moving/Sitting/Lying Down (for pedestrian, pedestrian with object, construction worker, police officer)
- On Ground/Off Ground (for animal)
- Has Rider/No Rider (for cyclist, motorcyclist)
- Adult/Child for Pedestrian
- Small/Large for Animal
- Visibility (0/25%/50%/75%) - Recommended
- Occlusion (0/25%/50%/75%)
- Truncation (0/25%/50%/75%)
Stationary Object Labels
- **Street Signs**: These can be any street signs, road signs, or electronic road signs. This does not include the pole attached to signs.
- **Traffic Lights**: Traffic lights. Colors can be assigned via attributes (see attributes below).
- **Pedestrian Lights**: Pedestrian crosswalk lights. Colors can be assigned via attributes (see attributes below).
- **Poles**: The vertical portion of any standalone poles. These can be poles connected to street lights, traffic lights, telephone wires, or signs. This does not include poles in fences or gates, or white traffic poles in the road.
- **Road Barriers**: Any barriers placed as obstacles on the road in order to direct traffic. They are typically plastic or concrete. They can be permanent or temporary. This includes any barriers used during construction to redirect traffic, but does not include traffic cones.
- **Traffic Cones**: Any traffic cone/pole in the road/scene. Includes flexible posts, caution/warning cones, skinny cones, and white traffic poles.
- **Drivable Space**: Any surface on which a vehicle can drive, with no concern for traffic rules (e.g. roads, parking lots, driveways, road shoulders).
- **Ego Vehicle**: The vehicle taking the photo, which is sometimes visible at the bottom of the image.
- **Parking Space**: Any surface on which a vehicle can legally be parked.
- **Movable Obstacle**: Any object on the drivable surface that is too big to drive over, such as a tree branch or a full trash bag.
Attributes Menu:
- Red/Green/Yellow/Unknown for Traffic Lights
- Temporary/Permanent
- Construction
- Visibility (0/25%/50%/75%) - Recommended
- Occlusion (0/25%/50%/75%)
- Truncation (0/25%/50%/75%)
Lane Line Labels
- **Single Broken**: Regular dashed lines that usually sit between lanes.
- **Single Solid**: Solid lines that typically sit at the extreme ends of the road, where the road meets a boundary.
- **Double**: Double lines, which typically can be yellow (center of a 2-way road) or white (lane barrier).
- **End of Lane**: End of Lane lines exist near freeway exits and are denoted by shorter and thicker lane markings. They are like single broken lines, but the line breaks occur at shorter intervals.
- **Edge of Road**: The end of the paved/drivable part of the road.
- **Botts' Dots**: Sequences of raised dots/reflective markings (usually white or yellow) that form a lane line.
- **Stop Line**: Thick lines that usually run perpendicular to the car, and sit in front of stop signs, pedestrian crossings, railway crossings, and traffic junctions.
Attributes Menu:
- Right of Vehicle/Left of Vehicle
- Visible/Inferred (e.g. fully faded lines can be inferred)
- Diverging (e.g. near off-ramp of freeway)
- Relevant/Irrelevant (in terms of relevance to the car's traveling path)
Additional Notes on General Labeling Guidelines:
- We currently limit to 6 label types in a single task. If you need more than 6 label types for a single image, then consider breaking the task up into two separate tasks.
- Some label types will need to be separated into different tasks because they require different types of annotation. For example, Lane Line Labels and Vehicle Labels, which typically require line annotation and box annotation, respectively.
- In your labeling instructions and in your task JSON, you should always include a minimum width and minimum height (in pixels) for the objects you'd like annotated.
- In your labeling instructions, you should always include a “maximum occlusion threshold” (e.g. do not annotate objects if they are more than X% occluded).
Ouster Integrations
We have partnered with Ouster, Inc. to make it easy to label data provided by Ouster sensors. We currently have integrations with our Sensor Fusion / LIDAR Annotation and Semantic Segmentation endpoints.
Sensor Fusion
Example Python Code (click on the Python tab to see example code)
import requests

compound_attachments = [{
    'png_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_%06d.png' % i,
    'pose_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_%06d.json' % i,
    'meta_url': 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json',
} for i in range(100, 200)]

payload = {
    'callback_url': 'http://www.example.com/callback',
    'instruction': 'Please label all cars, pedestrians, and cyclists in each frame.',
    'attachment_type': 'json',
    'lidar_format': 'ouster',
    'compound_attachments': compound_attachments,
    'labels': ['car', 'pedestrian', 'cyclist'],
}

headers = {"Content-Type": "application/json"}

task_request = requests.post("https://api.scale.com/v1/task/lidarannotation",
                             json=payload,
                             headers=headers,
                             auth=('YOUR_SCALEAPI_KEY', ''))

print(task_request.json())
Rather than generating point cloud JSON files manually, you can simply submit the PNG images, poses, and associated metadata from the Ouster sensor to create a fully-featured LIDAR annotation task.
HTTP Request
POST https://api.scale.com/v1/task/lidarannotation
Parameters
Parameters are the same as in our Sensor Fusion / LIDAR Annotation endpoint, except that instead of sending an `attachments` param, you send a `compound_attachments` param containing a list of dictionaries (one dictionary per frame) with three keys each: `png_url`, `pose_url`, and `meta_url`. These point to, respectively, the PNG representing the output of the Ouster sensor for a particular frame, the JSON representing the pose of the vehicle during that frame, and the metadata JSON for your particular Ouster sensor.

Additionally, you must send a `lidar_format="ouster"` param to indicate usage of the Ouster integration.
Semantic Segmentation
curl "https://api.scale.com/v1/task/segmentannotation" \
-H "Content-Type: application/json" \
-u "{{ApiKey}}:" \
-d '
{
"callback_url": "http://www.example.com/callback",
"instruction": "Please segment the image using the given labels.",
"attachment_type": "image",
"compound_attachment": {
"png_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png",
"meta_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json"
},
"format": "ouster",
"labels": ["background", "road", "vegetation", "lane marking"],
"instance_labels": ["vehicle", "pedestrian"],
}'
import scaleapi

client = scaleapi.ScaleClient('{{ApiKey}}')

client.create_segmentannotation_task(
    callback_url='http://www.example.com/callback',
    instruction='Please segment the image using the given labels.',
    attachment_type='image',
    compound_attachment={
        "png_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png",
        "meta_url": "https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json"
    },
    format="ouster",
    labels=['background', 'road', 'vegetation', 'lane marking'],
    instance_labels=['vehicle', 'pedestrian'],
    allow_unlabeled=False
)
var scaleapi = require('scaleapi');

var client = scaleapi.ScaleClient('{{ApiKey}}');

client.createSegmentannotationTask({
  callback_url: 'http://www.example.com/callback',
  instruction: 'Please segment the image using the given labels.',
  attachment_type: 'image',
  compound_attachment: {
    png_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png',
    meta_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json'
  },
  format: 'ouster',
  labels: ['background', 'road', 'vegetation', 'lane marking'],
  instance_labels: ['vehicle', 'pedestrian'],
  allow_unlabeled: false
}, (err, task) => {
  // do something with task
});
require 'scale'

scale = Scale.new(api_key: '{{ApiKey}}')

scale.create_segmentannotation_task({
  callback_url: 'http://www.example.com/callback',
  instruction: 'Please segment the image using the given labels.',
  attachment_type: 'image',
  compound_attachment: {
    png_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/Scale/sensor1_000100.png',
    meta_url: 'https://s3-us-west-1.amazonaws.com/scaleapi-cust-lidar/Ouster/ouster_format_test/sensor1.json'
  },
  format: 'ouster',
  labels: ['background', 'road', 'vegetation', 'lane marking'],
  instance_labels: ['vehicle', 'pedestrian'],
  allow_unlabeled: false
})
=> #<Scale::Api::Tasks::Segmentannotation:0x007fcc11092f10 @task_id="58a6363baa9d139b20a4252f", @type="segmentannotation", @instruction="Please segment the image using the given labels.", @params={"allow_unlabeled"=>false, "labels"=>['background', 'road', 'vegetation', 'lane marking'], "instance_labels"=>['vehicle', 'pedestrian'], "attachment_type"=>"image", "attachment"=>"http://i.imgur.com/XOJbalC.jpg"}, @urgency="standard", @response=nil, @callback_url="http://www.example.com/callback", @created_at=2017-02-16 23:31:07 UTC, @status="pending", @completed_at=nil, @callback_succeeded_at=nil, @metadata={}>
The above command returns an object structured like this:
{
"task_id": "5774cc78b01249ab09f089dd",
"created_at": "2016-9-03T07:38:32.368Z",
"callback_url": "http://www.example.com/callback",
"type": "segmentannotation",
"status": "pending",
"instruction": "Please segment the image using the given labels.",
"urgency": "standard",
"params": {
"allow_unlabeled": false,
"labels": [
"background",
"road",
"vegetation",
"lane marking"
],
"instance_labels": [
"vehicle",
"pedestrian"
],
"attachment_type": "image",
"attachment": "https://scaleapi-cust-lidar.s3.amazonaws.com/ouster-cust/segment/e163a3f9-34ea-42c0-badc-4da7411b8d6e" // automatically converted image
},
"metadata": {}
}
Our semantic segmentation integration extracts the intensity channel from your Ouster sensor data, rectifies it, and segments the resulting grayscale image.
HTTP Request
POST https://api.scale.com/v1/task/segmentannotation
Parameters
Parameters are the same as in our Semantic Segmentation endpoint, except that instead of sending an `attachment` param, you send a `compound_attachment` param containing a dictionary with two keys: `png_url` and `meta_url`. These point to, respectively, the PNG representing the output of the Ouster sensor for a particular frame and the metadata JSON for your particular Ouster sensor.

Additionally, you must send a `format="ouster"` param to indicate usage of the Ouster integration.
Real-Time Validation
In order to ensure high quality and provide faster feedback to our labelers, we offer customers the option to run your own custom validations on task responses before Scale completes tasks and sends the completion callbacks. Currently, we only support real-time validation for imageannotation
and videoplaybackannotation
tasks.
Integration Steps
First, you must create a self-hosted validation endpoint that Scale will call to validate responses. Then, please contact Scale to enable real-time validation.
Once real-time validation is enabled, Scale will send requests to your validation endpoint. We will set the `scale-callback-auth` HTTP header on each request for you to authenticate these requests, similarly to the authentication scheme for https://docs.scale.com/reference#authentication. These validation requests will have a JSON body with the following fields:
Validation Request Fields
Parameter | Type | Description |
---|---|---|
task | Object | Task object. |
response | Object | Response object in a task type-specific format; see the docs for details about the format for each task type. |
Note: If validation takes a long time to run, please let us know.
Your validation endpoint should return an object structured like this:
{
"pass": false,
"issues": {
"uuid_1": [
{
"errorType": "extraneous_annotation",
"locations": [{ "x": 100, "y": 200 }],
"extra": "don't need this annotation"
}
],
"uuid_2": [
{ "errorType": "bad_point" },
{ "errorType": "extra_point" }
]
},
"globalIssues": [{ "errorType": "this error is global" }]
}
After you've run your custom validation logic on the task response, please return a response with the following fields in the response body:
Validation Response Format
Parameter | Type | Description |
---|---|---|
pass | boolean | Whether validation on the response succeeded. Required. |
issues | object | If validation failed, a mapping from annotation ID to a list of Issue objects. Optional. |
globalIssues | Issue array | Issue objects that aren't associated with any particular annotation. Optional. |
Definition: Issue
Key | Type | Description |
---|---|---|
errorType | string | Type of error. We recommend that this is an enum. Required. |
locations | Point array | Pixel location(s) in the image of the error. Point is in the format {x: number, y: number}. Optional. |
frameNum | int | The frame number related to the issue. For video annotation tasks. Optional. |
extra | object | Feedback about the error that will be displayed to labelers; this should be human-readable. Optional. |
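To make the request and response formats above concrete, here is a minimal sketch of a self-hosted validation endpoint using Flask. The framework choice, the header check, and the box-size rule are assumptions for illustration; your endpoint only needs to accept the request fields above and return the response format above.

from flask import Flask, request, jsonify

app = Flask(__name__)
CALLBACK_AUTH_SECRET = 'YOUR_CALLBACK_AUTH_VALUE'  # placeholder: value you verify against scale-callback-auth

@app.route('/validate', methods=['POST'])
def validate():
    # Authenticate the request using the scale-callback-auth header that Scale sets.
    if request.headers.get('scale-callback-auth') != CALLBACK_AUTH_SECRET:
        return jsonify({'error': 'unauthorized'}), 401

    body = request.get_json()
    annotations = body['response'].get('annotations', [])  # exact shape depends on the task type

    # Example rule (assumption for illustration): flag boxes smaller than 10x10 pixels.
    issues = {}
    for ann in annotations:
        if ann.get('width', 0) < 10 or ann.get('height', 0) < 10:
            issues[ann['uuid']] = [{
                'errorType': 'box_too_small',
                'extra': 'Annotation is smaller than the 10px minimum.'
            }]

    return jsonify({'pass': len(issues) == 0, 'issues': issues})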
Testing
In order to facilitate testing of your validation endpoint, we provide a testing endpoint that you can call to manually trigger real-time validation. When you call the testing endpoint, Scale will send a validation request to the specified `callbackURL`. Once we receive a validation response, we will validate the response format and return a response to the original testing request.
HTTP Request
POST https://api.scale.com/v1/linting/task/<TASKID>/send-lint-callback
Parameters
Parameter | Type | Description |
---|---|---|
callbackURL | String | URL of your validation endpoint. |
Response
Same as the validation response format. If there is a validation error, a response with status code 400 will be returned.
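For example, a minimal sketch of triggering a test validation with the requests library (the task ID and callback URL are placeholders):

import requests

task_id = 'YOUR_TASK_ID'  # placeholder: an existing task to run validation against

resp = requests.post(
    'https://api.scale.com/v1/linting/task/%s/send-lint-callback' % task_id,
    json={'callbackURL': 'https://your-server.example.com/validate'},  # placeholder validation endpoint
    headers={'Content-Type': 'application/json'},
    auth=('YOUR_SCALEAPI_KEY', ''))
# A 400 status indicates a validation error; otherwise the body mirrors your endpoint's response.
print(resp.status_code, resp.json())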
Mapping
Groups (beta)
In addition to the normal LabelDescription nesting that Scale has, you can now attach additional grouping info to each LabelDescription. Groups can be used with Rules.
New Parameter | Type | Description |
---|---|---|
groups | Array<string> | A list of groups that this label belongs to. If this choice has subchoices, those subchoices will also belong to these groups. |
In the JSON example to the right:
- The label `Single Solid` belongs to the groups `Roundabout Edge` and `Colored Line`
- The label `Double Solid` only belongs to the group `Roundabout Edge`
Example LabelDescription with groups
// lines:
[
{
"choice": "Curb",
"groups": ["Roundabout Edge"]
},
{
"choice": "Lane Line",
"groups": ["Roundabout Edge"],
"subchoices": [
{ "choice": "Single Solid", "groups": ["Colored Line"] },
"Double Solid"
]
}
]
Rules (beta)
Rules can be defined under Task API params to enforce certain annotation relationships.
must_derive_from
This rule enforces that if line annotations are used to form (in other words, "derive") a polygon annotation, then the labels of the involved annotations must belong to a certain set.
Parameter | Type | Description |
---|---|---|
from | Array<string> | A list of line labels or group names. |
to | Array<string> | A list of polygon labels or group names whose edges must be derived from the `from` lines. |
In the JSON example to the right, the rules can be read as:
- `Roundabout Centers` must derive from any `Roundabout Edge` lines
- and `Shoulder Zones` must derive from any `Single Solid` or `Double Solid` lines
{
  "geometries": ...,
  "base_annotations": ...,
  "rules": {
    "must_derive_from": [
      { "from": ["Roundabout Edge"], "to": ["Roundabout Center"] },
      { "from": ["Single Solid", "Double Solid"], "to": ["Shoulder Zone"] }
    ]
  }
}
Lidar Preprocessing Additional Params
This section documents additional options that can be passed to the Create LidarTopdown Task API (similar to the ImageAnnotation API).
key | type | default | description |
---|---|---|---|
shouldClipIntensity | boolean | true | If true, uses colorIntensityMultiplier to tweak the ortho image contrast amount. |
colorIntensityMultiplier | number | 1 | If > 1, further increases the ortho image contrast, but dim features may get dimmer. |
deviceHeight | number | 1.2 | The height of the lidar device relative to the ground, in meters. If a point on the ground has height z in the device coordinate frame, then z + deviceHeight should be about 0. Used to filter out points that are too high/low more accurately. |
Example `process_attachments_options` section. Other top-level keys will be the same, e.g. "attributes", "geometries":
{
"process_attachments_options": {
"shouldClipIntensity": true,
"colorIntensityMultiplier": 1.5,
"deviceHeight": 1.2
}
}
Base Annotations Additional Params (Experimental)
This section documents additional options that can be passed to the Create LidarTopdown Task API (similar to the ImageAnnotation API).
For most cases, you can leave out the `options` section entirely.
key | type | default | if true | if false |
---|---|---|---|---|
unlock_all | boolean | false | All base annotations will be unlocked for the labeler. | Only base annotations inside the Annotatable Region will be unlocked. |
remove_bordering_annotations | boolean | false | Hide all base annotations that have any vertex outside the Annotatable Region. | All base annotations that touch the Annotatable Region will be visible to the labeler. |
ignore_input_annotatable_regions | boolean | false | Ignore any Annotatable Regions and No Data Zones in base_annotations.world, relabel them as "Previous", and use a new Annotatable Region. | The first input Annotatable Region will be chosen as the true AR. |
Example `base_annotations` sections. Other top-level keys will be the same, e.g. "attributes", "geometries":
{
"base_annotations": {
"world": "https://<url_to_annotations>"
}
}
{
"base_annotations": {
"world": "https://<url_to_annotations>",
"options": {
"unlock_all": true,
"remove_bordering_annotations": true,
"ignore_input_annotatable_regions": true
}
}
}