Result Types
Workspace
Workspace details for the workspace associated with the provided secret key.
Applicable Functions
Attributes
Workspace(
id="ws_1c8aab980f174b0296c7e35e88665b13",
name="Raighne's Workspace",
owner="user_6323fea23e292439f31c58cd",
tier="Developer",
create_date=1701927649302
)
Name | Type | Description |
---|---|---|
id | str | Unique ID of the current workspace. |
name | str | Name of the current workspace. You can modify this on Nexus directly by clicking on the Settings button on the top right of the Workspace Dashboard. |
owner | str | Unique user ID of the workspace owner. This is tied to the email used to sign up for the Nexus account. |
tier | str | Current workspace tier that determines access to advanced features and increased resource quotas. To enjoy these benefits, check out how to Upgrade Your Plan. |
create_date | int | UNIX timestamp of workspace creation date. |
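The create_date values shown throughout this reference look like millisecond UNIX timestamps. A minimal sketch (the helper is ours, not part of the SDK) for converting one to a readable UTC datetime, assuming milliseconds:

```python
from datetime import datetime, timezone

def workspace_created_at(create_date_ms: int) -> datetime:
    # Convert a millisecond UNIX timestamp (as in Workspace.create_date) to a UTC datetime.
    return datetime.fromtimestamp(create_date_ms / 1000, tz=timezone.utc)

# Using the sample value above
print(workspace_created_at(1701927649302))  # 2023-12-07 05:40:49.302000+00:00
```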
Project
Project details for the project identified by the provided project key.
Applicable Functions
Attributes
Project(
id='proj_9004a21df7b040ace4674c4879603fe8',
name='keypoints',
workspace_id='ws_1c8aab980f174b0296c7e35e88665b13',
type='ObjectDetection',
create_date=1701927649302,
localization='MULTI',
tags=['cat faces'],
groups=['main', 'cats'],
statistic=Statistic(
tags_count=[TagsCountItem(name='cat faces', count=0)],
total_assets=28,
annotated_assets=0,
total_annotations=0
)
)
Attribute | Type | Description |
---|---|---|
id | str | Unique ID of the project. |
name | str | Name of the project. You can modify this by calling the function project.update({"name": "YOUR_NEW_PROJECT_NAME"}) , or you can directly modify this on Nexus by clicking on the Settings tab on the left sidebar of the Project Dashboard. |
workspace_id | str | Unique ID of the current workspace that this project is in. |
type | str | Type of project. This can be one of ObjectDetection , InstanceSegmentation , Classification , or Keypoint depending on the type selected during project creation on Nexus. |
create_date | int | UNIX timestamp of project creation date. |
localization | str | Region(s) for data localization. Defaults to MULTI for multi-region. |
tags | list[str] | List of tag names in the project. |
groups | list[str] | List of asset group names in the project. |
statistic | Statistic object | Contains project statistics for the following categories: - tags_count : List of TagCountItem objects representing tag counts for each tag in the project.- total_assets : Total number of assets in the project.- annotated_assets : Total number of annotated assets in the project.- total_annotations : Total number of annotations in the project. |
Statistic
Project-level statistics on tags, assets, and annotations.
Applicable Functions
Attributes
Statistic(
tags_count=[TagsCountItem(name="cat faces", count=0)],
total_assets=28,
annotated_assets=0,
total_annotations=0
)
Name | Type | Description |
---|---|---|
tags_count | list[ TagCountItem ] | List of TagCountItem objects representing tag counts for each tag in the project. |
total_assets | int | Total number of assets in the project. |
annotated_assets | int | Total number of annotated assets in the project. |
total_annotations | int | Total number of annotations in the project. |
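As a small illustration of how these counts can be combined, the sketch below computes the share of annotated assets in a project; the helper name is ours and not an SDK function.

```python
def annotation_coverage(total_assets: int, annotated_assets: int) -> float:
    # Fraction of assets with at least one annotation; 0.0 for an empty project.
    return annotated_assets / total_assets if total_assets else 0.0

# With the sample Statistic above: 0 of 28 assets annotated
print(f"{annotation_coverage(28, 0):.0%}")  # 0%
```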
ProjectInsight
Information and metrics on completed training runs in a project.
Applicable Functions
Attributes
ProjectInsight(
flow_title='Test workflow',
run_id='run_4a5d406d-464d-470c-bd7d-e92456621ad3',
dataset=InsightDataset(
data_type='Rectangle',
num_classes=1,
average_annotations=5.19,
total_assets=500,
settings=DatasetSettings(
split_ratio=0.3,
shuffle=True,
seed=0,
using_sliding_window=False
)
),
model=InsightModel(
name='fasterrcnn-inceptionv2-1024x1024',
batch_size=2,
training_steps=5000,
max_detection_per_class=100,
solver='momentum',
learning_rate=0.04,
momentum=0.9
),
checkpoint=RunCheckpoint(
strategy='STRAT_ALWAYS_SAVE_LATEST',
evaluation_interval=250,
metric=None
),
artifact=InsightArtifact(
id='artifact_65ae274540259e2a07533532',
is_training=False,
step=5000,
metric=ArtifactMetric(
total_loss=0.32356,
classification_loss=0.012036,
localization_loss=0.010706,
regularization_loss=0.0
)
),
create_date=1705912133684
)
Name | Type | Description |
---|---|---|
flow_title | str | Name of the workflow. |
run_id | str | ID of the training run. |
dataset | InsightDataset object | Information and statistics on the training dataset. |
model | InsightModel object | Information on model architecture and hyperparameters. |
checkpoint | RunCheckpoint object | Information on training setup. |
artifact | InsightArtifact object | Information and metrics on saved artifact. |
create_date | int | UNIX timestamp of the training run creation date. |
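Reading values out of a ProjectInsight is plain attribute access on the nested objects described above. A minimal sketch, assuming insight is a ProjectInsight returned by the SDK:

```python
def summarize_insight(insight) -> str:
    # Pull a few headline numbers from the nested dataset/artifact objects.
    artifact = insight.artifact
    return (
        f"{insight.flow_title} ({insight.run_id}): "
        f"{insight.dataset.total_assets} assets, "
        f"step {artifact.step}, total loss {artifact.metric.total_loss}"
    )
```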
InsightDataset
Information and metrics on project dataset and annotations.
Applicable Functions
Attributes
InsightDataset(
data_type='Rectangle',
num_classes=1,
average_annotations=5.19,
total_assets=500,
settings=DatasetSettings(
split_ratio=0.3,
shuffle=True,
seed=0,
using_sliding_window=False
    )
)
Name | Type | Description |
---|---|---|
data_type | str | Annotation data type used in the dataset, e.g. Rectangle for bounding boxes. |
num_classes | int | Total number of unique classes in the dataset. |
average_annotations | float | Average number of annotations per asset. |
total_assets | int | Total number of assets in the dataset. |
settings | DatasetSettings object | User-selected dataset settings used for the training run. |
InsightModel
Information and metrics on model architecture and hyperparameters.
Applicable Functions
Attributes
InsightModel(
name='fasterrcnn-inceptionv2-1024x1024',
batch_size=2,
training_steps=5000,
max_detection_per_class=100,
solver='momentum',
learning_rate=0.04,
momentum=0.9
)
Name | Type | Description |
---|---|---|
name | str | Name of the model architecture. |
batch_size | int | Number of images that the model is trained with at every step. |
training_steps | int | Total number of training steps. |
max_detection_per_class | int | Maximum number of detections per class that the model will predict. |
solver | str | Name of the optimizer (solver) used in the training. |
learning_rate | float | Value of the learning rate used in the training. |
momentum | float | Value of the momentum used in the training. |
RunCheckpoint
Information and metrics on training setup.
Applicable Functions
Attributes
RunCheckpoint(
strategy='STRAT_ALWAYS_SAVE_LATEST',
evaluation_interval=250,
metric=None
)
Name | Type | Description |
---|---|---|
strategy | str | The checkpointing strategy name for the training, enum: [ STRAT_EVERY_N_EPOCH , STRAT_ALWAYS_SAVE_LATEST , STRAT_LOWEST_VALIDATION_LOSS , STRAT_HIGHEST_ACCURACY ] |
evaluation_interval | int | The checkpoint evaluation interval value for the training. |
metric | str | The checkpointing metric for the training, enum: [Loss/total_loss , Loss/regularization_loss , Loss/localization_loss , Loss/classification_loss , DetectionBoxes_Precision/mAP , DetectionBoxes_Precision/mAP@.50IOU , DetectionBoxes_Precision/mAP (small) , DetectionBoxes_Precision/mAP (medium) , DetectionBoxes_Precision/mAP (large) , DetectionBoxes_Recall/AR@1 , DetectionBoxes_Recall/AR@10 , DetectionBoxes_Recall/AR@100 , DetectionBoxes_Recall/AR@100 (small) , DetectionBoxes_Recall/AR@100 (medium) , DetectionBoxes_Recall/AR@100 (large) ] |
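If checkpoints are evaluated once every evaluation_interval steps, which is an assumption about the training scheduler rather than documented behaviour, the number of evaluations in a run can be estimated as follows.

```python
def num_evaluations(training_steps: int, evaluation_interval: int) -> int:
    # Assumes one checkpoint evaluation per interval of training steps.
    return training_steps // evaluation_interval

# With the sample values above: 5000 steps at an interval of 250
print(num_evaluations(5000, 250))  # 20
```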
InsightArtifact
Information and metrics on training results and artifacts.
Applicable Functions
Attributes
InsightArtifact(
id='artifact_65ae274540259e2a07533532',
is_training=False,
step=5000,
metric=ArtifactMetric(
total_loss=0.32356,
classification_loss=0.012036,
localization_loss=0.010706,
regularization_loss=0.0
)
)
Name | Type | Description |
---|---|---|
id | str | Artifact ID of the training run. |
is_training | bool | Whether the training is still ongoing. |
step | int | Number of steps the training run is currently at. |
metric | ArtifactMetric object | Metrics of the saved artifact from the training run, including various types of losses. |
DatasetSettings
User-selected dataset settings, such as the train-test split ratio and whether the data should be shuffled.
Applicable Functions
Attributes
DatasetSettings(
split_ratio=0.3,
shuffle=True,
seed=0,
using_sliding_window=False
)
Name | Type | Description |
---|---|---|
split_ratio | float | Value between 0 and 1 to indicate the train-test split ratio. |
shuffle | bool | Whether the dataset should be shuffled. |
seed | int | Random seed to use, defaults to 0. |
using_sliding_window | bool | Whether the sliding window feature is enabled. |
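The table above only states that split_ratio is the train-test split ratio. Assuming it is the fraction of assets held out for evaluation, a rough breakdown of the sample dataset would look like this (the helper is illustrative, not part of the SDK):

```python
def split_counts(total_assets: int, split_ratio: float) -> tuple[int, int]:
    # Rough (train, test) asset counts, assuming split_ratio is the held-out fraction.
    test = round(total_assets * split_ratio)
    return total_assets - test, test

# With the InsightDataset sample: 500 assets at split_ratio=0.3
print(split_counts(500, 0.3))  # (350, 150)
```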
ArtifactMetric
Metrics of the saved artifact from the training run, including various types of losses.
Applicable Functions
Attributes
ArtifactMetric(
total_loss=0.32356,
classification_loss=0.012036,
localization_loss=0.010706,
regularization_loss=0.0
)
Name | Type | Description |
---|---|---|
total_loss | float | Sum of all losses (classification loss, localization loss, regularization loss). |
classification_loss | float | Deviation between the predicted object class of each predicted bounding box, and the ground truth object class in the predicted bounding box. |
localization_loss | float | Deviation between the coordinates of each predicted bounding box, and the ground truth bounding box. |
regularization_loss | float | Penalizes weights of model coefficients to prevent overfitting. |
ProjectUser
User metadata.
Applicable Functions
ProjectUser(
id='user_6323fea23e292439f31c58cd',
access_type='Owner',
email='[email protected]',
nickname='raighne',
picture='https://s.gravatar.com/avatar/avatars%2Fra.png'
)
Attribute | Type | Description |
---|---|---|
id | str | User ID. |
access_type | str | The access type of the current project, one of [Owner , Collaborator , Labeller ]. |
email | str | User email. |
nickname | str | User nickname. |
picture | str | User profile picture. |
AssetResults
Data results for a specific asset.
Applicable Functions
Asset(
id='asset_8208740a-2d9c-46e8-abb9-5777371bdcd3',
filename='boat180.png',
project='proj_cd067221d5a6e4007ccbb4afb5966535',
status='None',
create_date=1701927649302,
url='',
metadata=AssetMetadata(
file_size=186497,
mime_type='image/png',
height=243,
width=400,
groups=['main'],
custom_metadata={'captureAt': '2021-03-10T09:00:00Z'}
),
statistic=AssetAnnotationsStatistic(
tags_count=[],
total_annotations=0
)
)
Attribute | Type | Description |
---|---|---|
id | str | Asset ID. |
filename | str | File name of the asset. |
project | str | Project ID in which the asset is contained. |
status | str | The status of the asset, enum: [Annotated , Review , Completed , Tofix , None ] |
create_date | int | UNIX timestamp of when the asset was uploaded. |
url | str | URL to the raw asset file. |
metadata | AssetMetadata object | Asset metadata (see AssetMetadata below). |
statistic | AssetAnnotationsStatistic object | Asset annotation statistics. |
AssetMetadata
Metadata for a specific asset.
Applicable Functions
AssetMetadata(
file_size=186497,
mime_type='image/png',
height=243,
width=400,
groups=['main'],
custom_metadata={'captureAt': '2021-03-10T09:00:00Z'}
)
Attribute | Type | Description |
---|---|---|
file_size | int | Size of the asset in bytes. |
mime_type | str | Media type and format of the asset. |
height | int | Pixel height of the asset. |
width | int | Pixel width of the asset. |
groups | List[str] | The groups of the asset. |
custom_metadata | dict | The custom metadata of the asset. |
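A small illustrative helper (not an SDK function) that turns the AssetMetadata fields above into a one-line summary; file_size is assumed to be in bytes, as documented:

```python
def asset_summary(width: int, height: int, file_size: int, mime_type: str) -> str:
    # Build a human-readable one-liner from AssetMetadata fields.
    return f"{width}x{height} {mime_type}, {file_size / 1024:.1f} KiB"

# With the sample metadata above
print(asset_summary(400, 243, 186497, "image/png"))  # 400x243 image/png, 182.1 KiB
```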
AssetStatistics
Data statistics for a specific asset.
Applicable Functions
AssetAnnotationsStatistic(
tags_count= [
TagsCountItem(name="tagName1", count=1)
],
total_annotations= 2
)
Attribute | Type | Description |
---|---|---|
tags_count | list[TagsCountItem] | List of tag counts for each tag in the asset. |
total_annotations | int | Total number of annotations in the asset. |
GroupStatistics
Asset group statistics.
Applicable Functions
[
AssetGroup(
group='1',
statistic=AssetGroupStatistic(
total_assets=1,
annotated_assets=0,
reviewed_assets=0,
to_fixed_assets=0,
completed_assets=0
)
)
]
Attribute | Type | Description |
---|---|---|
group | str | Name of the asset group. |
statistic | AssetGroupStatistic object | Contains asset counts for the following categories: - total_assets : Total number of assets in the asset group. - annotated_assets : Total number of annotated assets in the asset group. - reviewed_assets : Total number of reviewed assets in the asset group. - to_fixed_assets : Total number of assets in the asset group whose annotations need to be fixed. - completed_assets : Total number of assets in the asset group that have completed the annotation pipeline. |
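A minimal sketch of computing a group's completion rate from the counts described above; a plain dict stands in for the statistic value, and the helper is not an SDK function.

```python
def group_progress(statistic: dict) -> float:
    # Fraction of assets in the group that have completed the annotation pipeline.
    total = statistic.get("total_assets", 0)
    return statistic.get("completed_assets", 0) / total if total else 0.0

# With the sample AssetGroupStatistic above
print(group_progress({"total_assets": 1, "completed_assets": 0}))  # 0.0
```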
TagCountItem
Total count of instances of a tag.
Applicable Functions
Attributes
TagCountItem(
name="tagName1",
count=1
)
Name | Type | Description |
---|---|---|
name | str | Tag name. |
count | int | Total count of instances of the tag. |
AnnotationMetadata
Metadata for a specific annotation.
Applicable Functions
Annotation(
id='annot_a9ff9b21-c0e2-49ff-8a69-773aaf00a6f8',
project_id='proj_cd067221d5a6e4007ccbb4afb5966535',
asset_id='asset_f4dcb429-0332-4dd6-a1b4-fee794031ba6',
tag='boat',
bound_type='Rectangle',
create_date=1701927649302,
bound=[
[0.2772511848341232, 0.34635416666666663],
[0.2772511848341232, 0.46875],
[0.54739336492891, 0.46875],
[0.54739336492891, 0.34635416666666663]
]
)
Attribute | Type | Description |
---|---|---|
id | str | Unique ID of the annotation. |
project_id | str | ID of the project containing the annotation. |
asset_id | str | ID of the asset containing the annotation. |
tag | str | Tag name of the annotation. |
bound_type | str | Bound type of the annotation, one of [Rectangle , Polygon ]. |
bound | list[list[float]] | Bound vertices with the following format:[[x1, y1], [x2, y2], ... , [xn, yn]] |
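The sample bound values are all between 0 and 1, which suggests they are normalised to the asset's dimensions. Assuming that, the sketch below converts them to pixel coordinates, reusing the 400x243 dimensions from the AssetMetadata example purely for illustration.

```python
def bound_to_pixels(bound: list[list[float]], width: int, height: int) -> list[tuple[int, int]]:
    # Scale normalised [x, y] vertices by the asset's pixel dimensions.
    return [(round(x * width), round(y * height)) for x, y in bound]

bound = [
    [0.2772511848341232, 0.34635416666666663],
    [0.2772511848341232, 0.46875],
    [0.54739336492891, 0.46875],
    [0.54739336492891, 0.34635416666666663],
]
print(bound_to_pixels(bound, width=400, height=243))
# [(111, 84), (111, 114), (219, 114), (219, 84)]
```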
WorkflowMetadata
Metadata for a training workflow.
Applicable Functions
Workflow(
id='flow_64e812a7e47592ef374cbbc2',
project_id='proj_cd067221d5a6e4007ccbb4afb5966535',
title='Yolov8 Workflow',
create_date=1701927649302,
update_date=1701927649302
)
Attribute | Type | Description |
---|---|---|
id | str | ID of the workflow. |
title | str | Name of the workflow. |
project_id | str | Project ID containing the workflow. |
create_date | int | UNIX timestamp of the workflow creation date. |
update_date | int | Last updated UNIX timestamp of the workflow. |
TrainingInsight
Insight metadata for a training run.
Applicable Functions
ProjectInsight(
flow_title='Test workflow',
run_id='run_4a5d406d-464d-470c-bd7d-e92456621ad3',
dataset=InsightDataset(
data_type='Rectangle',
num_classes=1,
average_annotations=5.19,
total_assets=500,
settings=DatasetSettings(
split_ratio=0.3,
shuffle=True,
seed=0,
using_sliding_window=False
)
),
model=InsightModel(
name='fasterrcnn-inceptionv2-1024x1024',
batch_size=2,
training_steps=5000,
max_detection_per_class=100,
solver='momentum',
learning_rate=0.04,
momentum=0.9
),
checkpoint=RunCheckpoint(
strategy='STRAT_ALWAYS_SAVE_LATEST',
evaluation_interval=250,
metric=None
),
artifact=InsightArtifact(
id='artifact_65ae274540259e2a07533532',
is_training=False,
step=5000,
metric=ArtifactMetric(
total_loss=0.32356,
classification_loss=0.012036,
localization_loss=0.010706,
regularization_loss=0.0
)
),
create_date=1705912133684
)
Attribute | Type | Description |
---|---|---|
flow_title | str | Name of the workflow. |
run_id | str | ID of the training run. |
dataset | dict | Contains information and statistics on the training dataset. |
step | int | Total number of training steps. |
create_date | int | UNIX timestamp of the training creation date. |
metric | dict | Contains loss metrics for the following categories: - total_loss - classification_loss - localization_loss - regularization_loss |
statistic | dict | Contains dataset statistics for the following categories: - average_annotations : Average number of annotations per asset. |
optimizer | str | Name of the optimizer used in the training. |
learning_rate | float | Value of the learning rate used in the training. |
momentum | float | Value of the momentum used in the training. |
epochs | int | Total number of training epochs. |
batch_size | int | Value of the batch size used in the training. |
model_name | str | Name of the specific model architecture used in the training. |
max_detections_per_class | int | Value to cap the maximum number of detections per class for the model. |
data_type | str | Annotation data type. |
num_classes | int | Total number of unique classes. |
split_ratio | float | Train-test split ratio. |
shuffle | bool | Whether the dataset was shuffled. |
seed | int | Initialization seed for the training. |
checkpoint_every_n | int | Epoch interval to generate checkpoints. |
metric_target | str | Metric used to determine best checkpoint saved. |
TrainingMetadata
Metadata for training runs.
Bases
dict
Applicable Functions
{
"id": "run_63eb212ff0f856bf95085095",
"object": "run",
"project_id": "proj_cd067221d5a6e4007ccbb4afb5966535",
"flow_id": "flow_63bbd3bf8a78eb906f417396",
"status": {
"conditions": [
{
"condition": "TrainingStarted",
"last_updated": 1676353954729,
"status": "finished"
},
{
"condition": "TrainingFinished",
"last_updated": 1676356061724,
"status": "finished"
}
],
"last_updated": 1676356061724
},
"execution": {
"accelerator": {
"name": "GPU_T4",
"count": 2
},
"checkpoint": {
"strategy": "STRAT_LOWEST_VALIDATION_LOSS",
"evaluation_interval": 250,
"metric": "Loss/total_loss"
}
},
"features": {
"matrix": true,
"preview": true
},
"create_date": 1676353954729,
"last_modified_date": 1676356061724,
"logs": [
"log_63eb212ff0f856bf95085095"
]
}
Attribute | Type | Description |
---|---|---|
id | str | Training run ID. |
object | str | Type of object. |
project_id | str | ID of the project containing the training run. |
flow_id | str | ID of the workflow used for the training run. |
status | dict | Status of completion of the different stages in the training run, contains the following categories: - conditions : List of dictionaries that describe the training conditions (TrainingStarted , TrainingFinished ), last updated UNIX timestamp of the operation, and the status of completion.- last_updated : UNIX timestamp of when the training statuses were last updated. |
execution | dict | Contains training configuration parameters for the following categories: - accelerator : Dictionary containing the type and number of GPUs used.- checkpoint : Dictionary containing the checkpoint strategy, evaluation interval, and metric used to save the best checkpoint. |
features | dict | Contains the activation status of certain advanced visualization features such as Evaluation Preview and Confusion Matrix. |
create_date | int | UNIX timestamp of the training creation date. |
last_modified_date | int | UNIX timestamp of the last modified date of the training. |
logs | list[str] | List of training log IDs that can be used to view training logs via datature.Run.log() |
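Because TrainingMetadata behaves like a plain dict, completion can be checked directly from status.conditions. A minimal sketch using only the keys documented above:

```python
def training_finished(run: dict) -> bool:
    # True when a TrainingFinished condition is present and marked as finished.
    return any(
        c["condition"] == "TrainingFinished" and c["status"] == "finished"
        for c in run.get("status", {}).get("conditions", [])
    )
```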
LogMetadata
Metadata for training logs.
Bases
dict
Applicable Functions
{
"id": "log_63eb212ff0f856bf95085095",
"object": "log",
"event": [
{
"ev": "memoryUsage",
"pl": {},
"t": 1675669392000
}
]
}
Attribute | Type | Description |
---|---|---|
id | str | Log ID. |
object | str | Type of object. |
event | list[dict] | List of training event logs containing the following categories: - ev : Type of event tracked.- pl : Detailed description of the logs.- t : UNIX timestamp of the event. |
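Events can be filtered on their ev field in the same way. A small illustrative helper, assuming memoryUsage is one of the tracked event types as in the sample above:

```python
def events_of_type(log: dict, event_type: str) -> list[dict]:
    # Keep only the log events whose "ev" field matches the requested type.
    return [e for e in log.get("event", []) if e.get("ev") == event_type]

# e.g. events_of_type(log, "memoryUsage")
```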
ArtifactMetadata
Metadata for artifacts.
Bases
dict
Applicable Functions
{
"id": "artifact_63bd140e67b42dc9f431ffe2",
"object": "artifact",
"is_training": false,
"step": 3000,
"flow_title": "Blood Cell Detector",
"run_id": "run_63bd08d8cdf700575fa4dd01",
"files": [
{
"name": "ckpt-13.data-00000-of-00001",
"md5": "5a96886e53f98daae379787ee0f22bda"
}
],
"project_id": "proj_cd067221d5a6e4007ccbb4afb5966535",
"artifact_name": "ckpt-13",
"create_date": 1673335822851,
"metric": {
"total_loss": 0.548,
"classification_loss": 0.511,
"localization_loss": 0.006,
"regularization_loss": 0.03
},
"is_deployed": false,
"exports": ["onnx", "tflite"],
"model_type": "efficientdet-d1-640x640",
"exportable_formats": ["tensorflow", "tflite", "onnx", "pytorch"]
}
Attribute | Type | Description |
---|---|---|
id | str | Artifact ID. |
object | str | Type of object. |
is_training | bool | Whether the training is still running. |
step | int | Total number of training steps. |
flow_title | str | Title of the workflow. |
run_id | str | ID of the training run of the current artifact. |
files | list[dict] | List of artifact checkpoint files containing the following categories: - name : Name of the checkpoint file.- md5 : MD5 hash value of the checkpoint file. |
project_id | str | ID of the project containing the current artifact. |
artifact_name | str | Checkpoint name of the artifact. |
create_date | int | UNIX timestamp of the artifact creation date. |
metric | dict | Dictionary containing the following metrics: - total_loss - classification_loss - localization_loss - regularization_loss |
is_deployed | bool | Whether the current artifact has an active deployment. |
exports | list[str] | List of model formats that the artifact has been exported in. |
model_type | str | Model architecture name. |
exportable_formats | list[str] | List of all exportable model formats for the artifact. |
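One way this metadata might be used is to decide whether an artifact still needs exporting in a given format. A minimal sketch over the dict shown above (the helper name is ours):

```python
def needs_export(artifact: dict, fmt: str) -> bool:
    # True when the format is supported but no export of it exists yet.
    return (
        fmt in artifact.get("exportable_formats", [])
        and fmt not in artifact.get("exports", [])
    )

# With the sample above, "pytorch" is exportable but not yet exported
print(needs_export(
    {"exports": ["onnx", "tflite"],
     "exportable_formats": ["tensorflow", "tflite", "onnx", "pytorch"]},
    "pytorch",
))  # True
```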
ExportedMetadata
Metadata of exported models.
Bases
dict
Applicable Functions
{
"id": "model_d15aba68872b045e27ac3db06a401da3",
"object": "model",
"status": "Finished",
"format": "tensorflow",
"create_date": 1673336054173,
"download": {
"method": "GET",
"expiry": 1673339505871,
"url": "https://storage.googleapis.com/exports.datature.ioa2d89"
}
}
Attribute | Type | Description |
---|---|---|
id | str | ID of the exported model. |
object | str | Type of object. |
status | str | Status of the model export. |
format | str | Exported model format. |
create_date | int | UNIX timestamp of the creation date of the exported model. |
download | dict | Dictionary containing the download metadata of the exported model: - method : Request method- expiry : UNIX timestamp of the expiry of the download link.- url : Download link of the exported model. |
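The download dictionary contains everything needed to fetch the exported model over HTTP. A hedged sketch using the third-party requests library, assuming expiry is a millisecond UNIX timestamp like the other timestamps in this reference:

```python
import time

import requests  # third-party HTTP client; any equivalent works

def download_export(export: dict, dest: str) -> None:
    # Fetch the exported model via the signed URL in ExportedMetadata.
    download = export["download"]
    if download["expiry"] / 1000 < time.time():
        raise RuntimeError("Download link has expired; re-export the model to get a new one.")
    response = requests.request(download["method"], download["url"], timeout=60)
    response.raise_for_status()
    with open(dest, "wb") as f:
        f.write(response.content)
```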
Deployment
Metadata for active deployments.
Bases
dict
Applicable Functions
Deployment(
id='deploy_0809bb56-35db-4681-84ee-ebd5fb7b2ee5',
name='my-first-deployment',
status=DeploymentStatus(
overview='Creating',
message='Creating service',
update_data=1724074609167
),
create_date=1724074608669,
update_date=1724074608669,
project_id='proj_ca5fe71e7592bbcf7705ea36e4f29ed4',
artifact_id='artifact_65f140b9020ebc6f2e23cf80',
version_tag='v1',
region='us',
history_versions=[DeploymentHistoryVersion(
version_tag='v1',
artifact_id='artifact_65f140b9020ebc6f2e23cf80',
update_date=1724074608669
)],
options=None,
instance_id='instance_t4-standard-1g',
resources=DeploymentResources(
cpu=6,
ram=24576,
GPU_T4=1,
GPU_L4=None,
GPU_A100_40GB=None,
GPU_A100_80GB=None,
GPU_H100=None
),
scaling=DeploymentScaling(
replicas=1,
mode='FixedReplicaCount'
),
url=None
)
Attribute | Type | Description |
---|---|---|
id | str | ID of the active deployment. |
name | str | Name of the active deployment. |
status | DeploymentStatus | Dictionary of the deployment status containing the following categories: - overview : Overview status of the deployment.- message : Status message of the deployment.- status_date : UNIX timestamp of the last update of the deployment status. |
create_date | int | UNIX timestamp of the creation date of the deployment. |
update_date | int | UNIX timestamp of the last update date of the deployment. |
project_id | str | Project ID containing the active deployment. |
artifact_id | str | ID of the artifact used for the deployment. |
version_tag | str | Current version tag of the deployment. |
region | str | Region where the deployment is hosted. |
history_versions | Optional[List[DeploymentHistoryVersion]] | List of past versions in the deployment history. |
options | DeploymentOptions | Configuration options for the deployment. |
instance_id | str | Instance identifier of the deployment. |
resources | DeploymentResources | CPU and/or GPU resources allocated to the deployment. |
scaling | DeploymentScaling | Dictionary containing the following categories: - mode : Instance scaling mode of the deployment. - replicas : Number of replicas of the deployment. |
url | str | API URL endpoint for prediction requests. |
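A small read-only sketch that summarises a Deployment from the attributes described above; the unit of the ram value is not documented here, so it is reported as-is, and the helper is not part of the SDK.

```python
def deployment_summary(deployment) -> str:
    # One-line status summary built from documented Deployment attributes.
    res = deployment.resources
    return (
        f"{deployment.name} [{deployment.status.overview}] "
        f"cpu={res.cpu} ram={res.ram} region={deployment.region}"
    )
```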
OperationMetadata
Metadata for background operations.
Bases
dict
Applicable Functions
{
"id": "op_508fc5d1-e908-486d-9e7b-1dca99b80024",
"object": "operation",
"op_link": "users/api|affaf/proje-1dca99b80024",
"status": {
"overview": "Queued",
"message": "Operation queued",
"time_updated": 1676621361765,
"time_scheduled": 1676621361765,
"progress": {
"unit": "whole operation",
"with_status": {
"queued": 1,
"running": 0,
"finished": 0,
"cancelled": 0,
"errored": 0
}
}
}
}
Attribute | Type | Description |
---|---|---|
id | str | Unique operation ID. |
object | str | Type of object. |
op_link | str | Operation link used to retrieve operation status. |
status | dict | Operation status metadata. |
OperationStatus
Metadata of operation status.
Bases
dict
{
"overview": "Queued",
"message": "Operation queued",
"time_updated": 1676621361765,
"time_scheduled": 1676621361765,
"progress": {
"unit": "whole operation",
"with_status": {
"queued": 1,
"running": 0,
"finished": 0,
"cancelled": 0,
"errored": 0
}
}
}
Attribute | Type | Description |
---|---|---|
overview | str | Overview status of current operation. |
message | str | Status message of current operation. |
time_updated | int | Last updated UNIX timestamp of current operation status. |
time_scheduled | int | UNIX timestamp of when the operation was first scheduled. |
progress | dict | Operation progress status indicator. |
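A minimal sketch for deciding whether an operation has settled, based only on the progress counters shown above; how these counters behave for multi-part operations is an assumption.

```python
def operation_settled(status: dict) -> bool:
    # Nothing queued or running means every tracked unit is finished, cancelled, or errored.
    counts = status["progress"]["with_status"]
    return counts["queued"] == 0 and counts["running"] == 0

# With the sample OperationStatus above (1 unit still queued)
print(operation_settled({"progress": {"with_status": {"queued": 1, "running": 0}}}))  # False
```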