2019/01/17 - Amazon Rekognition - 2 updated api methods
Changes: GetLabelDetection now returns bounding box information for common objects and a hierarchical taxonomy of detected labels. The version of the model used for video label detection is also returned. DetectModerationLabels now returns the version of the model used for detecting unsafe content.
{'ModerationModelVersion': 'string'}
Detects explicit or suggestive adult content in a specified JPEG or PNG format image. Use DetectModerationLabels to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.
To filter images, use the labels returned by DetectModerationLabels to determine which types of content are appropriate.
For information about moderation labels, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.
See also: AWS API Documentation
Request Syntax
client.detect_moderation_labels(
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    MinConfidence=...
)
dict
[REQUIRED]
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
Bytes (bytes) --
Blob of image bytes up to 5 MB.
S3Object (dict) --
Identifies an S3 object as the image source.
Bucket (string) --
Name of the S3 bucket.
Name (string) --
S3 object key name.
Version (string) --
If the bucket has versioning enabled, you can specify the object version.
float
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.
If you don't specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.
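As an illustration, here is a minimal boto3 sketch of calling detect_moderation_labels, once with an image stored in Amazon S3 and once with raw image bytes. The bucket name, object key, file path, and confidence threshold are placeholders.

import boto3

rekognition = boto3.client('rekognition')

# Image stored in Amazon S3 (required when calling from the AWS CLI, which does
# not accept image bytes).
response = rekognition.detect_moderation_labels(
    Image={
        'S3Object': {
            'Bucket': 'my-example-bucket',   # placeholder bucket name
            'Name': 'photos/sample.jpg'      # placeholder object key
        }
    },
    MinConfidence=75.0  # only return labels detected with at least 75 percent confidence
)

# Alternatively, pass the image bytes directly; boto3 base64-encodes them for you.
with open('sample.jpg', 'rb') as image_file:
    response = rekognition.detect_moderation_labels(
        Image={'Bytes': image_file.read()},
        MinConfidence=75.0
    )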
dict
Response Syntax
{
    'ModerationLabels': [
        {
            'Confidence': ...,
            'Name': 'string',
            'ParentName': 'string'
        },
    ],
    'ModerationModelVersion': 'string'
}
Response Structure
(dict) --
ModerationLabels (list) --
Array of detected Moderation labels and the time, in milliseconds from the start of the video, they were detected.
(dict) --
Provides information about a single type of moderated content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
Confidence (float) --
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don't specify the MinConfidence parameter in the call to DetectModerationLabels, the operation returns labels with a confidence value greater than or equal to 50 percent.
Name (string) --
The label name for the type of content detected in the image.
ParentName (string) --
The name for the parent label. Labels at the top level of the hierarchy have the parent label "".
ModerationModelVersion (string) --
Version number of the moderation detection model that was used to detect unsafe content.
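For example, a short sketch of filtering on the response above by moderation category. The response variable is the dict returned by the call sketched earlier, and 'Explicit Nudity' stands in for whichever top-level categories you want to block.

# Categories to block; adjust this set to match your own content policy.
blocked_categories = {'Explicit Nudity'}

flagged = []
for label in response['ModerationLabels']:
    # Top-level labels have an empty ParentName; second-level labels name their parent.
    if label['Name'] in blocked_categories or label['ParentName'] in blocked_categories:
        flagged.append((label['Name'], label['Confidence']))

print('Moderation model version:', response['ModerationModelVersion'])
for name, confidence in flagged:
    print(f'{name}: {confidence:.1f}%')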
{'LabelModelVersion': 'string'}
Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection.
The label detection operation is started by a call to StartLabelDetection, which returns a job identifier (JobId). When the label detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartLabelDetection. To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetLabelDetection and pass the job identifier (JobId) from the initial call to StartLabelDetection.
GetLabelDetection returns an array of detected labels (Labels) sorted by the time the labels were detected. You can also sort by the label name by specifying NAME for the SortBy input parameter.
The labels returned include the label name, the percentage confidence in the accuracy of the detected label, and the time the label was detected in the video.
The returned labels also include bounding box information for common objects, a hierarchical taxonomy of detected labels, and the version of the label model used for detection.
Use the MaxResults parameter to limit the number of labels returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetLabelDetection and populate the NextToken request parameter with the token value returned from the previous call to GetLabelDetection.
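A simplified sketch of this flow follows: start label detection on a video in a placeholder S3 bucket, then poll GetLabelDetection for the job status. In production you would typically react to the Amazon SNS completion notification rather than polling.

import time

import boto3

rekognition = boto3.client('rekognition')

# Start the asynchronous label detection job on a video stored in S3.
start_response = rekognition.start_label_detection(
    Video={'S3Object': {'Bucket': 'my-example-bucket', 'Name': 'videos/sample.mp4'}}
)
job_id = start_response['JobId']

# Poll until the job finishes (SUCCEEDED or FAILED).
while True:
    result = rekognition.get_label_detection(JobId=job_id)
    if result['JobStatus'] in ('SUCCEEDED', 'FAILED'):
        break
    time.sleep(5)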
See also: AWS API Documentation
Request Syntax
client.get_label_detection(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP'
)
string
[REQUIRED]
Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
integer
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
string
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
string
Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
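Putting the request parameters together, the sketch below pages through every label for a completed job, sorted by name. The job identifier is assumed to come from an earlier call to StartLabelDetection.

import boto3

def get_all_labels(job_id):
    """Collect every label for a finished job, sorted alphabetically by name."""
    rekognition = boto3.client('rekognition')
    labels = []
    next_token = None
    while True:
        kwargs = {'JobId': job_id, 'MaxResults': 1000, 'SortBy': 'NAME'}
        if next_token:
            kwargs['NextToken'] = next_token
        page = rekognition.get_label_detection(**kwargs)
        labels.extend(page['Labels'])
        next_token = page.get('NextToken')
        if not next_token:
            break
    return labels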
dict
Response Syntax
{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123
    },
    'NextToken': 'string',
    'Labels': [
        {
            'Timestamp': 123,
            'Label': {
                'Name': 'string',
                'Confidence': ...,
                'Instances': [
                    {
                        'BoundingBox': {
                            'Width': ...,
                            'Height': ...,
                            'Left': ...,
                            'Top': ...
                        },
                        'Confidence': ...
                    },
                ],
                'Parents': [
                    {
                        'Name': 'string'
                    },
                ]
            }
        },
    ],
    'LabelModelVersion': 'string'
}
Response Structure
(dict) --
JobStatus (string) --
The current status of the label detection job.
StatusMessage (string) --
If the job fails, StatusMessage provides a descriptive error message.
VideoMetadata (dict) --
Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
Codec (string) --
Type of compression used in the analyzed video.
DurationMillis (integer) --
Length of the video in milliseconds.
Format (string) --
Format of the analyzed video. Possible values are MP4, MOV and AVI.
FrameRate (float) --
Number of frames per second in the video.
FrameHeight (integer) --
Vertical pixel dimension of the video.
FrameWidth (integer) --
Horizontal pixel dimension of the video.
NextToken (string) --
If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.
Labels (list) --
An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.
(dict) --
Information about a label detected in a video analysis request and the time the label was detected in the video.
Timestamp (integer) --
Time, in milliseconds from the start of the video, that the label was detected.
Label (dict) --
Details about the detected label.
Name (string) --
The name (label) of the object or scene.
Confidence (float) --
Level of confidence.
Instances (list) --
If Label represents an object, Instances contains the bounding boxes for each instance of the detected object. Bounding boxes are returned for common object labels such as people, cars, furniture, apparel or pets.
(dict) --
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
BoundingBox (dict) --
The position of the label instance on the image.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
The confidence that Amazon Rekognition has in the accuracy of the bounding box.
Parents (list) --
The parent labels for a label. The response includes all ancestor labels.
(dict) --
A parent label for a label. A label can have 0, 1, or more parents.
Name (string) --
The name of the parent label.
LabelModelVersion (string) --
Version number of the label detection model that was used to detect labels.
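As an example of reading this structure, the sketch below prints each label with its timestamp and parent labels, and converts instance bounding boxes from ratios to pixel coordinates using the frame dimensions in VideoMetadata. The result variable is assumed to be a single page returned by get_label_detection.

# result is assumed to be one page returned by get_label_detection.
frame_width = result['VideoMetadata']['FrameWidth']
frame_height = result['VideoMetadata']['FrameHeight']

for entry in result['Labels']:
    label = entry['Label']
    parents = ', '.join(parent['Name'] for parent in label['Parents']) or '(top level)'
    print(f"{entry['Timestamp']} ms  {label['Name']} "
          f"({label['Confidence']:.1f}%)  parents: {parents}")
    for instance in label['Instances']:
        # Bounding box values are ratios of the frame size; convert them to pixels.
        box = instance['BoundingBox']
        left = int(box['Left'] * frame_width)
        top = int(box['Top'] * frame_height)
        width = int(box['Width'] * frame_width)
        height = int(box['Height'] * frame_height)
        print(f"    instance at left={left}, top={top}, width={width}, height={height}")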