2021/08/26 - Amazon Rekognition - 3 updated API methods
Changes: This release added new attributes to the Rekognition RecognizeCelebrities and GetCelebrityInfo API operations.
CompareFaces (updated)
{'FaceMatches': {'Face': {'Emotions': [{'Confidence': 'float',
                                        'Type': 'HAPPY | SAD | ANGRY | CONFUSED | DISGUSTED | SURPRISED | CALM | UNKNOWN | FEAR'}],
                          'Smile': {'Confidence': 'float',
                                    'Value': 'boolean'}}},
 'UnmatchedFaces': {'Emotions': [{'Confidence': 'float',
                                  'Type': 'HAPPY | SAD | ANGRY | CONFUSED | DISGUSTED | SURPRISED | CALM | UNKNOWN | FEAR'}],
                    'Smile': {'Confidence': 'float',
                              'Value': 'boolean'}}}
Compares a face in the source input image with each of the 100 largest faces detected in the target input image.
If the source image contains multiple faces, the service detects the largest face and compares it with each face detected in the target image.
Note
CompareFaces uses machine learning algorithms, which are probabilistic. A false negative is an incorrect prediction that a face in the target image has a low similarity confidence score when compared to the face in the source image. To reduce the probability of false negatives, we recommend that you compare the target image against multiple source images. If you plan to use CompareFaces to make a decision that impacts an individual's rights, privacy, or access to services, we recommend that you pass the result to a human for review and further validation before taking action.
You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.
In response, the operation returns an array of face matches ordered by similarity score in descending order. For each face match, the response provides a bounding box of the face, facial landmarks, pose details (pitch, roll, and yaw), quality (brightness and sharpness), and confidence value (indicating the level of confidence that the bounding box contains a face). The response also provides a similarity score, which indicates how closely the faces match.
Note
By default, only faces with a similarity score of greater than or equal to 80% are returned in the response. You can change this value by specifying the SimilarityThreshold parameter.
CompareFaces also returns an array of faces that don't match the source image. For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality. The response also returns information about the face in the source image, including the bounding box of the face and confidence value.
The QualityFilter input parameter allows you to filter out detected faces that don’t meet a required quality bar. The quality bar is based on a variety of common use cases. Use QualityFilter to set the quality bar by specifying LOW, MEDIUM, or HIGH. If you do not want to filter detected faces, specify NONE. The default value is NONE.
If the image doesn't contain Exif metadata, CompareFaces returns orientation information for the source and target images. Use these values to display the images with the correct image orientation.
If no faces are detected in the source or target images, CompareFaces returns an InvalidParameterException error.
Note
This is a stateless API operation. That is, data returned by this operation doesn't persist.
For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.
This operation requires permissions to perform the rekognition:CompareFaces action.
See also: AWS API Documentation
Request Syntax
client.compare_faces(
    SourceImage={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    TargetImage={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    SimilarityThreshold=...,
    QualityFilter='NONE'|'AUTO'|'LOW'|'MEDIUM'|'HIGH'
)
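For orientation, here is a minimal sketch of a CompareFaces call. The bucket and object key names are hypothetical placeholders, not values from this reference; SimilarityThreshold and QualityFilter are optional and are shown only to illustrate the parameters described below.

import boto3

client = boto3.client('rekognition')

# Compare the largest face in the source image against faces in the target image.
# 'my-bucket', 'source.jpg', and 'target.jpg' are placeholders.
response = client.compare_faces(
    SourceImage={'S3Object': {'Bucket': 'my-bucket', 'Name': 'source.jpg'}},
    TargetImage={'S3Object': {'Bucket': 'my-bucket', 'Name': 'target.jpg'}},
    SimilarityThreshold=90,   # only return matches scoring >= 90%
    QualityFilter='AUTO',     # let the service choose the quality bar
)

for match in response['FaceMatches']:
    print(f"Similarity: {match['Similarity']:.1f}%")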
Parameters:
SourceImage (dict) --
[REQUIRED]
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Bytes (bytes) --
Blob of image bytes up to 5 MB.
S3Object (dict) --
Identifies an S3 object as the image source.
Bucket (string) --
Name of the S3 bucket.
Name (string) --
S3 object key name.
Version (string) --
If the bucket has versioning enabled, you can specify the object version.
TargetImage (dict) --
[REQUIRED]
The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Bytes (bytes) --
Blob of image bytes up to 5 MB.
S3Object (dict) --
Identifies an S3 object as the image source.
Bucket (string) --
Name of the S3 bucket.
Name (string) --
S3 object key name.
Version (string) --
If the bucket has versioning enabled, you can specify the object version.
SimilarityThreshold (float) --
The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.
QualityFilter (string) --
A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren't compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.
To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
Return type: dict
Response Syntax
{
    'SourceImageFace': {
        'BoundingBox': {
            'Width': ...,
            'Height': ...,
            'Left': ...,
            'Top': ...
        },
        'Confidence': ...
    },
    'FaceMatches': [
        {
            'Similarity': ...,
            'Face': {
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'Confidence': ...,
                'Landmarks': [
                    {
                        'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil'|'upperJawlineLeft'|'midJawlineLeft'|'chinBottom'|'midJawlineRight'|'upperJawlineRight',
                        'X': ...,
                        'Y': ...
                    },
                ],
                'Pose': {
                    'Roll': ...,
                    'Yaw': ...,
                    'Pitch': ...
                },
                'Quality': {
                    'Brightness': ...,
                    'Sharpness': ...
                },
                'Emotions': [
                    {
                        'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN'|'FEAR',
                        'Confidence': ...
                    },
                ],
                'Smile': {
                    'Value': True|False,
                    'Confidence': ...
                }
            }
        },
    ],
    'UnmatchedFaces': [
        {
            'BoundingBox': {
                'Width': ...,
                'Height': ...,
                'Left': ...,
                'Top': ...
            },
            'Confidence': ...,
            'Landmarks': [
                {
                    'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil'|'upperJawlineLeft'|'midJawlineLeft'|'chinBottom'|'midJawlineRight'|'upperJawlineRight',
                    'X': ...,
                    'Y': ...
                },
            ],
            'Pose': {
                'Roll': ...,
                'Yaw': ...,
                'Pitch': ...
            },
            'Quality': {
                'Brightness': ...,
                'Sharpness': ...
            },
            'Emotions': [
                {
                    'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN'|'FEAR',
                    'Confidence': ...
                },
            ],
            'Smile': {
                'Value': True|False,
                'Confidence': ...
            }
        },
    ],
    'SourceImageOrientationCorrection': 'ROTATE_0'|'ROTATE_90'|'ROTATE_180'|'ROTATE_270',
    'TargetImageOrientationCorrection': 'ROTATE_0'|'ROTATE_90'|'ROTATE_180'|'ROTATE_270'
}
Response Structure
(dict) --
SourceImageFace (dict) --
The face in the source image that was used for comparison.
BoundingBox (dict) --
Bounding box of the face.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
Confidence level that the selected bounding box contains a face.
FaceMatches (list) --
An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.
(dict) --
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
Similarity (float) --
Level of confidence that the faces match.
Face (dict) --
Provides face metadata (bounding box and confidence that the bounding box actually contains a face).
BoundingBox (dict) --
Bounding box of the face.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
Level of confidence that what the bounding box contains is a face.
Landmarks (list) --
An array of facial landmarks.
(dict) --
Indicates the location of the landmark on the face.
Type (string) --
Type of landmark.
X (float) --
The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Y (float) --
The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.
Pose (dict) --
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Roll (float) --
Value representing the face rotation on the roll axis.
Yaw (float) --
Value representing the face rotation on the yaw axis.
Pitch (float) --
Value representing the face rotation on the pitch axis.
Quality (dict) --
Identifies face image brightness and sharpness.
Brightness (float) --
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Sharpness (float) --
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
Emotions (list) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. Valid values include "Happy", "Sad", "Angry", "Confused", "Disgusted", "Surprised", "Calm", "Unknown", and "Fear".
(dict) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Type (string) --
Type of emotion detected.
Confidence (float) --
Level of confidence in the determination.
Smile (dict) --
Indicates whether or not the face is smiling, and the confidence level in the determination.
Value (boolean) --
Boolean value that indicates whether the face is smiling or not.
Confidence (float) --
Level of confidence in the determination.
UnmatchedFaces (list) --
An array of faces in the target image that did not match the source image face.
(dict) --
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
BoundingBox (dict) --
Bounding box of the face.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
Level of confidence that what the bounding box contains is a face.
Landmarks (list) --
An array of facial landmarks.
(dict) --
Indicates the location of the landmark on the face.
Type (string) --
Type of landmark.
X (float) --
The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Y (float) --
The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.
Pose (dict) --
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Roll (float) --
Value representing the face rotation on the roll axis.
Yaw (float) --
Value representing the face rotation on the yaw axis.
Pitch (float) --
Value representing the face rotation on the pitch axis.
Quality (dict) --
Identifies face image brightness and sharpness.
Brightness (float) --
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Sharpness (float) --
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
Emotions (list) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. Valid values include "Happy", "Sad", "Angry", "Confused", "Disgusted", "Surprised", "Calm", "Unknown", and "Fear".
(dict) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Type (string) --
Type of emotion detected.
Confidence (float) --
Level of confidence in the determination.
Smile (dict) --
Indicates whether or not the face is smiling, and the confidence level in the determination.
Value (boolean) --
Boolean value that indicates whether the face is smiling or not.
Confidence (float) --
Level of confidence in the determination.
SourceImageOrientationCorrection (string) --
The value of SourceImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
TargetImageOrientationCorrection (string) --
The value of TargetImageOrientationCorrection is always null.
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.
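Because BoundingBox and Landmark values are ratios of the overall image dimensions, a common follow-up step is converting them to pixel coordinates. A minimal sketch, assuming Pillow is installed and a local copy of the target image is available; 'response' is the dict returned by the compare_faces call sketched earlier:

from PIL import Image  # assumption: Pillow is available for reading image dimensions

# 'target.jpg' is a placeholder local copy of the target image.
width, height = Image.open('target.jpg').size

for match in response['FaceMatches']:
    box = match['Face']['BoundingBox']
    # Bounding box values are ratios of the overall image width and height.
    left = int(box['Left'] * width)
    top = int(box['Top'] * height)
    box_w = int(box['Width'] * width)
    box_h = int(box['Height'] * height)
    print(f"Match ({match['Similarity']:.1f}%) at ({left}, {top}), {box_w}x{box_h} px")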
GetCelebrityInfo (updated)
{'KnownGender': {'Type': 'Male | Female'}}
Gets the name and additional information about a celebrity based on their Amazon Rekognition ID. The additional information is returned as an array of URLs. If there is no additional information about the celebrity, this list is empty.
For more information, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide.
This operation requires permissions to perform the rekognition:GetCelebrityInfo action.
See also: AWS API Documentation
Request Syntax
client.get_celebrity_info(
    Id='string'
)
Parameters:
Id (string) --
[REQUIRED]
The ID for the celebrity. You get the celebrity ID from a call to the RecognizeCelebrities operation, which recognizes celebrities in an image.
Return type: dict
Response Syntax
{
    'Urls': [
        'string',
    ],
    'Name': 'string',
    'KnownGender': {
        'Type': 'Male'|'Female'
    }
}
Response Structure
(dict) --
Urls (list) --
An array of URLs pointing to additional celebrity information.
(string) --
Name (string) --
The name of the celebrity.
KnownGender (dict) --
The known gender for the celebrity.
Type (string) --
The known gender of the celebrity as a string value ('Male' or 'Female').
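A minimal sketch of a GetCelebrityInfo call. The Id value is a placeholder; real IDs come from a prior RecognizeCelebrities response. KnownGender was newly added in this release, so it is read defensively here:

import boto3

client = boto3.client('rekognition')

# Placeholder ID; obtain real IDs from RecognizeCelebrities.
celebrity_id = 'celebrity-id-from-recognize-celebrities'

info = client.get_celebrity_info(Id=celebrity_id)
print(info['Name'])
print(info.get('KnownGender', {}).get('Type'))  # new attribute in this release
for url in info['Urls']:
    print(url)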
RecognizeCelebrities (updated)
{'CelebrityFaces': {'Face': {'Emotions': [{'Confidence': 'float',
                                           'Type': 'HAPPY | SAD | ANGRY | CONFUSED | DISGUSTED | SURPRISED | CALM | UNKNOWN | FEAR'}],
                             'Smile': {'Confidence': 'float',
                                       'Value': 'boolean'}},
                    'KnownGender': {'Type': 'Male | Female'}},
 'UnrecognizedFaces': {'Emotions': [{'Confidence': 'float',
                                     'Type': 'HAPPY | SAD | ANGRY | CONFUSED | DISGUSTED | SURPRISED | CALM | UNKNOWN | FEAR'}],
                       'Smile': {'Confidence': 'float',
                                 'Value': 'boolean'}}}
Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide.
RecognizeCelebrities returns the 64 largest faces in the image. It lists recognized celebrities in the CelebrityFaces array and unrecognized faces in the UnrecognizedFaces array. RecognizeCelebrities doesn't return celebrities whose faces aren't among the largest 64 faces in the image.
For each celebrity recognized, RecognizeCelebrities returns a Celebrity object. The Celebrity object contains the celebrity name, ID, URL links to additional information, match confidence, and a ComparedFace object that you can use to locate the celebrity's face on the image.
Amazon Rekognition doesn't retain information about which images a celebrity has been recognized in. Your application must store this information and use the Celebrity ID property as a unique identifier for the celebrity. If you don't store the celebrity name or additional information URLs returned by RecognizeCelebrities, you will need the ID to identify the celebrity in a call to the GetCelebrityInfo operation.
You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.
For an example, see Recognizing Celebrities in an Image in the Amazon Rekognition Developer Guide.
This operation requires permissions to perform the rekognition:RecognizeCelebrities action.
See also: AWS API Documentation
Request Syntax
client.recognize_celebrities(
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    }
)
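A minimal sketch of a RecognizeCelebrities call that passes raw image bytes; when calling through an AWS SDK such as boto3 you pass the bytes directly, without base64-encoding them yourself. The file name is a placeholder:

import boto3

client = boto3.client('rekognition')

# 'photo.jpg' is a hypothetical local JPEG; boto3 handles encoding of Bytes.
with open('photo.jpg', 'rb') as f:
    response = client.recognize_celebrities(Image={'Bytes': f.read()})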
Parameters:
Image (dict) --
[REQUIRED]
The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
Bytes (bytes) --
Blob of image bytes up to 5 MB.
S3Object (dict) --
Identifies an S3 object as the image source.
Bucket (string) --
Name of the S3 bucket.
Name (string) --
S3 object key name.
Version (string) --
If the bucket has versioning enabled, you can specify the object version.
Return type: dict
Response Syntax
{
    'CelebrityFaces': [
        {
            'Urls': [
                'string',
            ],
            'Name': 'string',
            'Id': 'string',
            'Face': {
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'Confidence': ...,
                'Landmarks': [
                    {
                        'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil'|'upperJawlineLeft'|'midJawlineLeft'|'chinBottom'|'midJawlineRight'|'upperJawlineRight',
                        'X': ...,
                        'Y': ...
                    },
                ],
                'Pose': {
                    'Roll': ...,
                    'Yaw': ...,
                    'Pitch': ...
                },
                'Quality': {
                    'Brightness': ...,
                    'Sharpness': ...
                },
                'Emotions': [
                    {
                        'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN'|'FEAR',
                        'Confidence': ...
                    },
                ],
                'Smile': {
                    'Value': True|False,
                    'Confidence': ...
                }
            },
            'MatchConfidence': ...,
            'KnownGender': {
                'Type': 'Male'|'Female'
            }
        },
    ],
    'UnrecognizedFaces': [
        {
            'BoundingBox': {
                'Width': ...,
                'Height': ...,
                'Left': ...,
                'Top': ...
            },
            'Confidence': ...,
            'Landmarks': [
                {
                    'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil'|'upperJawlineLeft'|'midJawlineLeft'|'chinBottom'|'midJawlineRight'|'upperJawlineRight',
                    'X': ...,
                    'Y': ...
                },
            ],
            'Pose': {
                'Roll': ...,
                'Yaw': ...,
                'Pitch': ...
            },
            'Quality': {
                'Brightness': ...,
                'Sharpness': ...
            },
            'Emotions': [
                {
                    'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN'|'FEAR',
                    'Confidence': ...
                },
            ],
            'Smile': {
                'Value': True|False,
                'Confidence': ...
            }
        },
    ],
    'OrientationCorrection': 'ROTATE_0'|'ROTATE_90'|'ROTATE_180'|'ROTATE_270'
}
Response Structure
(dict) --
CelebrityFaces (list) --
Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image. Each celebrity object includes the following attributes: Face, Confidence, Emotions, Landmarks, Pose, Quality, Smile, Id, KnownGender, MatchConfidence, Name, Urls.
(dict) --
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
Urls (list) --
An array of URLs pointing to additional information about the celebrity. If there is no additional information about the celebrity, this list is empty.
(string) --
Name (string) --
The name of the celebrity.
Id (string) --
A unique identifier for the celebrity.
Face (dict) --
Provides information about the celebrity's face, such as its location on the image.
BoundingBox (dict) --
Bounding box of the face.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
Level of confidence that what the bounding box contains is a face.
Landmarks (list) --
An array of facial landmarks.
(dict) --
Indicates the location of the landmark on the face.
Type (string) --
Type of landmark.
X (float) --
The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Y (float) --
The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.
Pose (dict) --
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Roll (float) --
Value representing the face rotation on the roll axis.
Yaw (float) --
Value representing the face rotation on the yaw axis.
Pitch (float) --
Value representing the face rotation on the pitch axis.
Quality (dict) --
Identifies face image brightness and sharpness.
Brightness (float) --
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Sharpness (float) --
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
Emotions (list) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. Valid values include "Happy", "Sad", "Angry", "Confused", "Disgusted", "Surprised", "Calm", "Unknown", and "Fear".
(dict) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Type (string) --
Type of emotion detected.
Confidence (float) --
Level of confidence in the determination.
Smile (dict) --
Indicates whether or not the face is smiling, and the confidence level in the determination.
Value (boolean) --
Boolean value that indicates whether the face is smiling or not.
Confidence (float) --
Level of confidence in the determination.
MatchConfidence (float) --
The confidence, as a percentage, that Amazon Rekognition has that the recognized face is the celebrity.
KnownGender (dict) --
The known gender identity for the celebrity that matches the provided ID.
Type (string) --
The known gender of the celebrity as a string value ('Male' or 'Female').
UnrecognizedFaces (list) --
Details about each unrecognized face in the image.
(dict) --
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
BoundingBox (dict) --
Bounding box of the face.
Width (float) --
Width of the bounding box as a ratio of the overall image width.
Height (float) --
Height of the bounding box as a ratio of the overall image height.
Left (float) --
Left coordinate of the bounding box as a ratio of overall image width.
Top (float) --
Top coordinate of the bounding box as a ratio of overall image height.
Confidence (float) --
Level of confidence that what the bounding box contains is a face.
Landmarks (list) --
An array of facial landmarks.
(dict) --
Indicates the location of the landmark on the face.
Type (string) --
Type of landmark.
X (float) --
The x-coordinate of the landmark expressed as a ratio of the width of the image. The x-coordinate is measured from the left-side of the image. For example, if the image is 700 pixels wide and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Y (float) --
The y-coordinate of the landmark expressed as a ratio of the height of the image. The y-coordinate is measured from the top of the image. For example, if the image height is 200 pixels and the y-coordinate of the landmark is at 50 pixels, this value is 0.25.
Pose (dict) --
Indicates the pose of the face as determined by its pitch, roll, and yaw.
Roll (float) --
Value representing the face rotation on the roll axis.
Yaw (float) --
Value representing the face rotation on the yaw axis.
Pitch (float) --
Value representing the face rotation on the pitch axis.
Quality (dict) --
Identifies face image brightness and sharpness.
Brightness (float) --
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Sharpness (float) --
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
Emotions (list) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. Valid values include "Happy", "Sad", "Angry", "Confused", "Disgusted", "Surprised", "Calm", "Unknown", and "Fear".
(dict) --
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
Type (string) --
Type of emotion detected.
Confidence (float) --
Level of confidence in the determination.
Smile (dict) --
Indicates whether or not the face is smiling, and the confidence level in the determination.
Value (boolean) --
Boolean value that indicates whether the face is smiling or not.
Confidence (float) --
Level of confidence in the determination.
OrientationCorrection (string) --
Note
Support for estimating image orientation using the OrientationCorrection field has ceased as of August 2021. Any values returned for this field in an API response will always be NULL.
The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. The bounding box coordinates returned in CelebrityFaces and UnrecognizedFaces represent face locations before the image orientation is corrected.
Note
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. If so, and the Exif metadata for the input image populates the orientation field, the value of OrientationCorrection is null. The CelebrityFaces and UnrecognizedFaces bounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.
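To tie the pieces together, here is a minimal sketch that walks a RecognizeCelebrities response and reads the attributes added in this release (Emotions, Smile, KnownGender); 'response' is the dict returned by the recognize_celebrities call sketched earlier:

for celebrity in response['CelebrityFaces']:
    face = celebrity['Face']
    # Emotions and Smile on the Face object, and KnownGender, were added
    # in this release; read them defensively in case they are absent.
    top_emotion = max(face.get('Emotions', []),
                      key=lambda e: e['Confidence'],
                      default={'Type': 'UNKNOWN'})
    known_gender = celebrity.get('KnownGender', {}).get('Type')
    print(f"{celebrity['Name']} (match {celebrity['MatchConfidence']:.1f}%): "
          f"gender {known_gender}, "
          f"dominant emotion {top_emotion['Type']}, "
          f"smiling={face.get('Smile', {}).get('Value')}")

print(f"{len(response['UnrecognizedFaces'])} unrecognized face(s)")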