2018/06/27 - Amazon Comprehend - 16 new and 6 updated API methods
Changes This release gives customers the option to batch process a set of documents stored in an S3 bucket, in addition to the existing synchronous operations of the Comprehend API.
Stops a dominant language detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state.
If the job is in the COMPLETED or FAILED state when you call the StopDominantLanguageDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any document that has already been processed will be written to the output location.
See also: AWS API Documentation
Request Syntax
client.stop_dominant_language_detection_job( JobId='string' )
string
[REQUIRED]
The identifier of the dominant language detection job to stop.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier of the dominant language detection job to stop.
JobStatus (string) --
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopDominantLanguageDetectionJob operation.
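As an illustrative, minimal sketch of the call above (the region and job ID are placeholder assumptions, not values from this changelog), stopping a running job and inspecting the returned status might look like this in Python with boto3:

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')  # region is an assumption

response = comprehend.stop_dominant_language_detection_job(
    JobId='example-dominant-language-job-id'  # placeholder job identifier
)

# STOP_REQUESTED means the job was running and is now marked for termination;
# STOPPED means it had already been stopped earlier.
print(response['JobId'], response['JobStatus'])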
Gets the properties associated with a key phrases detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
client.describe_key_phrases_detection_job( JobId='string' )
string
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
dict
Response Syntax
{ 'KeyPhrasesDetectionJobProperties': { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' } }
Response Structure
(dict) --
KeyPhrasesDetectionJobProperties (dict) --
An object that contains the properties associated with a key phrases detection job.
JobId (string) --
The identifier assigned to the key phrases detection job.
JobName (string) --
The name that you assigned the key phrases detection job.
JobStatus (string) --
The current status of the key phrases detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the key phrases detection job was submitted for processing.
EndTime (datetime) --
The time that the key phrases detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
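The properties above are typically read back to check whether a job has finished or failed. The following is a minimal sketch assuming a placeholder job ID; it only echoes fields documented in this response.

import boto3

comprehend = boto3.client('comprehend')

props = comprehend.describe_key_phrases_detection_job(
    JobId='example-key-phrases-job-id'  # placeholder job identifier
)['KeyPhrasesDetectionJobProperties']

print('Status:', props['JobStatus'])
if props['JobStatus'] == 'FAILED':
    # The Message field explains why the job failed.
    print('Reason:', props.get('Message'))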
Stops an entities detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state.
If the job is in the COMPLETED or FAILED state when you call the StopEntitiesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any document that has already been processed will be written to the output location.
See also: AWS API Documentation
Request Syntax
client.stop_entities_detection_job( JobId='string' )
string
[REQUIRED]
The identifier of the entities detection job to stop.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier of the entities detection job to stop.
JobStatus (string) --
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopEntitiesDetectionJob operation.
Stops a key phrases detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state.
If the job is in the COMPLETED or FAILED state when you call the StopKeyPhrasesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any document that has already been processed will be written to the output location.
See also: AWS API Documentation
Request Syntax
client.stop_key_phrases_detection_job( JobId='string' )
string
[REQUIRED]
The identifier of the key phrases detection job to stop.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier of the key phrases detection job to stop.
JobStatus (string) --
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopKeyPhrasesDetectionJob operation.
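Because calling a stop operation on a job that is already COMPLETED or FAILED returns a 400 error, a caller may want to stop defensively. This sketch assumes a placeholder job ID and catches botocore's generic ClientError rather than a specific exception class, since the changelog does not name one.

import boto3
from botocore.exceptions import ClientError

comprehend = boto3.client('comprehend')

try:
    response = comprehend.stop_key_phrases_detection_job(
        JobId='example-key-phrases-job-id'  # placeholder job identifier
    )
    print('Job status is now', response['JobStatus'])
except ClientError as error:
    # Expected when the job has already reached COMPLETED or FAILED.
    print('Could not stop job:', error)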
Gets a list of the dominant language detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
client.list_dominant_language_detection_jobs( Filter={ 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'SubmitTimeBefore': datetime(2015, 1, 1), 'SubmitTimeAfter': datetime(2015, 1, 1) }, NextToken='string', MaxResults=123 )
dict
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
JobName (string) --
Filters on the name of the job.
JobStatus (string) --
Filters the list of jobs based on job status. Returns only jobs with the specified status.
SubmitTimeBefore (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
SubmitTimeAfter (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
string
Identifies the next page of results to return.
integer
The maximum number of results to return in each page. The default is 100.
dict
Response Syntax
{ 'DominantLanguageDetectionJobPropertiesList': [ { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' } }, ], 'NextToken': 'string' }
Response Structure
(dict) --
DominantLanguageDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a dominant language detection job.
JobId (string) --
The identifier assigned to the dominant language detection job.
JobName (string) --
The name that you assigned to the dominant language detection job.
JobStatus (string) --
The current status of the dominant language detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the dominant language detection job was submitted for processing.
EndTime (datetime) --
The time that the dominant language detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
NextToken (string) --
Identifies the next page of results to return.
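The NextToken field drives pagination. A sketch of paging through jobs submitted in the last day follows; the filter window, page size, and manual loop are illustrative choices, not requirements.

from datetime import datetime, timedelta

import boto3

comprehend = boto3.client('comprehend')

kwargs = {
    'Filter': {'SubmitTimeAfter': datetime.utcnow() - timedelta(days=1)},  # one filter at a time
    'MaxResults': 50,
}
while True:
    page = comprehend.list_dominant_language_detection_jobs(**kwargs)
    for job in page['DominantLanguageDetectionJobPropertiesList']:
        print(job['JobId'], job['JobStatus'])
    token = page.get('NextToken')
    if not token:
        break  # no more pages
    kwargs['NextToken'] = token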
Gets the properties associated with a sentiment detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
client.describe_sentiment_detection_job( JobId='string' )
string
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
dict
Response Syntax
{ 'SentimentDetectionJobProperties': { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' } }
Response Structure
(dict) --
SentimentDetectionJobProperties (dict) --
An object that contains the properties associated with a sentiment detection job.
JobId (string) --
The identifier assigned to the sentiment detection job.
JobName (string) --
The name that you assigned to the sentiment detection job.
JobStatus (string) --
The current status of the sentiment detection job. If the status is FAILED, the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the sentiment detection job was submitted for processing.
EndTime (datetime) --
The time that the sentiment detection job ended.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
Gets the properties associated with an entities detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
client.describe_entities_detection_job( JobId='string' )
string
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
dict
Response Syntax
{ 'EntitiesDetectionJobProperties': { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' } }
Response Structure
(dict) --
EntitiesDetectionJobProperties (dict) --
An object that contains the properties associated with an entities detection job.
JobId (string) --
The identifier assigned to the entities detection job.
JobName (string) --
The name that you assigned the entities detection job.
JobStatus (string) --
The current status of the entities detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the entities detection job was submitted for processing.
EndTime (datetime) --
The time that the entities detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
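Since the describe operation is how an asynchronous job is tracked, a small polling helper is a common pattern. The sketch below is not an official waiter; the polling interval, the terminal-state set, and the commented-out job ID are assumptions.

import time

import boto3

comprehend = boto3.client('comprehend')
TERMINAL_STATES = {'COMPLETED', 'FAILED', 'STOPPED'}

def wait_for_entities_job(job_id, delay_seconds=60):
    # Poll describe_entities_detection_job until the job reaches a terminal state.
    while True:
        props = comprehend.describe_entities_detection_job(
            JobId=job_id)['EntitiesDetectionJobProperties']
        if props['JobStatus'] in TERMINAL_STATES:
            return props
        time.sleep(delay_seconds)

# final_props = wait_for_entities_job('example-entities-job-id')  # placeholder job identifier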
Gets a list of sentiment detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
client.list_sentiment_detection_jobs( Filter={ 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'SubmitTimeBefore': datetime(2015, 1, 1), 'SubmitTimeAfter': datetime(2015, 1, 1) }, NextToken='string', MaxResults=123 )
dict
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
JobName (string) --
Filters on the name of the job.
JobStatus (string) --
Filters the list of jobs based on job status. Returns only jobs with the specified status.
SubmitTimeBefore (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
SubmitTimeAfter (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
string
Identifies the next page of results to return.
integer
The maximum number of results to return in each page. The default is 100.
dict
Response Syntax
{ 'SentimentDetectionJobPropertiesList': [ { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' }, ], 'NextToken': 'string' }
Response Structure
(dict) --
SentimentDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a sentiment detection job.
JobId (string) --
The identifier assigned to the sentiment detection job.
JobName (string) --
The name that you assigned to the sentiment detection job.
JobStatus (string) --
The current status of the sentiment detection job. If the status is FAILED, the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the sentiment detection job was submitted for processing.
EndTime (datetime) --
The time that the sentiment detection job ended.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
NextToken (string) --
Identifies the next page of results to return.
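Combining the list and stop operations, the sketch below finds sentiment detection jobs that are still IN_PROGRESS and asks the service to stop them. For brevity it only handles the first page of results; paging works as shown for the other list operations.

import boto3

comprehend = boto3.client('comprehend')

page = comprehend.list_sentiment_detection_jobs(
    Filter={'JobStatus': 'IN_PROGRESS'}  # single filter: only running jobs
)
for job in page['SentimentDetectionJobPropertiesList']:
    result = comprehend.stop_sentiment_detection_job(JobId=job['JobId'])
    print(job.get('JobName', job['JobId']), '->', result['JobStatus'])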
Gets a list of the entity detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
client.list_entities_detection_jobs( Filter={ 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'SubmitTimeBefore': datetime(2015, 1, 1), 'SubmitTimeAfter': datetime(2015, 1, 1) }, NextToken='string', MaxResults=123 )
dict
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
JobName (string) --
Filters on the name of the job.
JobStatus (string) --
Filters the list of jobs based on job status. Returns only jobs with the specified status.
SubmitTimeBefore (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
SubmitTimeAfter (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
string
Identifies the next page of results to return.
integer
The maximum number of results to return in each page. The default is 100.
dict
Response Syntax
{ 'EntitiesDetectionJobPropertiesList': [ { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' }, ], 'NextToken': 'string' }
Response Structure
(dict) --
EntitiesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about an entities detection job.
JobId (string) --
The identifier assigned to the entities detection job.
JobName (string) --
The name that you assigned the entities detection job.
JobStatus (string) --
The current status of the entities detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the entities detection job was submitted for processing.
EndTime (datetime) --
The time that the entities detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
NextToken (string) --
Identifies the next page of results to return.
Stops a sentiment detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state.
If the job is in the COMPLETED or FAILED state when you call the StopSentimentDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any document that has already been processed will be written to the output location.
See also: AWS API Documentation
Request Syntax
client.stop_sentiment_detection_job( JobId='string' )
string
[REQUIRED]
The identifier of the sentiment detection job to stop.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier of the sentiment detection job to stop.
JobStatus (string) --
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopSentimentDetectionJob operation.
Starts an asynchronous key phrase detection job for a collection of documents. Use the DescribeKeyPhrasesDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
client.start_key_phrases_detection_job( InputDataConfig={ 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, OutputDataConfig={ 'S3Uri': 'string' }, DataAccessRoleArn='string', JobName='string', LanguageCode='en'|'es', ClientRequestToken='string' )
dict
[REQUIRED]
Specifies the format and location of the input data for the job.
S3Uri (string) -- [REQUIRED]
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
dict
[REQUIRED]
Specifies where to send the output files.
S3Uri (string) -- [REQUIRED]
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
string
The identifier of the job.
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
string
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the DescribeKeyPhrasesDetectionJob operation.
JobStatus (string) --
The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
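Putting the request parameters above together, a hypothetical start call could look like the following. The bucket names, prefix, IAM role ARN, and job name are placeholders to replace with your own resources; the role must grant Amazon Comprehend read access to the input location as noted above.

import boto3

comprehend = boto3.client('comprehend')

start = comprehend.start_key_phrases_detection_job(
    InputDataConfig={
        'S3Uri': 's3://example-input-bucket/documents/',  # placeholder bucket and prefix
        'InputFormat': 'ONE_DOC_PER_LINE',
    },
    OutputDataConfig={
        'S3Uri': 's3://example-output-bucket/comprehend-results/',  # placeholder bucket and prefix
    },
    DataAccessRoleArn='arn:aws:iam::123456789012:role/ExampleComprehendRole',  # placeholder ARN
    JobName='example-key-phrases-job',
    LanguageCode='en',
)
print(start['JobId'], start['JobStatus'])  # typically SUBMITTED at this point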
Get a list of key phrase detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
client.list_key_phrases_detection_jobs( Filter={ 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'SubmitTimeBefore': datetime(2015, 1, 1), 'SubmitTimeAfter': datetime(2015, 1, 1) }, NextToken='string', MaxResults=123 )
dict
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
JobName (string) --
Filters on the name of the job.
JobStatus (string) --
Filters the list of jobs based on job status. Returns only jobs with the specified status.
SubmitTimeBefore (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
SubmitTimeAfter (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
string
Identifies the next page of results to return.
integer
The maximum number of results to return in each page. The default is 100.
dict
Response Syntax
{ 'KeyPhrasesDetectionJobPropertiesList': [ { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'LanguageCode': 'en'|'es' }, ], 'NextToken': 'string' }
Response Structure
(dict) --
KeyPhrasesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a key phrases detection job.
JobId (string) --
The identifier assigned to the key phrases detection job.
JobName (string) --
The name that you assigned the key phrases detection job.
JobStatus (string) --
The current status of the key phrases detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the key phrases detection job was submitted for processing.
EndTime (datetime) --
The time that the key phrases detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
NextToken (string) --
Identifies the next page of results to return.
Starts an asynchronous sentiment detection job for a collection of documents. Use the DescribeSentimentDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
client.start_sentiment_detection_job( InputDataConfig={ 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, OutputDataConfig={ 'S3Uri': 'string' }, DataAccessRoleArn='string', JobName='string', LanguageCode='en'|'es', ClientRequestToken='string' )
dict
[REQUIRED]
Specifies the format and location of the input data for the job.
S3Uri (string) -- [REQUIRED]
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
dict
[REQUIRED]
Specifies where to send the output files.
S3Uri (string) -- [REQUIRED]
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
string
The identifier of the job.
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
string
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the DescribeSentimentDetectionJob operation.
JobStatus (string) --
The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
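Once a sentiment detection job reaches COMPLETED, the OutputDataConfig S3Uri returned by the describe operation points at the output.tar.gz archive described above. The sketch below is a best-effort assumption about fetching and unpacking it with the S3 client and the standard tarfile module; the job ID, the URI parsing, and the local paths are all assumptions.

import tarfile

import boto3

comprehend = boto3.client('comprehend')
s3 = boto3.client('s3')

props = comprehend.describe_sentiment_detection_job(
    JobId='example-sentiment-job-id'  # placeholder job identifier
)['SentimentDetectionJobProperties']

if props['JobStatus'] == 'COMPLETED':
    uri = props['OutputDataConfig']['S3Uri']
    # Assumes the returned URI has the form s3://bucket/key/to/output.tar.gz
    bucket, key = uri[len('s3://'):].split('/', 1)
    s3.download_file(bucket, key, 'output.tar.gz')
    with tarfile.open('output.tar.gz') as archive:
        archive.extractall('sentiment-output')  # unpack the archive locally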
Starts an asynchronous entity detection job for a collection of documents. Use the DescribeEntitiesDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
client.start_entities_detection_job( InputDataConfig={ 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, OutputDataConfig={ 'S3Uri': 'string' }, DataAccessRoleArn='string', JobName='string', LanguageCode='en'|'es', ClientRequestToken='string' )
dict
[REQUIRED]
Specifies the format and location of the input data for the job.
S3Uri (string) -- [REQUIRED]
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
dict
[REQUIRED]
Specifies where to send the output files.
S3Uri (string) -- [REQUIRED]
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
string
The identifier of the job.
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
string
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the DescribeEntitiesDetectionJob operation.
JobStatus (string) --
The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
Starts an asynchronous dominant language detection job for a collection of documents. Use the DescribeDominantLanguageDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
client.start_dominant_language_detection_job( InputDataConfig={ 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, OutputDataConfig={ 'S3Uri': 'string' }, DataAccessRoleArn='string', JobName='string', ClientRequestToken='string' )
dict
[REQUIRED]
Specifies the format and location of the input data for the job.
S3Uri (string) -- [REQUIRED]
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
dict
[REQUIRED]
Specifies where to send the output files.
S3Uri (string) -- [REQUIRED]
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
string
An identifier for the job.
string
A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the DescribeDominantLanguageDetectionJob operation.
JobStatus (string) --
The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
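The ClientRequestToken uniquely identifies the request, so in other AWS APIs a caller-supplied token is commonly reused to make retries of the same logical request safe; this changelog does not spell that behavior out, so treat it as an assumption. The bucket names, role ARN, and token value below are placeholders.

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.start_dominant_language_detection_job(
    InputDataConfig={
        'S3Uri': 's3://example-input-bucket/documents/',  # placeholder bucket and prefix
        'InputFormat': 'ONE_DOC_PER_FILE',
    },
    OutputDataConfig={'S3Uri': 's3://example-output-bucket/comprehend-results/'},  # placeholder
    DataAccessRoleArn='arn:aws:iam::123456789012:role/ExampleComprehendRole',  # placeholder ARN
    JobName='example-language-job',
    ClientRequestToken='example-token-0001',  # caller-chosen token; reused if the call is retried
)
print(response['JobId'], response['JobStatus'])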
Gets the properties associated with a dominant language detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
client.describe_dominant_language_detection_job( JobId='string' )
string
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
dict
Response Syntax
{ 'DominantLanguageDetectionJobProperties': { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' } } }
Response Structure
(dict) --
DominantLanguageDetectionJobProperties (dict) --
An object that contains the properties associated with a dominant language detection job.
JobId (string) --
The identifier assigned to the dominant language detection job.
JobName (string) --
The name that you assigned to the dominant language detection job.
JobStatus (string) --
The current status of the dominant language detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the dominant language detection job was submitted for processing.
EndTime (datetime) --
The time that the dominant language detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz. It is a compressed archive that contains the output of the operation.
{'LanguageCode': ['en', 'es']}
Inspects the text of a batch of documents for named entities and returns information about them. For more information about named entities, see the entities topic in the Amazon Comprehend Developer Guide.
See also: AWS API Documentation
Request Syntax
client.batch_detect_entities( TextList=[ 'string', ], LanguageCode='en'|'es' )
list
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
(string) --
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{ 'ResultList': [ { 'Index': 123, 'Entities': [ { 'Score': ..., 'Type': 'PERSON'|'LOCATION'|'ORGANIZATION'|'COMMERCIAL_ITEM'|'EVENT'|'DATE'|'QUANTITY'|'TITLE'|'OTHER', 'Text': 'string', 'BeginOffset': 123, 'EndOffset': 123 }, ] }, ], 'ErrorList': [ { 'Index': 123, 'ErrorCode': 'string', 'ErrorMessage': 'string' }, ] }
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
Entities (list) --
One or more Entity objects, one for each entity detected in the document.
(dict) --
Provides information about an entity.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Type (string) --
The entity's type.
Text (string) --
The text of the entity.
BeginOffset (integer) --
A character offset in the input text that shows where the entity begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text that shows where the entity ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
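A short sketch of the synchronous batch call, pairing ResultList and ErrorList entries back to the inputs through the zero-based Index field. The sample texts are made up for illustration.

import boto3

comprehend = boto3.client('comprehend')

texts = [
    'Jeff lives in Seattle and works at Example Corp.',   # made-up sample text
    'The meeting is on Friday at the Berlin office.',
]

response = comprehend.batch_detect_entities(TextList=texts, LanguageCode='en')

for result in response['ResultList']:
    print('Document:', texts[result['Index']])
    for entity in result['Entities']:
        print(' ', entity['Type'], repr(entity['Text']), round(entity['Score'], 3))

for error in response['ErrorList']:
    print('Document', error['Index'], 'failed:', error['ErrorCode'], error['ErrorMessage'])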
{'LanguageCode': ['en', 'es']}
Detects the key noun phrases found in a batch of documents.
See also: AWS API Documentation
Request Syntax
client.batch_detect_key_phrases( TextList=[ 'string', ], LanguageCode='en'|'es' )
list
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
(string) --
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{ 'ResultList': [ { 'Index': 123, 'KeyPhrases': [ { 'Score': ..., 'Text': 'string', 'BeginOffset': 123, 'EndOffset': 123 }, ] }, ], 'ErrorList': [ { 'Index': 123, 'ErrorCode': 'string', 'ErrorMessage': 'string' }, ] }
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
KeyPhrases (list) --
One or more KeyPhrase objects, one for each key phrase detected in the document.
(dict) --
Describes a key noun phrase.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Text (string) --
The text of a key noun phrase.
BeginOffset (integer) --
A character offset in the input text that shows where the key phrase begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text where the key phrase ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
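Because each request accepts at most 25 documents, a larger corpus has to be split into batches on the client side. The helper below is a sketch built only on that documented limit; it does not check the per-document 5,000-byte limit, which a real caller would also need to respect.

import boto3

comprehend = boto3.client('comprehend')
BATCH_SIZE = 25  # documented maximum number of documents per request

def detect_key_phrases_in_bulk(documents, language_code='en'):
    # Yields (document_index, key_phrase_dict) pairs across all batches.
    for start in range(0, len(documents), BATCH_SIZE):
        batch = documents[start:start + BATCH_SIZE]
        response = comprehend.batch_detect_key_phrases(
            TextList=batch, LanguageCode=language_code)
        for result in response['ResultList']:
            for phrase in result['KeyPhrases']:
                yield start + result['Index'], phrase

# for index, phrase in detect_key_phrases_in_bulk(['sample text one', 'sample text two']):
#     print(index, phrase['Text'], phrase['Score'])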
{'LanguageCode': ['en', 'es']}
Inspects a batch of documents and returns an inference of the prevailing sentiment, POSITIVE , NEUTRAL , MIXED , or NEGATIVE , in each one.
See also: AWS API Documentation
Request Syntax
client.batch_detect_sentiment( TextList=[ 'string', ], LanguageCode='en'|'es' )
list
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
(string) --
string
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{ 'ResultList': [ { 'Index': 123, 'Sentiment': 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED', 'SentimentScore': { 'Positive': ..., 'Negative': ..., 'Neutral': ..., 'Mixed': ... } }, ], 'ErrorList': [ { 'Index': 123, 'ErrorCode': 'string', 'ErrorMessage': 'string' }, ] }
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
Sentiment (string) --
The sentiment detected in the document.
SentimentScore (dict) --
The level of confidence that Amazon Comprehend has in the accuracy of its sentiment detection.
Positive (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the POSITIVE sentiment.
Negative (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEGATIVE sentiment.
Neutral (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEUTRAL sentiment.
Mixed (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the MIXED sentiment.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
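A sketch showing how the Sentiment label and the four SentimentScore confidences map back to each input document; the texts are made up for illustration.

import boto3

comprehend = boto3.client('comprehend')

texts = [
    'I love this product, it works perfectly.',         # made-up sample text
    'The delivery was late and the box was damaged.',
]

response = comprehend.batch_detect_sentiment(TextList=texts, LanguageCode='en')

for result in response['ResultList']:
    scores = result['SentimentScore']
    print(texts[result['Index']])
    print('  sentiment:', result['Sentiment'],
          '(positive=%.2f negative=%.2f neutral=%.2f mixed=%.2f)' % (
              scores['Positive'], scores['Negative'], scores['Neutral'], scores['Mixed']))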
{'TopicsDetectionJobProperties': {'JobStatus': ['STOPPED', 'STOP_REQUESTED']}}
Gets the properties associated with a topic detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
client.describe_topics_detection_job( JobId='string' )
string
[REQUIRED]
The identifier assigned by the user to the detection job.
dict
Response Syntax
{ 'TopicsDetectionJobProperties': { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'NumberOfTopics': 123 } }
Response Structure
(dict) --
TopicsDetectionJobProperties (dict) --
The list of properties for the requested job.
JobId (string) --
The identifier assigned to the topic detection job.
JobName (string) --
The name of the topic detection job.
JobStatus (string) --
The current status of the topic detection job. If the status is FAILED , the reason for the failure is shown in the Message field.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the topic detection job was submitted for processing.
EndTime (datetime) --
The time that the topic detection job was completed.
InputDataConfig (dict) --
The input data configuration supplied when you created the topic detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration supplied when you created the topic detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
NumberOfTopics (integer) --
The number of topics to detect supplied when you created the topic detection job. The default is 10.
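A minimal polling sketch, assuming a boto3 client in us-east-1 and a placeholder JobId returned by start_topics_detection_job: it calls describe_topics_detection_job until the job reaches a terminal state, then prints the output location or the failure message.
import time
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')  # region is an assumption
job_id = 'JOB_ID_FROM_START_TOPICS_DETECTION_JOB'                 # placeholder value

while True:
    props = comprehend.describe_topics_detection_job(JobId=job_id)['TopicsDetectionJobProperties']
    status = props['JobStatus']
    if status in ('COMPLETED', 'FAILED', 'STOPPED'):
        break
    time.sleep(60)  # the job is asynchronous; poll at a modest interval

if status == 'COMPLETED':
    print('Output archive:', props['OutputDataConfig']['S3Uri'])
else:
    print(status, props.get('Message', ''))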
{'Filter': {'JobStatus': ['STOPPED', 'STOP_REQUESTED']}}
Response
{'TopicsDetectionJobPropertiesList': {'JobStatus': ['STOPPED', 'STOP_REQUESTED']}}
Gets a list of the topic detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
client.list_topics_detection_jobs( Filter={ 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'SubmitTimeBefore': datetime(2015, 1, 1), 'SubmitTimeAfter': datetime(2015, 1, 1) }, NextToken='string', MaxResults=123 )
dict
Filters the jobs that are returned. Jobs can be filtered on their name, status, or the date and time that they were submitted. You can set only one filter at a time.
JobName (string) --
JobStatus (string) --
Filters the list of topic detection jobs based on job status. Returns only jobs with the specified status.
SubmitTimeBefore (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
SubmitTimeAfter (datetime) --
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
string
Identifies the next page of results to return.
integer
The maximum number of results to return in each page. The default is 100.
dict
Response Syntax
{ 'TopicsDetectionJobPropertiesList': [ { 'JobId': 'string', 'JobName': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED', 'Message': 'string', 'SubmitTime': datetime(2015, 1, 1), 'EndTime': datetime(2015, 1, 1), 'InputDataConfig': { 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, 'OutputDataConfig': { 'S3Uri': 'string' }, 'NumberOfTopics': 123 }, ], 'NextToken': 'string' }
Response Structure
(dict) --
TopicsDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a topic detection job.
JobId (string) --
The identifier assigned to the topic detection job.
JobName (string) --
The name of the topic detection job.
JobStatus (string) --
The current status of the topic detection job. If the status is FAILED , the reason for the failure is shown in the Message field.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the topic detection job was submitted for processing.
EndTime (datetime) --
The time that the topic detection job was completed.
InputDataConfig (dict) --
The input data configuration supplied when you created the topic detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration supplied when you created the topic detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
NumberOfTopics (integer) --
The number of topics to detect supplied when you created the topic detection job. The default is 10.
NextToken (string) --
Identifies the next page of results to return.
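A minimal sketch, assuming a boto3 client in us-east-1: it filters the listing to COMPLETED jobs and follows NextToken to collect every page of results.
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')  # region is an assumption

kwargs = {
    'Filter': {'JobStatus': 'COMPLETED'},  # only one filter field can be set at a time
    'MaxResults': 50,
}

jobs = []
while True:
    page = comprehend.list_topics_detection_jobs(**kwargs)
    jobs.extend(page['TopicsDetectionJobPropertiesList'])
    if 'NextToken' not in page:
        break
    kwargs['NextToken'] = page['NextToken']

for job in jobs:
    print(job['JobId'], job.get('JobName', ''), job['JobStatus'])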
{'JobStatus': ['STOPPED', 'STOP_REQUESTED']}
Starts an asynchronous topic detection job. Use the DescribeTopicDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
client.start_topics_detection_job( InputDataConfig={ 'S3Uri': 'string', 'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE' }, OutputDataConfig={ 'S3Uri': 'string' }, DataAccessRoleArn='string', JobName='string', NumberOfTopics=123, ClientRequestToken='string' )
dict
[REQUIRED]
Specifies the format and location of the input data for the job.
S3Uri (string) -- [REQUIRED]
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
dict
[REQUIRED]
Specifies where to send the output files. The output is a compressed archive with two files: topic-terms.csv, which lists the terms associated with each topic, and doc-topics.csv, which lists the documents associated with each topic.
S3Uri (string) -- [REQUIRED]
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
string
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
string
The identifier of the job.
integer
The number of topics to detect.
string
A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{ 'JobId': 'string', 'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED' }
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of the job, use this identifier with the DescribeTopicDetectionJob operation.
JobStatus (string) --
The status of the job:
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the DescribeTopicDetectionJob operation.
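A minimal sketch of starting a job, assuming a boto3 client in us-east-1; the bucket URIs, IAM role ARN, and job name are placeholders to replace with your own resources.
import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')  # region is an assumption

response = comprehend.start_topics_detection_job(
    InputDataConfig={
        'S3Uri': 's3://my-input-bucket/comprehend/docs/',       # placeholder bucket/prefix
        'InputFormat': 'ONE_DOC_PER_LINE',
    },
    OutputDataConfig={
        'S3Uri': 's3://my-output-bucket/comprehend/topics/',    # placeholder bucket/prefix
    },
    DataAccessRoleArn='arn:aws:iam::123456789012:role/ComprehendDataAccessRole',  # placeholder
    JobName='example-topics-job',
    NumberOfTopics=10,
)

print(response['JobId'], response['JobStatus'])  # JobStatus is typically SUBMITTED at this point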