AWS Glue

2020/10/21 - AWS Glue - 5 updated api methods

Changes  AWS Glue crawlers now support incremental crawls for the Amazon Simple Storage Service (Amazon S3) data source.

BatchGetCrawlers (updated)
Changes (response)
{'Crawlers': {'RecrawlPolicy': {'RecrawlBehavior': 'CRAWL_EVERYTHING | '
                                                   'CRAWL_NEW_FOLDERS_ONLY'}}}

Returns a list of resource metadata for a given list of crawler names. After calling the ListCrawlers operation, you can call this operation to access the data to which you have been granted permissions. This operation supports all IAM permissions, including permission conditions that use tags.

See also: AWS API Documentation

Request Syntax

client.batch_get_crawlers(
    CrawlerNames=[
        'string',
    ]
)
type CrawlerNames

list

param CrawlerNames

[REQUIRED]

A list of crawler names, which might be the names returned from the ListCrawlers operation.

  • (string) --

rtype

dict

returns

Response Syntax

{
    'Crawlers': [
        {
            'Name': 'string',
            'Role': 'string',
            'Targets': {
                'S3Targets': [
                    {
                        'Path': 'string',
                        'Exclusions': [
                            'string',
                        ],
                        'ConnectionName': 'string'
                    },
                ],
                'JdbcTargets': [
                    {
                        'ConnectionName': 'string',
                        'Path': 'string',
                        'Exclusions': [
                            'string',
                        ]
                    },
                ],
                'MongoDBTargets': [
                    {
                        'ConnectionName': 'string',
                        'Path': 'string',
                        'ScanAll': True|False
                    },
                ],
                'DynamoDBTargets': [
                    {
                        'Path': 'string',
                        'scanAll': True|False,
                        'scanRate': 123.0
                    },
                ],
                'CatalogTargets': [
                    {
                        'DatabaseName': 'string',
                        'Tables': [
                            'string',
                        ]
                    },
                ]
            },
            'DatabaseName': 'string',
            'Description': 'string',
            'Classifiers': [
                'string',
            ],
            'RecrawlPolicy': {
                'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'
            },
            'SchemaChangePolicy': {
                'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
                'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
            },
            'State': 'READY'|'RUNNING'|'STOPPING',
            'TablePrefix': 'string',
            'Schedule': {
                'ScheduleExpression': 'string',
                'State': 'SCHEDULED'|'NOT_SCHEDULED'|'TRANSITIONING'
            },
            'CrawlElapsedTime': 123,
            'CreationTime': datetime(2015, 1, 1),
            'LastUpdated': datetime(2015, 1, 1),
            'LastCrawl': {
                'Status': 'SUCCEEDED'|'CANCELLED'|'FAILED',
                'ErrorMessage': 'string',
                'LogGroup': 'string',
                'LogStream': 'string',
                'MessagePrefix': 'string',
                'StartTime': datetime(2015, 1, 1)
            },
            'Version': 123,
            'Configuration': 'string',
            'CrawlerSecurityConfiguration': 'string'
        },
    ],
    'CrawlersNotFound': [
        'string',
    ]
}

Response Structure

  • (dict) --

    • Crawlers (list) --

      A list of crawler definitions.

      • (dict) --

        Specifies a crawler program that examines a data source and uses classifiers to try to determine its schema. If successful, the crawler records metadata concerning the data source in the AWS Glue Data Catalog.

        • Name (string) --

          The name of the crawler.

        • Role (string) --

          The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.

        • Targets (dict) --

          A collection of targets to crawl.

          • S3Targets (list) --

            Specifies Amazon Simple Storage Service (Amazon S3) targets.

            • (dict) --

              Specifies a data store in Amazon Simple Storage Service (Amazon S3).

              • Path (string) --

                The path to the Amazon S3 target.

              • Exclusions (list) --

                A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

                • (string) --

              • ConnectionName (string) --

                The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).

          • JdbcTargets (list) --

            Specifies JDBC targets.

            • (dict) --

              Specifies a JDBC data store to crawl.

              • ConnectionName (string) --

                The name of the connection to use to connect to the JDBC target.

              • Path (string) --

                The path of the JDBC target.

              • Exclusions (list) --

                A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

                • (string) --

          • MongoDBTargets (list) --

            Specifies Amazon DocumentDB or MongoDB targets.

            • (dict) --

              Specifies an Amazon DocumentDB or MongoDB data store to crawl.

              • ConnectionName (string) --

                The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.

              • Path (string) --

                The path of the Amazon DocumentDB or MongoDB target (database/collection).

              • ScanAll (boolean) --

                Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

                A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

          • DynamoDBTargets (list) --

            Specifies Amazon DynamoDB targets.

            • (dict) --

              Specifies an Amazon DynamoDB table to crawl.

              • Path (string) --

                The name of the DynamoDB table to crawl.

              • scanAll (boolean) --

                Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

                A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

              • scanRate (float) --

                The percentage of the configured read capacity units to be used by the AWS Glue crawler. Read capacity units is a term defined by DynamoDB; it is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.

                The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables) or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).

          • CatalogTargets (list) --

            Specifies AWS Glue Data Catalog targets.

            • (dict) --

              Specifies an AWS Glue Data Catalog target.

              • DatabaseName (string) --

                The name of the database to be synchronized.

              • Tables (list) --

                A list of the tables to be synchronized.

                • (string) --

        • DatabaseName (string) --

          The name of the database in which the crawler's output is stored.

        • Description (string) --

          A description of the crawler.

        • Classifiers (list) --

          A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.

          • (string) --

        • RecrawlPolicy (dict) --

          A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

          • RecrawlBehavior (string) --

            Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.

            A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.

            A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.

        • SchemaChangePolicy (dict) --

          The policy that specifies update and delete behaviors for the crawler.

          • UpdateBehavior (string) --

            The update behavior when the crawler finds a changed schema.

          • DeleteBehavior (string) --

            The deletion behavior when the crawler finds a deleted object.

        • State (string) --

          Indicates whether the crawler is running, or whether a run is pending.

        • TablePrefix (string) --

          The prefix added to the names of tables that are created.

        • Schedule (dict) --

          For scheduled crawlers, the schedule when the crawler runs.

          • ScheduleExpression (string) --

            A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

          • State (string) --

            The state of the schedule.

        • CrawlElapsedTime (integer) --

          If the crawler is running, contains the total time elapsed since the last crawl began.

        • CreationTime (datetime) --

          The time that the crawler was created.

        • LastUpdated (datetime) --

          The time that the crawler was last updated.

        • LastCrawl (dict) --

          The status of the last crawl, and potentially error information if an error occurred.

          • Status (string) --

            Status of the last crawl.

          • ErrorMessage (string) --

            If an error occurred, the error information about the last crawl.

          • LogGroup (string) --

            The log group for the last crawl.

          • LogStream (string) --

            The log stream for the last crawl.

          • MessagePrefix (string) --

            The prefix for a message about this crawl.

          • StartTime (datetime) --

            The time at which the crawl started.

        • Version (integer) --

          The version of the crawler.

        • Configuration (string) --

          Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler .

        • CrawlerSecurityConfiguration (string) --

          The name of the SecurityConfiguration structure to be used by this crawler.

    • CrawlersNotFound (list) --

      A list of names of crawlers that were not found.

      • (string) --
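
A minimal sketch of the workflow described above, in which ListCrawlers supplies the names that are then passed to BatchGetCrawlers. It assumes default boto3 credentials and region; the crawler names are whatever exists in the account:

import boto3

glue = boto3.client('glue')

# Discover crawler names first, then fetch their full definitions in one call.
names = glue.list_crawlers(MaxResults=25)['CrawlerNames']
if names:
    response = glue.batch_get_crawlers(CrawlerNames=names)
    for crawler in response['Crawlers']:
        # RecrawlPolicy may be absent on crawlers that never set it;
        # CRAWL_EVERYTHING is assumed here as the effective default.
        behavior = crawler.get('RecrawlPolicy', {}).get('RecrawlBehavior', 'CRAWL_EVERYTHING')
        print(crawler['Name'], crawler['State'], behavior)
    for name in response['CrawlersNotFound']:
        print('Not found:', name)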

CreateCrawler (updated)
Changes (request)
{'RecrawlPolicy': {'RecrawlBehavior': 'CRAWL_EVERYTHING | '
                                      'CRAWL_NEW_FOLDERS_ONLY'}}

Creates a new crawler with specified targets, role, configuration, and optional schedule. At least one crawl target must be specified, in the S3Targets field, the JdbcTargets field, or the DynamoDBTargets field.

See also: AWS API Documentation

Request Syntax

client.create_crawler(
    Name='string',
    Role='string',
    DatabaseName='string',
    Description='string',
    Targets={
        'S3Targets': [
            {
                'Path': 'string',
                'Exclusions': [
                    'string',
                ],
                'ConnectionName': 'string'
            },
        ],
        'JdbcTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'Exclusions': [
                    'string',
                ]
            },
        ],
        'MongoDBTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'ScanAll': True|False
            },
        ],
        'DynamoDBTargets': [
            {
                'Path': 'string',
                'scanAll': True|False,
                'scanRate': 123.0
            },
        ],
        'CatalogTargets': [
            {
                'DatabaseName': 'string',
                'Tables': [
                    'string',
                ]
            },
        ]
    },
    Schedule='string',
    Classifiers=[
        'string',
    ],
    TablePrefix='string',
    SchemaChangePolicy={
        'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
        'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
    },
    RecrawlPolicy={
        'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'
    },
    Configuration='string',
    CrawlerSecurityConfiguration='string',
    Tags={
        'string': 'string'
    }
)
type Name

string

param Name

[REQUIRED]

Name of the new crawler.

type Role

string

param Role

[REQUIRED]

The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.

type DatabaseName

string

param DatabaseName

The AWS Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/* .

type Description

string

param Description

A description of the new crawler.

type Targets

dict

param Targets

[REQUIRED]

A collection of targets to crawl. (An example Targets value is sketched after this field list.)

  • S3Targets (list) --

    Specifies Amazon Simple Storage Service (Amazon S3) targets.

    • (dict) --

      Specifies a data store in Amazon Simple Storage Service (Amazon S3).

      • Path (string) --

        The path to the Amazon S3 target.

      • Exclusions (list) --

        A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

        • (string) --

      • ConnectionName (string) --

        The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).

  • JdbcTargets (list) --

    Specifies JDBC targets.

    • (dict) --

      Specifies a JDBC data store to crawl.

      • ConnectionName (string) --

        The name of the connection to use to connect to the JDBC target.

      • Path (string) --

        The path of the JDBC target.

      • Exclusions (list) --

        A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

        • (string) --

  • MongoDBTargets (list) --

    Specifies Amazon DocumentDB or MongoDB targets.

    • (dict) --

      Specifies an Amazon DocumentDB or MongoDB data store to crawl.

      • ConnectionName (string) --

        The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.

      • Path (string) --

        The path of the Amazon DocumentDB or MongoDB target (database/collection).

      • ScanAll (boolean) --

        Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

        A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

  • DynamoDBTargets (list) --

    Specifies Amazon DynamoDB targets.

    • (dict) --

      Specifies an Amazon DynamoDB table to crawl.

      • Path (string) --

        The name of the DynamoDB table to crawl.

      • scanAll (boolean) --

        Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

        A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

      • scanRate (float) --

        The percentage of the configured read capacity units to be used by the AWS Glue crawler. Read capacity units is a term defined by DynamoDB; it is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.

        The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables) or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).

  • CatalogTargets (list) --

    Specifies AWS Glue Data Catalog targets.

    • (dict) --

      Specifies an AWS Glue Data Catalog target.

      • DatabaseName (string) -- [REQUIRED]

        The name of the database to be synchronized.

      • Tables (list) -- [REQUIRED]

        A list of the tables to be synchronized.

        • (string) --
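
For illustration only, here is a hypothetical Targets value combining an Amazon S3 target that excludes temporary files with a DynamoDB target that samples rows at half the configured read capacity. The bucket and table names are placeholders, not values taken from this documentation; the dict would be passed as the Targets argument of create_crawler:

targets = {
    'S3Targets': [
        {
            'Path': 's3://example-bucket/raw/',
            # Glob patterns for objects to skip during the crawl.
            'Exclusions': ['**/_temporary/**', '**.tmp']
        },
    ],
    'DynamoDBTargets': [
        {
            'Path': 'example-table',
            'scanAll': False,   # sample rows rather than scanning every record
            'scanRate': 0.5     # use 0.5 of the configured read capacity units
        },
    ]
}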

type Schedule

string

param Schedule

A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

type Classifiers

list

param Classifiers

A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • (string) --

type TablePrefix

string

param TablePrefix

The table prefix used for catalog tables that are created.

type SchemaChangePolicy

dict

param SchemaChangePolicy

The policy for the crawler's update and deletion behavior.

  • UpdateBehavior (string) --

    The update behavior when the crawler finds a changed schema.

  • DeleteBehavior (string) --

    The deletion behavior when the crawler finds a deleted object.

type RecrawlPolicy

dict

param RecrawlPolicy

A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • RecrawlBehavior (string) --

    Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.

    A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.

    A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.

type Configuration

string

param Configuration

Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler .

type CrawlerSecurityConfiguration

string

param CrawlerSecurityConfiguration

The name of the SecurityConfiguration structure to be used by this crawler.

type Tags

dict

param Tags

The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in AWS Glue, see AWS Tags in AWS Glue in the developer guide.

  • (string) --

    • (string) --

rtype

dict

returns

Response Syntax

{}

Response Structure

  • (dict) --
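
A sketch of a CreateCrawler call that sets up an incremental Amazon S3 crawl using the new RecrawlPolicy field. The role ARN, database, bucket path, and crawler name are placeholders:

import boto3

glue = boto3.client('glue')

glue.create_crawler(
    Name='example-incremental-crawler',
    Role='arn:aws:iam::123456789012:role/ExampleGlueCrawlerRole',
    DatabaseName='example_db',
    Targets={
        'S3Targets': [
            {'Path': 's3://example-bucket/events/'},
        ]
    },
    # New in this release: only folders added since the last run are crawled.
    RecrawlPolicy={'RecrawlBehavior': 'CRAWL_NEW_FOLDERS_ONLY'},
    SchemaChangePolicy={
        'UpdateBehavior': 'LOG',
        'DeleteBehavior': 'LOG'
    },
    Schedule='cron(15 12 * * ? *)'  # daily at 12:15 UTC, per the Schedule example above
)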

GetCrawler (updated)
Changes (response)
{'Crawler': {'RecrawlPolicy': {'RecrawlBehavior': 'CRAWL_EVERYTHING | '
                                                  'CRAWL_NEW_FOLDERS_ONLY'}}}

Retrieves metadata for a specified crawler.

See also: AWS API Documentation

Request Syntax

client.get_crawler(
    Name='string'
)
type Name

string

param Name

[REQUIRED]

The name of the crawler to retrieve metadata for.

rtype

dict

returns

Response Syntax

{
    'Crawler': {
        'Name': 'string',
        'Role': 'string',
        'Targets': {
            'S3Targets': [
                {
                    'Path': 'string',
                    'Exclusions': [
                        'string',
                    ],
                    'ConnectionName': 'string'
                },
            ],
            'JdbcTargets': [
                {
                    'ConnectionName': 'string',
                    'Path': 'string',
                    'Exclusions': [
                        'string',
                    ]
                },
            ],
            'MongoDBTargets': [
                {
                    'ConnectionName': 'string',
                    'Path': 'string',
                    'ScanAll': True|False
                },
            ],
            'DynamoDBTargets': [
                {
                    'Path': 'string',
                    'scanAll': True|False,
                    'scanRate': 123.0
                },
            ],
            'CatalogTargets': [
                {
                    'DatabaseName': 'string',
                    'Tables': [
                        'string',
                    ]
                },
            ]
        },
        'DatabaseName': 'string',
        'Description': 'string',
        'Classifiers': [
            'string',
        ],
        'RecrawlPolicy': {
            'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'
        },
        'SchemaChangePolicy': {
            'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
            'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
        },
        'State': 'READY'|'RUNNING'|'STOPPING',
        'TablePrefix': 'string',
        'Schedule': {
            'ScheduleExpression': 'string',
            'State': 'SCHEDULED'|'NOT_SCHEDULED'|'TRANSITIONING'
        },
        'CrawlElapsedTime': 123,
        'CreationTime': datetime(2015, 1, 1),
        'LastUpdated': datetime(2015, 1, 1),
        'LastCrawl': {
            'Status': 'SUCCEEDED'|'CANCELLED'|'FAILED',
            'ErrorMessage': 'string',
            'LogGroup': 'string',
            'LogStream': 'string',
            'MessagePrefix': 'string',
            'StartTime': datetime(2015, 1, 1)
        },
        'Version': 123,
        'Configuration': 'string',
        'CrawlerSecurityConfiguration': 'string'
    }
}

Response Structure

  • (dict) --

    • Crawler (dict) --

      The metadata for the specified crawler.

      • Name (string) --

        The name of the crawler.

      • Role (string) --

        The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.

      • Targets (dict) --

        A collection of targets to crawl.

        • S3Targets (list) --

          Specifies Amazon Simple Storage Service (Amazon S3) targets.

          • (dict) --

            Specifies a data store in Amazon Simple Storage Service (Amazon S3).

            • Path (string) --

              The path to the Amazon S3 target.

            • Exclusions (list) --

              A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

              • (string) --

            • ConnectionName (string) --

              The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).

        • JdbcTargets (list) --

          Specifies JDBC targets.

          • (dict) --

            Specifies a JDBC data store to crawl.

            • ConnectionName (string) --

              The name of the connection to use to connect to the JDBC target.

            • Path (string) --

              The path of the JDBC target.

            • Exclusions (list) --

              A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

              • (string) --

        • MongoDBTargets (list) --

          Specifies Amazon DocumentDB or MongoDB targets.

          • (dict) --

            Specifies an Amazon DocumentDB or MongoDB data store to crawl.

            • ConnectionName (string) --

              The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.

            • Path (string) --

              The path of the Amazon DocumentDB or MongoDB target (database/collection).

            • ScanAll (boolean) --

              Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

              A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

        • DynamoDBTargets (list) --

          Specifies Amazon DynamoDB targets.

          • (dict) --

            Specifies an Amazon DynamoDB table to crawl.

            • Path (string) --

              The name of the DynamoDB table to crawl.

            • scanAll (boolean) --

              Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

              A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

            • scanRate (float) --

              The percentage of the configured read capacity units to be used by the AWS Glue crawler. Read capacity units is a term defined by DynamoDB; it is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.

              The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables) or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).

        • CatalogTargets (list) --

          Specifies AWS Glue Data Catalog targets.

          • (dict) --

            Specifies an AWS Glue Data Catalog target.

            • DatabaseName (string) --

              The name of the database to be synchronized.

            • Tables (list) --

              A list of the tables to be synchronized.

              • (string) --

      • DatabaseName (string) --

        The name of the database in which the crawler's output is stored.

      • Description (string) --

        A description of the crawler.

      • Classifiers (list) --

        A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.

        • (string) --

      • RecrawlPolicy (dict) --

        A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

        • RecrawlBehavior (string) --

          Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.

          A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.

          A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.

      • SchemaChangePolicy (dict) --

        The policy that specifies update and delete behaviors for the crawler.

        • UpdateBehavior (string) --

          The update behavior when the crawler finds a changed schema.

        • DeleteBehavior (string) --

          The deletion behavior when the crawler finds a deleted object.

      • State (string) --

        Indicates whether the crawler is running, or whether a run is pending.

      • TablePrefix (string) --

        The prefix added to the names of tables that are created.

      • Schedule (dict) --

        For scheduled crawlers, the schedule when the crawler runs.

        • ScheduleExpression (string) --

          A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

        • State (string) --

          The state of the schedule.

      • CrawlElapsedTime (integer) --

        If the crawler is running, contains the total time elapsed since the last crawl began.

      • CreationTime (datetime) --

        The time that the crawler was created.

      • LastUpdated (datetime) --

        The time that the crawler was last updated.

      • LastCrawl (dict) --

        The status of the last crawl, and potentially error information if an error occurred.

        • Status (string) --

          Status of the last crawl.

        • ErrorMessage (string) --

          If an error occurred, the error information about the last crawl.

        • LogGroup (string) --

          The log group for the last crawl.

        • LogStream (string) --

          The log stream for the last crawl.

        • MessagePrefix (string) --

          The prefix for a message about this crawl.

        • StartTime (datetime) --

          The time at which the crawl started.

      • Version (integer) --

        The version of the crawler.

      • Configuration (string) --

        Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler .

      • CrawlerSecurityConfiguration (string) --

        The name of the SecurityConfiguration structure to be used by this crawler.
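
A short sketch of reading one crawler's state, recrawl behavior, and last-crawl status via GetCrawler; the crawler name is hypothetical:

import boto3

glue = boto3.client('glue')

crawler = glue.get_crawler(Name='example-incremental-crawler')['Crawler']

print(crawler['State'])  # READY | RUNNING | STOPPING
print(crawler.get('RecrawlPolicy', {}).get('RecrawlBehavior'))

last_crawl = crawler.get('LastCrawl')
if last_crawl:
    # Status is SUCCEEDED, CANCELLED, or FAILED; ErrorMessage is present only on errors.
    print(last_crawl['Status'], last_crawl.get('ErrorMessage', ''))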

GetCrawlers (updated)
Changes (response)
{'Crawlers': {'RecrawlPolicy': {'RecrawlBehavior': 'CRAWL_EVERYTHING | '
                                                   'CRAWL_NEW_FOLDERS_ONLY'}}}

Retrieves metadata for all crawlers defined in the customer account.

See also: AWS API Documentation

Request Syntax

client.get_crawlers(
    MaxResults=123,
    NextToken='string'
)
type MaxResults

integer

param MaxResults

The number of crawlers to return on each call.

type NextToken

string

param NextToken

A continuation token, if this is a continuation request.

rtype

dict

returns

Response Syntax

{
    'Crawlers': [
        {
            'Name': 'string',
            'Role': 'string',
            'Targets': {
                'S3Targets': [
                    {
                        'Path': 'string',
                        'Exclusions': [
                            'string',
                        ],
                        'ConnectionName': 'string'
                    },
                ],
                'JdbcTargets': [
                    {
                        'ConnectionName': 'string',
                        'Path': 'string',
                        'Exclusions': [
                            'string',
                        ]
                    },
                ],
                'MongoDBTargets': [
                    {
                        'ConnectionName': 'string',
                        'Path': 'string',
                        'ScanAll': True|False
                    },
                ],
                'DynamoDBTargets': [
                    {
                        'Path': 'string',
                        'scanAll': True|False,
                        'scanRate': 123.0
                    },
                ],
                'CatalogTargets': [
                    {
                        'DatabaseName': 'string',
                        'Tables': [
                            'string',
                        ]
                    },
                ]
            },
            'DatabaseName': 'string',
            'Description': 'string',
            'Classifiers': [
                'string',
            ],
            'RecrawlPolicy': {
                'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'
            },
            'SchemaChangePolicy': {
                'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
                'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
            },
            'State': 'READY'|'RUNNING'|'STOPPING',
            'TablePrefix': 'string',
            'Schedule': {
                'ScheduleExpression': 'string',
                'State': 'SCHEDULED'|'NOT_SCHEDULED'|'TRANSITIONING'
            },
            'CrawlElapsedTime': 123,
            'CreationTime': datetime(2015, 1, 1),
            'LastUpdated': datetime(2015, 1, 1),
            'LastCrawl': {
                'Status': 'SUCCEEDED'|'CANCELLED'|'FAILED',
                'ErrorMessage': 'string',
                'LogGroup': 'string',
                'LogStream': 'string',
                'MessagePrefix': 'string',
                'StartTime': datetime(2015, 1, 1)
            },
            'Version': 123,
            'Configuration': 'string',
            'CrawlerSecurityConfiguration': 'string'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • Crawlers (list) --

      A list of crawler metadata.

      • (dict) --

        Specifies a crawler program that examines a data source and uses classifiers to try to determine its schema. If successful, the crawler records metadata concerning the data source in the AWS Glue Data Catalog.

        • Name (string) --

          The name of the crawler.

        • Role (string) --

          The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.

        • Targets (dict) --

          A collection of targets to crawl.

          • S3Targets (list) --

            Specifies Amazon Simple Storage Service (Amazon S3) targets.

            • (dict) --

              Specifies a data store in Amazon Simple Storage Service (Amazon S3).

              • Path (string) --

                The path to the Amazon S3 target.

              • Exclusions (list) --

                A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

                • (string) --

              • ConnectionName (string) --

                The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).

          • JdbcTargets (list) --

            Specifies JDBC targets.

            • (dict) --

              Specifies a JDBC data store to crawl.

              • ConnectionName (string) --

                The name of the connection to use to connect to the JDBC target.

              • Path (string) --

                The path of the JDBC target.

              • Exclusions (list) --

                A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

                • (string) --

          • MongoDBTargets (list) --

            Specifies Amazon DocumentDB or MongoDB targets.

            • (dict) --

              Specifies an Amazon DocumentDB or MongoDB data store to crawl.

              • ConnectionName (string) --

                The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.

              • Path (string) --

                The path of the Amazon DocumentDB or MongoDB target (database/collection).

              • ScanAll (boolean) --

                Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

                A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

          • DynamoDBTargets (list) --

            Specifies Amazon DynamoDB targets.

            • (dict) --

              Specifies an Amazon DynamoDB table to crawl.

              • Path (string) --

                The name of the DynamoDB table to crawl.

              • scanAll (boolean) --

                Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

                A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

              • scanRate (float) --

                The percentage of the configured read capacity units to be used by the AWS Glue crawler. Read capacity units is a term defined by DynamoDB; it is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.

                The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables) or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).

          • CatalogTargets (list) --

            Specifies AWS Glue Data Catalog targets.

            • (dict) --

              Specifies an AWS Glue Data Catalog target.

              • DatabaseName (string) --

                The name of the database to be synchronized.

              • Tables (list) --

                A list of the tables to be synchronized.

                • (string) --

        • DatabaseName (string) --

          The name of the database in which the crawler's output is stored.

        • Description (string) --

          A description of the crawler.

        • Classifiers (list) --

          A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.

          • (string) --

        • RecrawlPolicy (dict) --

          A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

          • RecrawlBehavior (string) --

            Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.

            A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.

            A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.

        • SchemaChangePolicy (dict) --

          The policy that specifies update and delete behaviors for the crawler.

          • UpdateBehavior (string) --

            The update behavior when the crawler finds a changed schema.

          • DeleteBehavior (string) --

            The deletion behavior when the crawler finds a deleted object.

        • State (string) --

          Indicates whether the crawler is running, or whether a run is pending.

        • TablePrefix (string) --

          The prefix added to the names of tables that are created.

        • Schedule (dict) --

          For scheduled crawlers, the schedule when the crawler runs.

          • ScheduleExpression (string) --

            A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

          • State (string) --

            The state of the schedule.

        • CrawlElapsedTime (integer) --

          If the crawler is running, contains the total time elapsed since the last crawl began.

        • CreationTime (datetime) --

          The time that the crawler was created.

        • LastUpdated (datetime) --

          The time that the crawler was last updated.

        • LastCrawl (dict) --

          The status of the last crawl, and potentially error information if an error occurred.

          • Status (string) --

            Status of the last crawl.

          • ErrorMessage (string) --

            If an error occurred, the error information about the last crawl.

          • LogGroup (string) --

            The log group for the last crawl.

          • LogStream (string) --

            The log stream for the last crawl.

          • MessagePrefix (string) --

            The prefix for a message about this crawl.

          • StartTime (datetime) --

            The time at which the crawl started.

        • Version (integer) --

          The version of the crawler.

        • Configuration (string) --

          Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler .

        • CrawlerSecurityConfiguration (string) --

          The name of the SecurityConfiguration structure to be used by this crawler.

    • NextToken (string) --

      A continuation token, if the returned list has not reached the end of those defined in this customer account.
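
A sketch of paging through every crawler in the account with GetCrawlers, following the NextToken continuation pattern documented above:

import boto3

glue = boto3.client('glue')

crawlers = []
kwargs = {'MaxResults': 100}
while True:
    page = glue.get_crawlers(**kwargs)
    crawlers.extend(page['Crawlers'])
    token = page.get('NextToken')
    if not token:
        break  # no continuation token means the listing is complete
    kwargs['NextToken'] = token

print(len(crawlers), 'crawlers defined in this account')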

UpdateCrawler (updated)
Changes (request)
{'RecrawlPolicy': {'RecrawlBehavior': 'CRAWL_EVERYTHING | '
                                      'CRAWL_NEW_FOLDERS_ONLY'}}

Updates a crawler. If a crawler is running, you must stop it using StopCrawler before updating it.

See also: AWS API Documentation

Request Syntax

client.update_crawler(
    Name='string',
    Role='string',
    DatabaseName='string',
    Description='string',
    Targets={
        'S3Targets': [
            {
                'Path': 'string',
                'Exclusions': [
                    'string',
                ],
                'ConnectionName': 'string'
            },
        ],
        'JdbcTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'Exclusions': [
                    'string',
                ]
            },
        ],
        'MongoDBTargets': [
            {
                'ConnectionName': 'string',
                'Path': 'string',
                'ScanAll': True|False
            },
        ],
        'DynamoDBTargets': [
            {
                'Path': 'string',
                'scanAll': True|False,
                'scanRate': 123.0
            },
        ],
        'CatalogTargets': [
            {
                'DatabaseName': 'string',
                'Tables': [
                    'string',
                ]
            },
        ]
    },
    Schedule='string',
    Classifiers=[
        'string',
    ],
    TablePrefix='string',
    SchemaChangePolicy={
        'UpdateBehavior': 'LOG'|'UPDATE_IN_DATABASE',
        'DeleteBehavior': 'LOG'|'DELETE_FROM_DATABASE'|'DEPRECATE_IN_DATABASE'
    },
    RecrawlPolicy={
        'RecrawlBehavior': 'CRAWL_EVERYTHING'|'CRAWL_NEW_FOLDERS_ONLY'
    },
    Configuration='string',
    CrawlerSecurityConfiguration='string'
)
type Name

string

param Name

[REQUIRED]

Name of the crawler to update.

type Role

string

param Role

The IAM role or Amazon Resource Name (ARN) of an IAM role that is used by the crawler to access customer resources.

type DatabaseName

string

param DatabaseName

The AWS Glue database where results are stored, such as: arn:aws:daylight:us-east-1::database/sometable/* .

type Description

string

param Description

A description of the crawler.

type Targets

dict

param Targets

A list of targets to crawl.

  • S3Targets (list) --

    Specifies Amazon Simple Storage Service (Amazon S3) targets.

    • (dict) --

      Specifies a data store in Amazon Simple Storage Service (Amazon S3).

      • Path (string) --

        The path to the Amazon S3 target.

      • Exclusions (list) --

        A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

        • (string) --

      • ConnectionName (string) --

        The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).

  • JdbcTargets (list) --

    Specifies JDBC targets.

    • (dict) --

      Specifies a JDBC data store to crawl.

      • ConnectionName (string) --

        The name of the connection to use to connect to the JDBC target.

      • Path (string) --

        The path of the JDBC target.

      • Exclusions (list) --

        A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.

        • (string) --

  • MongoDBTargets (list) --

    Specifies Amazon DocumentDB or MongoDB targets.

    • (dict) --

      Specifies an Amazon DocumentDB or MongoDB data store to crawl.

      • ConnectionName (string) --

        The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.

      • Path (string) --

        The path of the Amazon DocumentDB or MongoDB target (database/collection).

      • ScanAll (boolean) --

        Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

        A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

  • DynamoDBTargets (list) --

    Specifies Amazon DynamoDB targets.

    • (dict) --

      Specifies an Amazon DynamoDB table to crawl.

      • Path (string) --

        The name of the DynamoDB table to crawl.

      • scanAll (boolean) --

        Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.

        A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true .

      • scanRate (float) --

        The percentage of the configured read capacity units to be used by the AWS Glue crawler. Read capacity units is a term defined by DynamoDB; it is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.

        The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured read capacity units (for provisioned tables) or 0.25 of the maximum configured read capacity units (for tables using on-demand mode).

  • CatalogTargets (list) --

    Specifies AWS Glue Data Catalog targets.

    • (dict) --

      Specifies an AWS Glue Data Catalog target.

      • DatabaseName (string) -- [REQUIRED]

        The name of the database to be synchronized.

      • Tables (list) -- [REQUIRED]

        A list of the tables to be synchronized.

        • (string) --

type Schedule

string

param Schedule

A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).

type Classifiers

list

param Classifiers

A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.

  • (string) --

type TablePrefix

string

param TablePrefix

The table prefix used for catalog tables that are created.

type SchemaChangePolicy

dict

param SchemaChangePolicy

The policy for the crawler's update and deletion behavior.

  • UpdateBehavior (string) --

    The update behavior when the crawler finds a changed schema.

  • DeleteBehavior (string) --

    The deletion behavior when the crawler finds a deleted object.

type RecrawlPolicy

dict

param RecrawlPolicy

A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.

  • RecrawlBehavior (string) --

    Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.

    A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.

    A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.

type Configuration

string

param Configuration

Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler .

type CrawlerSecurityConfiguration

string

param CrawlerSecurityConfiguration

The name of the SecurityConfiguration structure to be used by this crawler.

rtype

dict

returns

Response Syntax

{}

Response Structure

  • (dict) --
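
A sketch of switching an existing crawler to incremental crawls with UpdateCrawler. As noted above, a running crawler must first be stopped with StopCrawler (and allowed to reach the READY state) before it can be updated; the crawler name is a placeholder:

import boto3

glue = boto3.client('glue')

# Only the fields being changed need to be supplied alongside Name.
glue.update_crawler(
    Name='example-incremental-crawler',
    RecrawlPolicy={'RecrawlBehavior': 'CRAWL_NEW_FOLDERS_ONLY'}
)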