Amazon QuickSight

2023/04/07 - Amazon QuickSight - 8 new and 5 updated API methods

Changes  This release has two changes: adding the OR condition to tag-based RLS rules in CreateDataSet and UpdateDataSet; and adding RefreshSchedule and DataSetRefreshProperties operations so that users can programmatically configure SPICE dataset ingestions.

DeleteRefreshSchedule (new) Link ¶

Deletes a refresh schedule from a dataset.

See also: AWS API Documentation

Request Syntax

client.delete_refresh_schedule(
    DataSetId='string',
    AwsAccountId='string',
    ScheduleId='string'
)
type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type ScheduleId

string

param ScheduleId

[REQUIRED]

The ID of the refresh schedule.

rtype

dict

returns

Response Syntax

{
    'Status': 123,
    'RequestId': 'string',
    'ScheduleId': 'string',
    'Arn': 'string'
}

Response Structure

  • (dict) --

    • Status (integer) --

      The HTTP status of the request.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • ScheduleId (string) --

      The ID of the refresh schedule.

    • Arn (string) --

      The Amazon Resource Name (ARN) for the refresh schedule.
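
The snippet below is a minimal boto3 sketch of this call; the account ID, dataset ID, and schedule ID are hypothetical placeholders.

import boto3

quicksight = boto3.client('quicksight')

# Remove one schedule from a dataset (placeholder IDs).
response = quicksight.delete_refresh_schedule(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    ScheduleId='my-schedule-id'
)
print(response['Status'], response['ScheduleId'])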

DeleteDataSetRefreshProperties (new) Link ¶

Deletes the dataset refresh properties of the dataset.

See also: AWS API Documentation

Request Syntax

client.delete_data_set_refresh_properties(
    AwsAccountId='string',
    DataSetId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

rtype

dict

returns

Response Syntax

{
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
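
A minimal boto3 sketch of this call; the account and dataset IDs are hypothetical placeholders. Deleting the refresh properties removes the lookback-window configuration shown under PutDataSetRefreshProperties below.

import boto3

quicksight = boto3.client('quicksight')

# Remove the dataset's refresh properties (placeholder IDs).
response = quicksight.delete_data_set_refresh_properties(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id'
)
print(response['Status'])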

CreateRefreshSchedule (new) Link ¶

Creates a refresh schedule for a dataset. You can create up to 5 different schedules for a single dataset.

See also: AWS API Documentation

Request Syntax

client.create_refresh_schedule(
    DataSetId='string',
    AwsAccountId='string',
    Schedule={
        'ScheduleId': 'string',
        'ScheduleFrequency': {
            'Interval': 'MINUTE15'|'MINUTE30'|'HOURLY'|'DAILY'|'WEEKLY'|'MONTHLY',
            'RefreshOnDay': {
                'DayOfWeek': 'SUNDAY'|'MONDAY'|'TUESDAY'|'WEDNESDAY'|'THURSDAY'|'FRIDAY'|'SATURDAY',
                'DayOfMonth': 'string'
            },
            'Timezone': 'string',
            'TimeOfTheDay': 'string'
        },
        'StartAfterDateTime': datetime(2015, 1, 1),
        'RefreshType': 'INCREMENTAL_REFRESH'|'FULL_REFRESH',
        'Arn': 'string'
    }
)
type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type Schedule

dict

param Schedule

[REQUIRED]

The refresh schedule.

  • ScheduleId (string) -- [REQUIRED]

    An identifier for the refresh schedule.

  • ScheduleFrequency (dict) -- [REQUIRED]

    The frequency for the refresh schedule.

    • Interval (string) -- [REQUIRED]

      The interval between scheduled refreshes. Valid values are as follows:

      • MINUTE15 : The dataset refreshes every 15 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

      • MINUTE30 : The dataset refreshes every 30 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

      • HOURLY : The dataset refreshes every hour. This interval can only be used for one schedule per dataset.

      • DAILY : The dataset refreshes every day.

      • WEEKLY : The dataset refreshes every week.

      • MONTHLY : The dataset refreshes every month.

    • RefreshOnDay (dict) --

      The day of the week or day of the month on which you want to schedule the refresh. This value is required for weekly and monthly refresh intervals.

      • DayOfWeek (string) --

        The day of the week that you want to schedule a refresh on.

      • DayOfMonth (string) --

        The day of the month that you want to schedule a refresh on.

    • Timezone (string) --

      The timezone that you want the refresh schedule to use. The timezone ID must match an ID returned by java.util.TimeZone.getAvailableIDs().

    • TimeOfTheDay (string) --

      The time of day that you want the dataset to refresh. This value is expressed in HH:MM format. This field is not required for schedules that refresh hourly.

  • StartAfterDateTime (datetime) --

    Time after which the refresh schedule can be started, expressed in YYYY-MM-DDTHH:MM:SS format.

  • RefreshType (string) -- [REQUIRED]

    The type of refresh that a dataset undergoes. Valid values are as follows:

    • FULL_REFRESH : A complete refresh of a dataset.

    • INCREMENTAL_REFRESH : A partial refresh of some rows of a dataset, based on the time window specified.

    For more information on full and incremental refreshes, see Refreshing SPICE data in the Amazon QuickSight User Guide .

  • Arn (string) --

    The Amazon Resource Name (ARN) for the refresh schedule.

rtype

dict

returns

Response Syntax

{
    'Status': 123,
    'RequestId': 'string',
    'ScheduleId': 'string',
    'Arn': 'string'
}

Response Structure

  • (dict) --

    • Status (integer) --

      The HTTP status of the request.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • ScheduleId (string) --

      The ID of the refresh schedule.

    • Arn (string) --

      The Amazon Resource Name (ARN) for the refresh schedule.
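
As a usage illustration, the following boto3 sketch creates a weekly full refresh; the account ID, dataset ID, schedule ID, and timezone are placeholder choices.

import boto3

quicksight = boto3.client('quicksight')

# Create a full refresh every Monday at 08:00 in the Europe/London timezone.
# All IDs are hypothetical placeholders.
response = quicksight.create_refresh_schedule(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    Schedule={
        'ScheduleId': 'weekly-full-refresh',
        'ScheduleFrequency': {
            'Interval': 'WEEKLY',
            'RefreshOnDay': {'DayOfWeek': 'MONDAY'},
            'Timezone': 'Europe/London',
            'TimeOfTheDay': '08:00'
        },
        'RefreshType': 'FULL_REFRESH'
    }
)
print(response['ScheduleId'], response['Arn'])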

UpdateRefreshSchedule (new) Link ¶

Updates a refresh schedule for a dataset.

See also: AWS API Documentation

Request Syntax

client.update_refresh_schedule(
    DataSetId='string',
    AwsAccountId='string',
    Schedule={
        'ScheduleId': 'string',
        'ScheduleFrequency': {
            'Interval': 'MINUTE15'|'MINUTE30'|'HOURLY'|'DAILY'|'WEEKLY'|'MONTHLY',
            'RefreshOnDay': {
                'DayOfWeek': 'SUNDAY'|'MONDAY'|'TUESDAY'|'WEDNESDAY'|'THURSDAY'|'FRIDAY'|'SATURDAY',
                'DayOfMonth': 'string'
            },
            'Timezone': 'string',
            'TimeOfTheDay': 'string'
        },
        'StartAfterDateTime': datetime(2015, 1, 1),
        'RefreshType': 'INCREMENTAL_REFRESH'|'FULL_REFRESH',
        'Arn': 'string'
    }
)
type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type Schedule

dict

param Schedule

[REQUIRED]

The refresh schedule.

  • ScheduleId (string) -- [REQUIRED]

    An identifier for the refresh schedule.

  • ScheduleFrequency (dict) -- [REQUIRED]

    The frequency for the refresh schedule.

    • Interval (string) -- [REQUIRED]

      The interval between scheduled refreshes. Valid values are as follows:

      • MINUTE15 : The dataset refreshes every 15 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

      • MINUTE30 : The dataset refreshes every 30 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

      • HOURLY : The dataset refreshes every hour. This interval can only be used for one schedule per dataset.

      • DAILY : The dataset refreshes every day.

      • WEEKLY : The dataset refreshes every week.

      • MONTHLY : The dataset refreshes every month.

    • RefreshOnDay (dict) --

      The day of the week or day of the month on which you want to schedule the refresh. This value is required for weekly and monthly refresh intervals.

      • DayOfWeek (string) --

        The day of the week that you want to schedule a refresh on.

      • DayOfMonth (string) --

        The day of the month that you want to schedule a refresh on.

    • Timezone (string) --

      The timezone that you want the refresh schedule to use. The timezone ID must match an ID returned by java.util.TimeZone.getAvailableIDs().

    • TimeOfTheDay (string) --

      The time of day that you want the dataset to refresh. This value is expressed in HH:MM format. This field is not required for schedules that refresh hourly.

  • StartAfterDateTime (datetime) --

    Time after which the refresh schedule can be started, expressed in YYYY-MM-DDTHH:MM:SS format.

  • RefreshType (string) -- [REQUIRED]

    The type of refresh that a dataset undergoes. Valid values are as follows:

    • FULL_REFRESH : A complete refresh of a dataset.

    • INCREMENTAL_REFRESH : A partial refresh of some rows of a dataset, based on the time window specified.

    For more information on full and incremental refreshes, see Refreshing SPICE data in the Amazon QuickSight User Guide .

  • Arn (string) --

    The Amazon Resource Name (ARN) for the refresh schedule.

rtype

dict

returns

Response Syntax

{
    'Status': 123,
    'RequestId': 'string',
    'ScheduleId': 'string',
    'Arn': 'string'
}

Response Structure

  • (dict) --

    • Status (integer) --

      The HTTP status of the request.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • ScheduleId (string) --

      The ID of the refresh schedule.

    • Arn (string) --

      The Amazon Resource Name (ARN) for the refresh schedule.
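
The sketch below switches the schedule created earlier to a daily incremental refresh; the IDs are placeholders, and an incremental schedule presumes the dataset's lookback window has already been configured with PutDataSetRefreshProperties.

import boto3

quicksight = boto3.client('quicksight')

# Change an existing schedule to a daily incremental refresh at 02:30 UTC.
response = quicksight.update_refresh_schedule(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    Schedule={
        'ScheduleId': 'weekly-full-refresh',
        'ScheduleFrequency': {
            'Interval': 'DAILY',
            'Timezone': 'UTC',
            'TimeOfTheDay': '02:30'
        },
        'RefreshType': 'INCREMENTAL_REFRESH'
    }
)
print(response['Status'])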

DescribeDataSetRefreshProperties (new) Link ¶

Describes the refresh properties of a dataset.

See also: AWS API Documentation

Request Syntax

client.describe_data_set_refresh_properties(
    AwsAccountId='string',
    DataSetId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

rtype

dict

returns

Response Syntax

{
    'RequestId': 'string',
    'Status': 123,
    'DataSetRefreshProperties': {
        'RefreshConfiguration': {
            'IncrementalRefresh': {
                'LookbackWindow': {
                    'ColumnName': 'string',
                    'Size': 123,
                    'SizeUnit': 'HOUR'|'DAY'|'WEEK'
                }
            }
        }
    }
}

Response Structure

  • (dict) --

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.

    • DataSetRefreshProperties (dict) --

      The dataset refresh properties.

      • RefreshConfiguration (dict) --

        The refresh configuration for a dataset.

        • IncrementalRefresh (dict) --

          The incremental refresh for the dataset.

          • LookbackWindow (dict) --

            The lookback window setup for an incremental refresh configuration.

            • ColumnName (string) --

              The name of the lookback window column.

            • Size (integer) --

              The lookback window column size.

            • SizeUnit (string) --

              The size unit that is used for the lookback window column. Valid values for this structure are HOUR , DAY , and WEEK .
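
A short boto3 sketch of reading back the configured lookback window; the account and dataset IDs are hypothetical placeholders.

import boto3

quicksight = boto3.client('quicksight')

# Inspect the lookback window configured for incremental refreshes.
props = quicksight.describe_data_set_refresh_properties(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id'
)
window = (props['DataSetRefreshProperties']
               ['RefreshConfiguration']
               ['IncrementalRefresh']
               ['LookbackWindow'])
print(window['ColumnName'], window['Size'], window['SizeUnit'])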

PutDataSetRefreshProperties (new) Link ¶

Creates or updates the dataset refresh properties for the dataset.

See also: AWS API Documentation

Request Syntax

client.put_data_set_refresh_properties(
    AwsAccountId='string',
    DataSetId='string',
    DataSetRefreshProperties={
        'RefreshConfiguration': {
            'IncrementalRefresh': {
                'LookbackWindow': {
                    'ColumnName': 'string',
                    'Size': 123,
                    'SizeUnit': 'HOUR'|'DAY'|'WEEK'
                }
            }
        }
    }
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

type DataSetRefreshProperties

dict

param DataSetRefreshProperties

[REQUIRED]

The dataset refresh properties.

  • RefreshConfiguration (dict) -- [REQUIRED]

    The refresh configuration for a dataset.

    • IncrementalRefresh (dict) -- [REQUIRED]

      The incremental refresh for the dataset.

      • LookbackWindow (dict) -- [REQUIRED]

        The lookback window setup for an incremental refresh configuration.

        • ColumnName (string) -- [REQUIRED]

          The name of the lookback window column.

        • Size (integer) -- [REQUIRED]

          The lookback window column size.

        • SizeUnit (string) -- [REQUIRED]

          The size unit that is used for the lookback window column. Valid values for this structure are HOUR , DAY , and WEEK .

rtype

dict

returns

Response Syntax

{
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
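
The sketch below configures a one-day lookback window; the IDs and the column name order_timestamp are hypothetical placeholders.

import boto3

quicksight = boto3.client('quicksight')

# Configure incremental refreshes to re-ingest only the trailing day of data,
# keyed on a datetime column. IDs and column name are placeholders.
quicksight.put_data_set_refresh_properties(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    DataSetRefreshProperties={
        'RefreshConfiguration': {
            'IncrementalRefresh': {
                'LookbackWindow': {
                    'ColumnName': 'order_timestamp',
                    'Size': 1,
                    'SizeUnit': 'DAY'
                }
            }
        }
    }
)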

ListRefreshSchedules (new) Link ¶

Lists the refresh schedules of a dataset. Each dataset can have up to 5 schedules.

See also: AWS API Documentation

Request Syntax

client.list_refresh_schedules(
    AwsAccountId='string',
    DataSetId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

rtype

dict

returns

Response Syntax

{
    'RefreshSchedules': [
        {
            'ScheduleId': 'string',
            'ScheduleFrequency': {
                'Interval': 'MINUTE15'|'MINUTE30'|'HOURLY'|'DAILY'|'WEEKLY'|'MONTHLY',
                'RefreshOnDay': {
                    'DayOfWeek': 'SUNDAY'|'MONDAY'|'TUESDAY'|'WEDNESDAY'|'THURSDAY'|'FRIDAY'|'SATURDAY',
                    'DayOfMonth': 'string'
                },
                'Timezone': 'string',
                'TimeOfTheDay': 'string'
            },
            'StartAfterDateTime': datetime(2015, 1, 1),
            'RefreshType': 'INCREMENTAL_REFRESH'|'FULL_REFRESH',
            'Arn': 'string'
        },
    ],
    'Status': 123,
    'RequestId': 'string'
}

Response Structure

  • (dict) --

    • RefreshSchedules (list) --

      The list of refresh schedules for the dataset.

      • (dict) --

        A list of RefreshSchedule objects.

        • ScheduleId (string) --

          An identifier for the refresh schedule.

        • ScheduleFrequency (dict) --

          The frequency for the refresh schedule.

          • Interval (string) --

            The interval between scheduled refreshes. Valid values are as follows:

            • MINUTE15 : The dataset refreshes every 15 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

            • MINUTE30 : The dataset refreshes every 30 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

            • HOURLY : The dataset refreshes every hour. This interval can only be used for one schedule per dataset.

            • DAILY : The dataset refreshes every day.

            • WEEKLY : The dataset refreshes every week.

            • MONTHLY : The dataset refreshes every month.

          • RefreshOnDay (dict) --

            The day of the week or day of the month on which you want to schedule the refresh. This value is required for weekly and monthly refresh intervals.

            • DayOfWeek (string) --

              The day of the week that you want to schedule a refresh on.

            • DayOfMonth (string) --

              The day of the month that you want to schedule a refresh on.

          • Timezone (string) --

            The timezone that you want the refresh schedule to use. The timezone ID must match an ID returned by java.util.TimeZone.getAvailableIDs().

          • TimeOfTheDay (string) --

            The time of day that you want the dataset to refresh. This value is expressed in HH:MM format. This field is not required for schedules that refresh hourly.

        • StartAfterDateTime (datetime) --

          Time after which the refresh schedule can be started, expressed in YYYY-MM-DDTHH:MM:SS format.

        • RefreshType (string) --

          The type of refresh that a dataset undergoes. Valid values are as follows:

          • FULL_REFRESH : A complete refresh of a dataset.

          • INCREMENTAL_REFRESH : A partial refresh of some rows of a dataset, based on the time window specified.

          For more information on full and incremental refreshes, see Refreshing SPICE data in the Amazon QuickSight User Guide .

        • Arn (string) --

          The Amazon Resource Name (ARN) for the refresh schedule.

    • Status (integer) --

      The HTTP status of the request.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.
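
A minimal boto3 sketch that prints each schedule on the dataset; the IDs are hypothetical placeholders.

import boto3

quicksight = boto3.client('quicksight')

# List every schedule attached to the dataset (at most 5 per dataset).
result = quicksight.list_refresh_schedules(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id'
)
for schedule in result['RefreshSchedules']:
    print(schedule['ScheduleId'],
          schedule['ScheduleFrequency']['Interval'],
          schedule['RefreshType'])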

DescribeRefreshSchedule (new) Link ¶

Provides a summary of a refresh schedule.

See also: AWS API Documentation

Request Syntax

client.describe_refresh_schedule(
    AwsAccountId='string',
    DataSetId='string',
    ScheduleId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset.

type ScheduleId

string

param ScheduleId

[REQUIRED]

The ID of the refresh schedule.

rtype

dict

returns

Response Syntax

{
    'RefreshSchedule': {
        'ScheduleId': 'string',
        'ScheduleFrequency': {
            'Interval': 'MINUTE15'|'MINUTE30'|'HOURLY'|'DAILY'|'WEEKLY'|'MONTHLY',
            'RefreshOnDay': {
                'DayOfWeek': 'SUNDAY'|'MONDAY'|'TUESDAY'|'WEDNESDAY'|'THURSDAY'|'FRIDAY'|'SATURDAY',
                'DayOfMonth': 'string'
            },
            'Timezone': 'string',
            'TimeOfTheDay': 'string'
        },
        'StartAfterDateTime': datetime(2015, 1, 1),
        'RefreshType': 'INCREMENTAL_REFRESH'|'FULL_REFRESH',
        'Arn': 'string'
    },
    'Status': 123,
    'RequestId': 'string',
    'Arn': 'string'
}

Response Structure

  • (dict) --

    • RefreshSchedule (dict) --

      The refresh schedule.

      • ScheduleId (string) --

        An identifier for the refresh schedule.

      • ScheduleFrequency (dict) --

        The frequency for the refresh schedule.

        • Interval (string) --

          The interval between scheduled refreshes. Valid values are as follows:

          • MINUTE15 : The dataset refreshes every 15 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

          • MINUTE30 : The dataset refreshes every 30 minutes. This value is only supported for incremental refreshes. This interval can only be used for one schedule per dataset.

          • HOURLY : The dataset refreshes every hour. This interval can only be used for one schedule per dataset.

          • DAILY : The dataset refreshes every day.

          • WEEKLY : The dataset refreshes every week.

          • MONTHLY : The dataset refreshes every month.

        • RefreshOnDay (dict) --

          The day of the week or day of the month on which you want to schedule the refresh. This value is required for weekly and monthly refresh intervals.

          • DayOfWeek (string) --

            The day of the week that you want to schedule a refresh on.

          • DayOfMonth (string) --

            The day of the month that you want to schedule a refresh on.

        • Timezone (string) --

          The timezone that you want the refresh schedule to use. The timezone ID must match an ID returned by java.util.TimeZone.getAvailableIDs().

        • TimeOfTheDay (string) --

          The time of day that you want the dataset to refresh. This value is expressed in HH:MM format. This field is not required for schedules that refresh hourly.

      • StartAfterDateTime (datetime) --

        Time after which the refresh schedule can be started, expressed in YYYY-MM-DDTHH:MM:SS format.

      • RefreshType (string) --

        The type of refresh that a dataset undergoes. Valid values are as follows:

        • FULL_REFRESH : A complete refresh of a dataset.

        • INCREMENTAL_REFRESH : A partial refresh of some rows of a dataset, based on the time window specified.

        For more information on full and incremental refreshes, see Refreshing SPICE data in the Amazon QuickSight User Guide .

      • Arn (string) --

        The Amazon Resource Name (ARN) for the refresh schedule.

    • Status (integer) --

      The HTTP status of the request.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Arn (string) --

      The Amazon Resource Name (ARN) for the refresh schedule.
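
A minimal boto3 sketch of fetching a single schedule by its ID; the IDs are hypothetical placeholders.

import boto3

quicksight = boto3.client('quicksight')

# Retrieve one schedule and print its refresh type and frequency settings.
detail = quicksight.describe_refresh_schedule(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    ScheduleId='weekly-full-refresh'
)
schedule = detail['RefreshSchedule']
print(schedule['RefreshType'], schedule['ScheduleFrequency'])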

CreateDataSet (updated) Link ¶
Changes (request)
{'RowLevelPermissionTagConfiguration': {'TagRuleConfigurations': [['string']]}}

Creates a dataset. This operation doesn't support datasets that include uploaded files as a source.

See also: AWS API Documentation

Request Syntax

client.create_data_set(
    AwsAccountId='string',
    DataSetId='string',
    Name='string',
    PhysicalTableMap={
        'string': {
            'RelationalTable': {
                'DataSourceArn': 'string',
                'Catalog': 'string',
                'Schema': 'string',
                'Name': 'string',
                'InputColumns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            },
            'CustomSql': {
                'DataSourceArn': 'string',
                'Name': 'string',
                'SqlQuery': 'string',
                'Columns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            },
            'S3Source': {
                'DataSourceArn': 'string',
                'UploadSettings': {
                    'Format': 'CSV'|'TSV'|'CLF'|'ELF'|'XLSX'|'JSON',
                    'StartFromRow': 123,
                    'ContainsHeader': True|False,
                    'TextQualifier': 'DOUBLE_QUOTE'|'SINGLE_QUOTE',
                    'Delimiter': 'string'
                },
                'InputColumns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            }
        }
    },
    LogicalTableMap={
        'string': {
            'Alias': 'string',
            'DataTransforms': [
                {
                    'ProjectOperation': {
                        'ProjectedColumns': [
                            'string',
                        ]
                    },
                    'FilterOperation': {
                        'ConditionExpression': 'string'
                    },
                    'CreateColumnsOperation': {
                        'Columns': [
                            {
                                'ColumnName': 'string',
                                'ColumnId': 'string',
                                'Expression': 'string'
                            },
                        ]
                    },
                    'RenameColumnOperation': {
                        'ColumnName': 'string',
                        'NewColumnName': 'string'
                    },
                    'CastColumnTypeOperation': {
                        'ColumnName': 'string',
                        'NewColumnType': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME',
                        'Format': 'string'
                    },
                    'TagColumnOperation': {
                        'ColumnName': 'string',
                        'Tags': [
                            {
                                'ColumnGeographicRole': 'COUNTRY'|'STATE'|'COUNTY'|'CITY'|'POSTCODE'|'LONGITUDE'|'LATITUDE',
                                'ColumnDescription': {
                                    'Text': 'string'
                                }
                            },
                        ]
                    },
                    'UntagColumnOperation': {
                        'ColumnName': 'string',
                        'TagNames': [
                            'COLUMN_GEOGRAPHIC_ROLE'|'COLUMN_DESCRIPTION',
                        ]
                    }
                },
            ],
            'Source': {
                'JoinInstruction': {
                    'LeftOperand': 'string',
                    'RightOperand': 'string',
                    'LeftJoinKeyProperties': {
                        'UniqueKey': True|False
                    },
                    'RightJoinKeyProperties': {
                        'UniqueKey': True|False
                    },
                    'Type': 'INNER'|'OUTER'|'LEFT'|'RIGHT',
                    'OnClause': 'string'
                },
                'PhysicalTableId': 'string',
                'DataSetArn': 'string'
            }
        }
    },
    ImportMode='SPICE'|'DIRECT_QUERY',
    ColumnGroups=[
        {
            'GeoSpatialColumnGroup': {
                'Name': 'string',
                'CountryCode': 'US',
                'Columns': [
                    'string',
                ]
            }
        },
    ],
    FieldFolders={
        'string': {
            'description': 'string',
            'columns': [
                'string',
            ]
        }
    },
    Permissions=[
        {
            'Principal': 'string',
            'Actions': [
                'string',
            ]
        },
    ],
    RowLevelPermissionDataSet={
        'Namespace': 'string',
        'Arn': 'string',
        'PermissionPolicy': 'GRANT_ACCESS'|'DENY_ACCESS',
        'FormatVersion': 'VERSION_1'|'VERSION_2',
        'Status': 'ENABLED'|'DISABLED'
    },
    RowLevelPermissionTagConfiguration={
        'Status': 'ENABLED'|'DISABLED',
        'TagRules': [
            {
                'TagKey': 'string',
                'ColumnName': 'string',
                'TagMultiValueDelimiter': 'string',
                'MatchAllValue': 'string'
            },
        ],
        'TagRuleConfigurations': [
            [
                'string',
            ],
        ]
    },
    ColumnLevelPermissionRules=[
        {
            'Principals': [
                'string',
            ],
            'ColumnNames': [
                'string',
            ]
        },
    ],
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    DataSetUsageConfiguration={
        'DisableUseAsDirectQuerySource': True|False,
        'DisableUseAsImportedSource': True|False
    }
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

An ID for the dataset that you want to create. This ID is unique per Amazon Web Services Region for each Amazon Web Services account.

type Name

string

param Name

[REQUIRED]

The display name for the dataset.

type PhysicalTableMap

dict

param PhysicalTableMap

[REQUIRED]

Declares the physical tables that are available in the underlying data sources.

  • (string) --

    • (dict) --

      A view of a data source that contains information about the shape of the data in the underlying source. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

      • RelationalTable (dict) --

        A physical table type for relational data sources.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) for the data source.

        • Catalog (string) --

          The catalog associated with a table.

        • Schema (string) --

          The schema name. This name applies to certain relational database engines.

        • Name (string) -- [REQUIRED]

          The name of the relational table.

        • InputColumns (list) -- [REQUIRED]

          The column schema of the table.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.

      • CustomSql (dict) --

        A physical table type built from the results of the custom SQL query.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) of the data source.

        • Name (string) -- [REQUIRED]

          A display name for the SQL query result.

        • SqlQuery (string) -- [REQUIRED]

          The SQL query.

        • Columns (list) --

          The column schema from the SQL query result set.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.

      • S3Source (dict) --

        A physical table type for an S3 data source.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) for the data source.

        • UploadSettings (dict) --

          Information about the format for the S3 source file or files.

          • Format (string) --

            File format.

          • StartFromRow (integer) --

            A row number to start reading data from.

          • ContainsHeader (boolean) --

            Whether the file has a header row, or the files each have a header row.

          • TextQualifier (string) --

            Text qualifier.

          • Delimiter (string) --

            The delimiter between values in the file.

        • InputColumns (list) -- [REQUIRED]

          The column schema of the S3 source file or files.

          Note

          For files that aren't JSON, only STRING data types are supported in input columns.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.

type LogicalTableMap

dict

param LogicalTableMap

Configures the combination and transformation of the data from the physical tables.

  • (string) --

    • (dict) --

      A logical table is a unit on which joins and data transformations operate. A logical table has a source, which can be either a physical table or the result of a join. When a logical table points to a physical table, the logical table acts as a mutable copy of that physical table through transform operations.

      • Alias (string) -- [REQUIRED]

        A display name for the logical table.

      • DataTransforms (list) --

        Transform operations that act on this logical table. For this structure to be valid, only one of the attributes can be non-null.

        • (dict) --

          A data transformation on a logical table. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

          • ProjectOperation (dict) --

            An operation that projects columns. Operations that come after a projection can only refer to projected columns.

            • ProjectedColumns (list) -- [REQUIRED]

              Projected columns.

              • (string) --

          • FilterOperation (dict) --

            An operation that filters rows based on some condition.

            • ConditionExpression (string) -- [REQUIRED]

              An expression that must evaluate to a Boolean value. Rows for which the expression evaluates to true are kept in the dataset.

          • CreateColumnsOperation (dict) --

            An operation that creates calculated columns. Columns created in one such operation form a lexical closure.

            • Columns (list) -- [REQUIRED]

              Calculated columns to create.

              • (dict) --

                A calculated column for a dataset.

                • ColumnName (string) -- [REQUIRED]

                  Column name.

                • ColumnId (string) -- [REQUIRED]

                  A unique ID to identify a calculated column. During a dataset update, if the column ID of a calculated column matches that of an existing calculated column, Amazon QuickSight preserves the existing calculated column.

                • Expression (string) -- [REQUIRED]

                  An expression that defines the calculated column.

          • RenameColumnOperation (dict) --

            An operation that renames a column.

            • ColumnName (string) -- [REQUIRED]

              The name of the column to be renamed.

            • NewColumnName (string) -- [REQUIRED]

              The new name for the column.

          • CastColumnTypeOperation (dict) --

            A transform operation that casts a column to a different type.

            • ColumnName (string) -- [REQUIRED]

              Column name.

            • NewColumnType (string) -- [REQUIRED]

              New column data type.

            • Format (string) --

              When casting a column from string to datetime type, you can supply a string in a format supported by Amazon QuickSight to denote the source data format.

          • TagColumnOperation (dict) --

            An operation that tags a column with additional information.

            • ColumnName (string) -- [REQUIRED]

              The column that this operation acts on.

            • Tags (list) -- [REQUIRED]

              The dataset column tag, currently only used for geospatial type tagging.

              Note

              These are not tags for the Amazon Web Services tagging feature.

              • (dict) --

                A tag for a column in a TagColumnOperation structure. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

                • ColumnGeographicRole (string) --

                  A geospatial role for a column.

                • ColumnDescription (dict) --

                  A description for a column.

                  • Text (string) --

                    The text of a description for a column.

          • UntagColumnOperation (dict) --

            A transform operation that removes tags associated with a column.

            • ColumnName (string) -- [REQUIRED]

              The column that this operation acts on.

            • TagNames (list) -- [REQUIRED]

              The column tags to remove from this column.

              • (string) --

      • Source (dict) -- [REQUIRED]

        Source of this logical table.

        • JoinInstruction (dict) --

          Specifies the result of a join of two logical tables.

          • LeftOperand (string) -- [REQUIRED]

            The operand on the left side of a join.

          • RightOperand (string) -- [REQUIRED]

            The operand on the right side of a join.

          • LeftJoinKeyProperties (dict) --

            Join key properties of the left operand.

            • UniqueKey (boolean) --

              A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

          • RightJoinKeyProperties (dict) --

            Join key properties of the right operand.

            • UniqueKey (boolean) --

              A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

          • Type (string) -- [REQUIRED]

            The type of join that it is.

          • OnClause (string) -- [REQUIRED]

            The join instructions provided in the ON clause of a join.

        • PhysicalTableId (string) --

          Physical table ID.

        • DataSetArn (string) --

          The Amazon Resource Name (ARN) of the parent dataset.

type ImportMode

string

param ImportMode

[REQUIRED]

Indicates whether you want to import the data into SPICE.

type ColumnGroups

list

param ColumnGroups

Groupings of columns that work together in certain Amazon QuickSight features. Currently, only geospatial hierarchy is supported.

  • (dict) --

    Groupings of columns that work together in certain Amazon QuickSight features. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

    • GeoSpatialColumnGroup (dict) --

      Geospatial column group that denotes a hierarchy.

      • Name (string) -- [REQUIRED]

        A display name for the hierarchy.

      • CountryCode (string) --

        Country code.

      • Columns (list) -- [REQUIRED]

        Columns in this hierarchy.

        • (string) --

type FieldFolders

dict

param FieldFolders

The folder that contains fields and nested subfolders for your dataset.

  • (string) --

    • (dict) --

      A FieldFolder element is a folder that contains fields and nested subfolders.

      • description (string) --

        The description for a field folder.

      • columns (list) --

        A folder has a list of columns. A column can only be in one folder.

        • (string) --

type Permissions

list

param Permissions

A list of resource permissions on the dataset.

  • (dict) --

    Permission for the resource.

    • Principal (string) -- [REQUIRED]

      The Amazon Resource Name (ARN) of the principal. This can be one of the following:

      • The ARN of an Amazon QuickSight user or group associated with a data source or dataset. (This is common.)

      • The ARN of an Amazon QuickSight user, group, or namespace associated with an analysis, dashboard, template, or theme. (This is common.)

      • The ARN of an Amazon Web Services account root: This is an IAM ARN rather than a QuickSight ARN. Use this option only to share resources (templates) across Amazon Web Services accounts. (This is less common.)

    • Actions (list) -- [REQUIRED]

      The IAM action to grant or revoke permissions on.

      • (string) --

type RowLevelPermissionDataSet

dict

param RowLevelPermissionDataSet

The row-level security configuration for the dataset that you want to create.

  • Namespace (string) --

    The namespace associated with the dataset that contains permissions for RLS.

  • Arn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) of the dataset that contains permissions for RLS.

  • PermissionPolicy (string) -- [REQUIRED]

    The type of permissions to use when interpreting the permissions for RLS. DENY_ACCESS is included for backward compatibility only.

  • FormatVersion (string) --

    The user or group rules associated with the dataset that contains permissions for RLS.

    By default, FormatVersion is VERSION_1 . When FormatVersion is VERSION_1 , UserName and GroupName are required. When FormatVersion is VERSION_2 , UserARN and GroupARN are required, and Namespace must not exist.

  • Status (string) --

    The status of the row-level security permission dataset. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

type RowLevelPermissionTagConfiguration

dict

param RowLevelPermissionTagConfiguration

The configuration of tags on a dataset to set row-level security. Row-level security tags are currently supported for anonymous embedding only.

  • Status (string) --

    The status of row-level security tags. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

  • TagRules (list) -- [REQUIRED]

    A set of rules associated with row-level security, such as the tag names and columns that they are assigned to.

    • (dict) --

      A set of rules associated with a tag.

      • TagKey (string) -- [REQUIRED]

        The unique key for a tag.

      • ColumnName (string) -- [REQUIRED]

        The column name that a tag key is assigned to.

      • TagMultiValueDelimiter (string) --

        A string that you want to use to delimit the values when you pass the values at run time. For example, you can delimit the values with a comma.

      • MatchAllValue (string) --

        A string that you want to use to filter by all the values in a column in the dataset and don’t want to list the values one by one. For example, you can use an asterisk as your match all value.

  • TagRuleConfigurations (list) --

    A list of tag configuration rules to apply to a dataset. The rule configurations in the outer list are combined with OR, and the tags within each inner configuration are joined with AND. At least one rule configuration in this structure must have all of its tag values assigned for row-level security (RLS) to apply to the dataset. A brief sketch of this grouping follows this parameter description.

    • (list) --

      • (string) --
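
As referenced above, a brief sketch of the grouping semantics, assuming two hypothetical tag keys region and team that are both declared in TagRules:

# Inner lists are ANDed; the outer list is ORed. With this value, a row is
# visible when both the region and team tags match, or when region alone matches.
TagRuleConfigurations = [
    ['region', 'team'],
    ['region']
]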

type ColumnLevelPermissionRules

list

param ColumnLevelPermissionRules

A set of one or more definitions of a ColumnLevelPermissionRule.

  • (dict) --

    A rule defined to grant access on one or more restricted columns. Each dataset can have multiple rules. To create a restricted column, you add it to one or more rules. Each rule must contain at least one column and at least one user or group. To be able to see a restricted column, a user or group needs to be added to a rule for that column.

    • Principals (list) --

      An array of Amazon Resource Names (ARNs) for Amazon QuickSight users or groups.

      • (string) --

    • ColumnNames (list) --

      An array of column names.

      • (string) --

type Tags

list

param Tags

Contains a map of the key-value pairs for the resource tag or tags assigned to the dataset.

  • (dict) --

    The key or keys of the key-value pairs for the resource tag or tags assigned to the resource.

    • Key (string) -- [REQUIRED]

      Tag key.

    • Value (string) -- [REQUIRED]

      Tag value.

type DataSetUsageConfiguration

dict

param DataSetUsageConfiguration

The usage configuration to apply to child datasets that reference this dataset as a source.

  • DisableUseAsDirectQuerySource (boolean) --

    An option that controls whether a child dataset of a direct query can use this dataset as a source.

  • DisableUseAsImportedSource (boolean) --

    An option that controls whether a child dataset that's stored in QuickSight can use this dataset as a source.

rtype

dict

returns

Response Syntax

{
    'Arn': 'string',
    'DataSetId': 'string',
    'IngestionArn': 'string',
    'IngestionId': 'string',
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • Arn (string) --

      The Amazon Resource Name (ARN) of the dataset.

    • DataSetId (string) --

      The ID for the dataset that you want to create. This ID is unique per Amazon Web Services Region for each Amazon Web Services account.

    • IngestionArn (string) --

      The ARN for the ingestion, which is triggered as a result of dataset creation if the import mode is SPICE.

    • IngestionId (string) --

      The ID of the ingestion, which is triggered as a result of dataset creation if the import mode is SPICE.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
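
To tie the request shape together, here is a condensed boto3 sketch of creating a SPICE dataset with tag-based RLS that uses the new TagRuleConfigurations field; every ARN, ID, and column name is a hypothetical placeholder.

import boto3

quicksight = boto3.client('quicksight')

# Placeholder ARNs, IDs, and column names throughout.
quicksight.create_data_set(
    AwsAccountId='111122223333',
    DataSetId='sales-dataset',
    Name='Sales',
    ImportMode='SPICE',
    PhysicalTableMap={
        'sales-table': {
            'RelationalTable': {
                'DataSourceArn': 'arn:aws:quicksight:us-east-1:111122223333:datasource/my-source',
                'Schema': 'public',
                'Name': 'sales',
                'InputColumns': [
                    {'Name': 'region', 'Type': 'STRING'},
                    {'Name': 'team', 'Type': 'STRING'},
                    {'Name': 'amount', 'Type': 'DECIMAL'}
                ]
            }
        }
    },
    RowLevelPermissionTagConfiguration={
        'Status': 'ENABLED',
        'TagRules': [
            {'TagKey': 'region', 'ColumnName': 'region'},
            {'TagKey': 'team', 'ColumnName': 'team'}
        ],
        # New in this release: inner lists are ANDed, the outer list is ORed.
        'TagRuleConfigurations': [
            ['region', 'team'],
            ['region']
        ]
    }
)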

DescribeDataSet (updated) Link ¶
Changes (response)
{'DataSet': {'RowLevelPermissionTagConfiguration': {'TagRuleConfigurations': [['string']]}}}

Describes a dataset. This operation doesn't support datasets that include uploaded files as a source.

See also: AWS API Documentation

Request Syntax

client.describe_data_set(
    AwsAccountId='string',
    DataSetId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID for the dataset that you want to describe. This ID is unique per Amazon Web Services Region for each Amazon Web Services account.

rtype

dict

returns

Response Syntax

{
    'DataSet': {
        'Arn': 'string',
        'DataSetId': 'string',
        'Name': 'string',
        'CreatedTime': datetime(2015, 1, 1),
        'LastUpdatedTime': datetime(2015, 1, 1),
        'PhysicalTableMap': {
            'string': {
                'RelationalTable': {
                    'DataSourceArn': 'string',
                    'Catalog': 'string',
                    'Schema': 'string',
                    'Name': 'string',
                    'InputColumns': [
                        {
                            'Name': 'string',
                            'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                        },
                    ]
                },
                'CustomSql': {
                    'DataSourceArn': 'string',
                    'Name': 'string',
                    'SqlQuery': 'string',
                    'Columns': [
                        {
                            'Name': 'string',
                            'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                        },
                    ]
                },
                'S3Source': {
                    'DataSourceArn': 'string',
                    'UploadSettings': {
                        'Format': 'CSV'|'TSV'|'CLF'|'ELF'|'XLSX'|'JSON',
                        'StartFromRow': 123,
                        'ContainsHeader': True|False,
                        'TextQualifier': 'DOUBLE_QUOTE'|'SINGLE_QUOTE',
                        'Delimiter': 'string'
                    },
                    'InputColumns': [
                        {
                            'Name': 'string',
                            'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                        },
                    ]
                }
            }
        },
        'LogicalTableMap': {
            'string': {
                'Alias': 'string',
                'DataTransforms': [
                    {
                        'ProjectOperation': {
                            'ProjectedColumns': [
                                'string',
                            ]
                        },
                        'FilterOperation': {
                            'ConditionExpression': 'string'
                        },
                        'CreateColumnsOperation': {
                            'Columns': [
                                {
                                    'ColumnName': 'string',
                                    'ColumnId': 'string',
                                    'Expression': 'string'
                                },
                            ]
                        },
                        'RenameColumnOperation': {
                            'ColumnName': 'string',
                            'NewColumnName': 'string'
                        },
                        'CastColumnTypeOperation': {
                            'ColumnName': 'string',
                            'NewColumnType': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME',
                            'Format': 'string'
                        },
                        'TagColumnOperation': {
                            'ColumnName': 'string',
                            'Tags': [
                                {
                                    'ColumnGeographicRole': 'COUNTRY'|'STATE'|'COUNTY'|'CITY'|'POSTCODE'|'LONGITUDE'|'LATITUDE',
                                    'ColumnDescription': {
                                        'Text': 'string'
                                    }
                                },
                            ]
                        },
                        'UntagColumnOperation': {
                            'ColumnName': 'string',
                            'TagNames': [
                                'COLUMN_GEOGRAPHIC_ROLE'|'COLUMN_DESCRIPTION',
                            ]
                        }
                    },
                ],
                'Source': {
                    'JoinInstruction': {
                        'LeftOperand': 'string',
                        'RightOperand': 'string',
                        'LeftJoinKeyProperties': {
                            'UniqueKey': True|False
                        },
                        'RightJoinKeyProperties': {
                            'UniqueKey': True|False
                        },
                        'Type': 'INNER'|'OUTER'|'LEFT'|'RIGHT',
                        'OnClause': 'string'
                    },
                    'PhysicalTableId': 'string',
                    'DataSetArn': 'string'
                }
            }
        },
        'OutputColumns': [
            {
                'Name': 'string',
                'Description': 'string',
                'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'
            },
        ],
        'ImportMode': 'SPICE'|'DIRECT_QUERY',
        'ConsumedSpiceCapacityInBytes': 123,
        'ColumnGroups': [
            {
                'GeoSpatialColumnGroup': {
                    'Name': 'string',
                    'CountryCode': 'US',
                    'Columns': [
                        'string',
                    ]
                }
            },
        ],
        'FieldFolders': {
            'string': {
                'description': 'string',
                'columns': [
                    'string',
                ]
            }
        },
        'RowLevelPermissionDataSet': {
            'Namespace': 'string',
            'Arn': 'string',
            'PermissionPolicy': 'GRANT_ACCESS'|'DENY_ACCESS',
            'FormatVersion': 'VERSION_1'|'VERSION_2',
            'Status': 'ENABLED'|'DISABLED'
        },
        'RowLevelPermissionTagConfiguration': {
            'Status': 'ENABLED'|'DISABLED',
            'TagRules': [
                {
                    'TagKey': 'string',
                    'ColumnName': 'string',
                    'TagMultiValueDelimiter': 'string',
                    'MatchAllValue': 'string'
                },
            ],
            'TagRuleConfigurations': [
                [
                    'string',
                ],
            ]
        },
        'ColumnLevelPermissionRules': [
            {
                'Principals': [
                    'string',
                ],
                'ColumnNames': [
                    'string',
                ]
            },
        ],
        'DataSetUsageConfiguration': {
            'DisableUseAsDirectQuerySource': True|False,
            'DisableUseAsImportedSource': True|False
        }
    },
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • DataSet (dict) --

      Information on the dataset.

      • Arn (string) --

        The Amazon Resource Name (ARN) of the resource.

      • DataSetId (string) --

        The ID of the dataset.

      • Name (string) --

        A display name for the dataset.

      • CreatedTime (datetime) --

        The time that this dataset was created.

      • LastUpdatedTime (datetime) --

        The last time that this dataset was updated.

      • PhysicalTableMap (dict) --

        Declares the physical tables that are available in the underlying data sources.

        • (string) --

          • (dict) --

            A view of a data source that contains information about the shape of the data in the underlying source. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

            • RelationalTable (dict) --

              A physical table type for relational data sources.

              • DataSourceArn (string) --

                The Amazon Resource Name (ARN) for the data source.

              • Catalog (string) --

                The catalog associated with a table.

              • Schema (string) --

                The schema name. This name applies to certain relational database engines.

              • Name (string) --

                The name of the relational table.

              • InputColumns (list) --

                The column schema of the table.

                • (dict) --

                  Metadata for a column that is used as the input of a transform operation.

                  • Name (string) --

                    The name of this column in the underlying data source.

                  • Type (string) --

                    The data type of the column.

            • CustomSql (dict) --

              A physical table type built from the results of the custom SQL query.

              • DataSourceArn (string) --

                The Amazon Resource Name (ARN) of the data source.

              • Name (string) --

                A display name for the SQL query result.

              • SqlQuery (string) --

                The SQL query.

              • Columns (list) --

                The column schema from the SQL query result set.

                • (dict) --

                  Metadata for a column that is used as the input of a transform operation.

                  • Name (string) --

                    The name of this column in the underlying data source.

                  • Type (string) --

                    The data type of the column.

            • S3Source (dict) --

              A physical table type for an S3 data source.

              • DataSourceArn (string) --

                The Amazon Resource Name (ARN) for the data source.

              • UploadSettings (dict) --

                Information about the format for the S3 source file or files.

                • Format (string) --

                  File format.

                • StartFromRow (integer) --

                  A row number to start reading data from.

                • ContainsHeader (boolean) --

                  Whether the file has a header row, or the files each have a header row.

                • TextQualifier (string) --

                  Text qualifier.

                • Delimiter (string) --

                  The delimiter between values in the file.

              • InputColumns (list) --

                The column schema of the S3 source file or files.

                Note

                For files that aren't JSON, only STRING data types are supported in input columns.

                • (dict) --

                  Metadata for a column that is used as the input of a transform operation.

                  • Name (string) --

                    The name of this column in the underlying data source.

                  • Type (string) --

                    The data type of the column.

      • LogicalTableMap (dict) --

        Configures the combination and transformation of the data from the physical tables.

        • (string) --

          • (dict) --

            A logical table is a unit on which joins and data transformations operate. A logical table has a source, which can be either a physical table or the result of a join. When a logical table points to a physical table, the logical table acts as a mutable copy of that physical table through transform operations.

            • Alias (string) --

              A display name for the logical table.

            • DataTransforms (list) --

              Transform operations that act on this logical table. For this structure to be valid, only one of the attributes can be non-null.

              • (dict) --

                A data transformation on a logical table. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

                • ProjectOperation (dict) --

                  An operation that projects columns. Operations that come after a projection can only refer to projected columns.

                  • ProjectedColumns (list) --

                    Projected columns.

                    • (string) --

                • FilterOperation (dict) --

                  An operation that filters rows based on some condition.

                  • ConditionExpression (string) --

                    An expression that must evaluate to a Boolean value. Rows for which the expression evaluates to true are kept in the dataset.

                • CreateColumnsOperation (dict) --

                  An operation that creates calculated columns. Columns created in one such operation form a lexical closure.

                  • Columns (list) --

                    Calculated columns to create.

                    • (dict) --

                      A calculated column for a dataset.

                      • ColumnName (string) --

                        Column name.

                      • ColumnId (string) --

                        A unique ID to identify a calculated column. During a dataset update, if the column ID of a calculated column matches that of an existing calculated column, Amazon QuickSight preserves the existing calculated column.

                      • Expression (string) --

                        An expression that defines the calculated column.

                • RenameColumnOperation (dict) --

                  An operation that renames a column.

                  • ColumnName (string) --

                    The name of the column to be renamed.

                  • NewColumnName (string) --

                    The new name for the column.

                • CastColumnTypeOperation (dict) --

                  A transform operation that casts a column to a different type.

                  • ColumnName (string) --

                    Column name.

                  • NewColumnType (string) --

                    New column data type.

                  • Format (string) --

                    When casting a column from string to datetime type, you can supply a string in a format supported by Amazon QuickSight to denote the source data format.

                • TagColumnOperation (dict) --

                  An operation that tags a column with additional information.

                  • ColumnName (string) --

                    The column that this operation acts on.

                  • Tags (list) --

                    The dataset column tag, currently only used for geospatial type tagging.

                    Note

                    This is not tags for the Amazon Web Services tagging feature.

                    • (dict) --

                      A tag for a column in a `` TagColumnOperation `` structure. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

                      • ColumnGeographicRole (string) --

                        A geospatial role for a column.

                      • ColumnDescription (dict) --

                        A description for a column.

                        • Text (string) --

                          The text of a description for a column.

                • UntagColumnOperation (dict) --

                  A transform operation that removes tags associated with a column.

                  • ColumnName (string) --

                    The column that this operation acts on.

                  • TagNames (list) --

                    The column tags to remove from this column.

                    • (string) --

            • Source (dict) --

              Source of this logical table.

              • JoinInstruction (dict) --

                Specifies the result of a join of two logical tables.

                • LeftOperand (string) --

                  The operand on the left side of a join.

                • RightOperand (string) --

                  The operand on the right side of a join.

                • LeftJoinKeyProperties (dict) --

                  Join key properties of the left operand.

                  • UniqueKey (boolean) --

                    A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

                • RightJoinKeyProperties (dict) --

                  Join key properties of the right operand.

                  • UniqueKey (boolean) --

                    A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

                • Type (string) --

                  The type of join. Valid values are INNER, OUTER, LEFT, and RIGHT.

                • OnClause (string) --

                  The join instructions provided in the ON clause of a join.

              • PhysicalTableId (string) --

                Physical table ID.

              • DataSetArn (string) --

                The Amazon Resource Name (ARN) of the parent dataset.

      • OutputColumns (list) --

        The list of columns after all transforms. These columns are available in templates, analyses, and dashboards.

        • (dict) --

          Output column.

          • Name (string) --

            A display name for the column.

          • Description (string) --

            A description for a column.

          • Type (string) --

            The data type of the column.

      • ImportMode (string) --

        A value that indicates whether you want to import the data into SPICE.

      • ConsumedSpiceCapacityInBytes (integer) --

        The amount of SPICE capacity used by this dataset. This is 0 if the dataset isn't imported into SPICE.

      • ColumnGroups (list) --

        Groupings of columns that work together in certain Amazon QuickSight features. Currently, only geospatial hierarchy is supported.

        • (dict) --

          Groupings of columns that work together in certain Amazon QuickSight features. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

          • GeoSpatialColumnGroup (dict) --

            Geospatial column group that denotes a hierarchy.

            • Name (string) --

              A display name for the hierarchy.

            • CountryCode (string) --

              Country code.

            • Columns (list) --

              Columns in this hierarchy.

              • (string) --

      • FieldFolders (dict) --

        The folder that contains fields and nested subfolders for your dataset.

        • (string) --

          • (dict) --

            A FieldFolder element is a folder that contains fields and nested subfolders.

            • description (string) --

              The description for a field folder.

            • columns (list) --

              A folder has a list of columns. A column can only be in one folder.

              • (string) --

      • RowLevelPermissionDataSet (dict) --

        The row-level security configuration for the dataset.

        • Namespace (string) --

          The namespace associated with the dataset that contains permissions for RLS.

        • Arn (string) --

          The Amazon Resource Name (ARN) of the dataset that contains permissions for RLS.

        • PermissionPolicy (string) --

          The type of permissions to use when interpreting the permissions for RLS. DENY_ACCESS is included for backward compatibility only.

        • FormatVersion (string) --

          The user or group rules associated with the dataset that contains permissions for RLS.

          By default, FormatVersion is VERSION_1 . When FormatVersion is VERSION_1 , UserName and GroupName are required. When FormatVersion is VERSION_2 , UserARN and GroupARN are required, and Namespace must not exist.

        • Status (string) --

          The status of the row-level security permission dataset. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

      • RowLevelPermissionTagConfiguration (dict) --

        The element you can use to define tags for row-level security.

        • Status (string) --

          The status of row-level security tags. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

        • TagRules (list) --

          A set of rules associated with row-level security, such as the tag names and columns that they are assigned to.

          • (dict) --

            A set of rules associated with a tag.

            • TagKey (string) --

              The unique key for a tag.

            • ColumnName (string) --

              The column name that a tag key is assigned to.

            • TagMultiValueDelimiter (string) --

              A string that you want to use to delimit the values when you pass the values at run time. For example, you can delimit the values with a comma.

            • MatchAllValue (string) --

              A string that you want to use to filter by all the values in a column in the dataset without listing the values one by one. For example, you can use an asterisk as your match-all value.

        • TagRuleConfigurations (list) --

          A list of tag configuration rules to apply to a dataset. The tag configurations are combined with the OR condition, and the tag keys within each configuration are joined with AND. At least one rule in this structure must have all of its tag values assigned to it for row-level security (RLS) to apply to the dataset. See the sketch after this response structure for how to read the nested list.

          • (list) --

            • (string) --

      • ColumnLevelPermissionRules (list) --

        A set of one or more definitions of a `` ColumnLevelPermissionRule `` .

        • (dict) --

          A rule defined to grant access on one or more restricted columns. Each dataset can have multiple rules. To create a restricted column, you add it to one or more rules. Each rule must contain at least one column and at least one user or group. To be able to see a restricted column, a user or group needs to be added to a rule for that column.

          • Principals (list) --

            An array of Amazon Resource Names (ARNs) for Amazon QuickSight users or groups.

            • (string) --

          • ColumnNames (list) --

            An array of column names.

            • (string) --

      • DataSetUsageConfiguration (dict) --

        The usage configuration to apply to child datasets that reference this dataset as a source.

        • DisableUseAsDirectQuerySource (boolean) --

          An option that controls whether a child dataset of a direct query can use this dataset as a source.

        • DisableUseAsImportedSource (boolean) --

          An option that controls whether a child dataset that's stored in QuickSight can use this dataset as a source.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
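
A minimal sketch of reading the tag-based RLS configuration out of the response above, assuming the structure belongs to a DescribeDataSet response; the account and dataset IDs are placeholders. Each inner list in TagRuleConfigurations is a group of tag keys joined with AND, and the groups are combined with OR.

import boto3

quicksight = boto3.client('quicksight')

# Placeholder identifiers -- substitute your own values.
response = quicksight.describe_data_set(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id'
)

tag_config = response['DataSet'].get('RowLevelPermissionTagConfiguration', {})
for and_group in tag_config.get('TagRuleConfigurations', []):
    # Tag keys inside a group must all be supplied (AND); the groups
    # themselves are alternatives (OR).
    print('AND group:', ' AND '.join(and_group))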

DescribeIngestion (updated) Link ¶
Changes (response)
{'Ingestion': {'ErrorInfo': {'Type': {'DUPLICATE_COLUMN_NAMES_FOUND'}}}}

Describes a SPICE ingestion.

See also: AWS API Documentation

Request Syntax

client.describe_ingestion(
    AwsAccountId='string',
    DataSetId='string',
    IngestionId='string'
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset used in the ingestion.

type IngestionId

string

param IngestionId

[REQUIRED]

An ID for the ingestion.

rtype

dict

returns

Response Syntax

{
    'Ingestion': {
        'Arn': 'string',
        'IngestionId': 'string',
        'IngestionStatus': 'INITIALIZED'|'QUEUED'|'RUNNING'|'FAILED'|'COMPLETED'|'CANCELLED',
        'ErrorInfo': {
            'Type': 'FAILURE_TO_ASSUME_ROLE'|'INGESTION_SUPERSEDED'|'INGESTION_CANCELED'|'DATA_SET_DELETED'|'DATA_SET_NOT_SPICE'|'S3_UPLOADED_FILE_DELETED'|'S3_MANIFEST_ERROR'|'DATA_TOLERANCE_EXCEPTION'|'SPICE_TABLE_NOT_FOUND'|'DATA_SET_SIZE_LIMIT_EXCEEDED'|'ROW_SIZE_LIMIT_EXCEEDED'|'ACCOUNT_CAPACITY_LIMIT_EXCEEDED'|'CUSTOMER_ERROR'|'DATA_SOURCE_NOT_FOUND'|'IAM_ROLE_NOT_AVAILABLE'|'CONNECTION_FAILURE'|'SQL_TABLE_NOT_FOUND'|'PERMISSION_DENIED'|'SSL_CERTIFICATE_VALIDATION_FAILURE'|'OAUTH_TOKEN_FAILURE'|'SOURCE_API_LIMIT_EXCEEDED_FAILURE'|'PASSWORD_AUTHENTICATION_FAILURE'|'SQL_SCHEMA_MISMATCH_ERROR'|'INVALID_DATE_FORMAT'|'INVALID_DATAPREP_SYNTAX'|'SOURCE_RESOURCE_LIMIT_EXCEEDED'|'SQL_INVALID_PARAMETER_VALUE'|'QUERY_TIMEOUT'|'SQL_NUMERIC_OVERFLOW'|'UNRESOLVABLE_HOST'|'UNROUTABLE_HOST'|'SQL_EXCEPTION'|'S3_FILE_INACCESSIBLE'|'IOT_FILE_NOT_FOUND'|'IOT_DATA_SET_FILE_EMPTY'|'INVALID_DATA_SOURCE_CONFIG'|'DATA_SOURCE_AUTH_FAILED'|'DATA_SOURCE_CONNECTION_FAILED'|'FAILURE_TO_PROCESS_JSON_FILE'|'INTERNAL_SERVICE_ERROR'|'REFRESH_SUPPRESSED_BY_EDIT'|'PERMISSION_NOT_FOUND'|'ELASTICSEARCH_CURSOR_NOT_ENABLED'|'CURSOR_NOT_ENABLED'|'DUPLICATE_COLUMN_NAMES_FOUND',
            'Message': 'string'
        },
        'RowInfo': {
            'RowsIngested': 123,
            'RowsDropped': 123,
            'TotalRowsInDataset': 123
        },
        'QueueInfo': {
            'WaitingOnIngestion': 'string',
            'QueuedIngestion': 'string'
        },
        'CreatedTime': datetime(2015, 1, 1),
        'IngestionTimeInSeconds': 123,
        'IngestionSizeInBytes': 123,
        'RequestSource': 'MANUAL'|'SCHEDULED',
        'RequestType': 'INITIAL_INGESTION'|'EDIT'|'INCREMENTAL_REFRESH'|'FULL_REFRESH'
    },
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • Ingestion (dict) --

      Information about the ingestion.

      • Arn (string) --

        The Amazon Resource Name (ARN) of the resource.

      • IngestionId (string) --

        Ingestion ID.

      • IngestionStatus (string) --

        Ingestion status.

      • ErrorInfo (dict) --

        Error information for this ingestion.

        • Type (string) --

          Error type.

        • Message (string) --

          Error message.

      • RowInfo (dict) --

        Information about rows for a data set SPICE ingestion.

        • RowsIngested (integer) --

          The number of rows that were ingested.

        • RowsDropped (integer) --

          The number of rows that were not ingested.

        • TotalRowsInDataset (integer) --

          The total number of rows in the dataset.

      • QueueInfo (dict) --

        Information about a queued dataset SPICE ingestion.

        • WaitingOnIngestion (string) --

          The ID of the queued ingestion.

        • QueuedIngestion (string) --

          The ID of the ongoing ingestion. The queued ingestion is waiting for the ongoing ingestion to complete.

      • CreatedTime (datetime) --

        The time that this ingestion started.

      • IngestionTimeInSeconds (integer) --

        The time that this ingestion took, measured in seconds.

      • IngestionSizeInBytes (integer) --

        The size of the data ingested, in bytes.

      • RequestSource (string) --

        Event source for this ingestion.

      • RequestType (string) --

        Type of this ingestion.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
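
A minimal sketch of checking an ingestion for the new DUPLICATE_COLUMN_NAMES_FOUND error type added in this release; the account, dataset, and ingestion IDs are placeholders.

import boto3

quicksight = boto3.client('quicksight')

response = quicksight.describe_ingestion(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    IngestionId='my-ingestion-id'
)

ingestion = response['Ingestion']
if ingestion['IngestionStatus'] == 'FAILED':
    error = ingestion.get('ErrorInfo', {})
    if error.get('Type') == 'DUPLICATE_COLUMN_NAMES_FOUND':
        # New error type in this release: the source data contains columns
        # with duplicate names, so the ingestion cannot complete.
        print('Duplicate column names:', error.get('Message'))
    else:
        print('Ingestion failed:', error.get('Type'), error.get('Message'))
else:
    print('Ingestion status:', ingestion['IngestionStatus'])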

ListIngestions (updated) Link ¶
Changes (response)
{'Ingestions': {'ErrorInfo': {'Type': {'DUPLICATE_COLUMN_NAMES_FOUND'}}}}

Lists the history of SPICE ingestions for a dataset.

See also: AWS API Documentation

Request Syntax

client.list_ingestions(
    DataSetId='string',
    NextToken='string',
    AwsAccountId='string',
    MaxResults=123
)
type DataSetId

string

param DataSetId

[REQUIRED]

The ID of the dataset used in the ingestion.

type NextToken

string

param NextToken

The token for the next set of results, or null if there are no more results.

type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type MaxResults

integer

param MaxResults

The maximum number of results to be returned per request.

rtype

dict

returns

Response Syntax

{
    'Ingestions': [
        {
            'Arn': 'string',
            'IngestionId': 'string',
            'IngestionStatus': 'INITIALIZED'|'QUEUED'|'RUNNING'|'FAILED'|'COMPLETED'|'CANCELLED',
            'ErrorInfo': {
                'Type': 'FAILURE_TO_ASSUME_ROLE'|'INGESTION_SUPERSEDED'|'INGESTION_CANCELED'|'DATA_SET_DELETED'|'DATA_SET_NOT_SPICE'|'S3_UPLOADED_FILE_DELETED'|'S3_MANIFEST_ERROR'|'DATA_TOLERANCE_EXCEPTION'|'SPICE_TABLE_NOT_FOUND'|'DATA_SET_SIZE_LIMIT_EXCEEDED'|'ROW_SIZE_LIMIT_EXCEEDED'|'ACCOUNT_CAPACITY_LIMIT_EXCEEDED'|'CUSTOMER_ERROR'|'DATA_SOURCE_NOT_FOUND'|'IAM_ROLE_NOT_AVAILABLE'|'CONNECTION_FAILURE'|'SQL_TABLE_NOT_FOUND'|'PERMISSION_DENIED'|'SSL_CERTIFICATE_VALIDATION_FAILURE'|'OAUTH_TOKEN_FAILURE'|'SOURCE_API_LIMIT_EXCEEDED_FAILURE'|'PASSWORD_AUTHENTICATION_FAILURE'|'SQL_SCHEMA_MISMATCH_ERROR'|'INVALID_DATE_FORMAT'|'INVALID_DATAPREP_SYNTAX'|'SOURCE_RESOURCE_LIMIT_EXCEEDED'|'SQL_INVALID_PARAMETER_VALUE'|'QUERY_TIMEOUT'|'SQL_NUMERIC_OVERFLOW'|'UNRESOLVABLE_HOST'|'UNROUTABLE_HOST'|'SQL_EXCEPTION'|'S3_FILE_INACCESSIBLE'|'IOT_FILE_NOT_FOUND'|'IOT_DATA_SET_FILE_EMPTY'|'INVALID_DATA_SOURCE_CONFIG'|'DATA_SOURCE_AUTH_FAILED'|'DATA_SOURCE_CONNECTION_FAILED'|'FAILURE_TO_PROCESS_JSON_FILE'|'INTERNAL_SERVICE_ERROR'|'REFRESH_SUPPRESSED_BY_EDIT'|'PERMISSION_NOT_FOUND'|'ELASTICSEARCH_CURSOR_NOT_ENABLED'|'CURSOR_NOT_ENABLED'|'DUPLICATE_COLUMN_NAMES_FOUND',
                'Message': 'string'
            },
            'RowInfo': {
                'RowsIngested': 123,
                'RowsDropped': 123,
                'TotalRowsInDataset': 123
            },
            'QueueInfo': {
                'WaitingOnIngestion': 'string',
                'QueuedIngestion': 'string'
            },
            'CreatedTime': datetime(2015, 1, 1),
            'IngestionTimeInSeconds': 123,
            'IngestionSizeInBytes': 123,
            'RequestSource': 'MANUAL'|'SCHEDULED',
            'RequestType': 'INITIAL_INGESTION'|'EDIT'|'INCREMENTAL_REFRESH'|'FULL_REFRESH'
        },
    ],
    'NextToken': 'string',
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • Ingestions (list) --

      A list of the ingestions.

      • (dict) --

        Information about the SPICE ingestion for a dataset.

        • Arn (string) --

          The Amazon Resource Name (ARN) of the resource.

        • IngestionId (string) --

          Ingestion ID.

        • IngestionStatus (string) --

          Ingestion status.

        • ErrorInfo (dict) --

          Error information for this ingestion.

          • Type (string) --

            Error type.

          • Message (string) --

            Error message.

        • RowInfo (dict) --

          Information about rows for a data set SPICE ingestion.

          • RowsIngested (integer) --

            The number of rows that were ingested.

          • RowsDropped (integer) --

            The number of rows that were not ingested.

          • TotalRowsInDataset (integer) --

            The total number of rows in the dataset.

        • QueueInfo (dict) --

          Information about a queued dataset SPICE ingestion.

          • WaitingOnIngestion (string) --

            The ID of the queued ingestion.

          • QueuedIngestion (string) --

            The ID of the ongoing ingestion. The queued ingestion is waiting for the ongoing ingestion to complete.

        • CreatedTime (datetime) --

          The time that this ingestion started.

        • IngestionTimeInSeconds (integer) --

          The time that this ingestion took, measured in seconds.

        • IngestionSizeInBytes (integer) --

          The size of the data ingested, in bytes.

        • RequestSource (string) --

          Event source for this ingestion.

        • RequestType (string) --

          Type of this ingestion.

    • NextToken (string) --

      The token for the next set of results, or null if there are no more results.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
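
A minimal sketch of walking the full SPICE ingestion history with NextToken pagination; the account and dataset IDs are placeholders.

import boto3

quicksight = boto3.client('quicksight')

kwargs = {
    'AwsAccountId': '111122223333',
    'DataSetId': 'my-dataset-id',
    'MaxResults': 50
}
while True:
    page = quicksight.list_ingestions(**kwargs)
    for ingestion in page.get('Ingestions', []):
        print(ingestion['IngestionId'], ingestion['IngestionStatus'],
              ingestion.get('RequestType'), ingestion.get('RequestSource'))
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token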

UpdateDataSet (updated) Link ¶
Changes (request)
{'RowLevelPermissionTagConfiguration': {'TagRuleConfigurations': [['string']]}}

Updates a dataset. This operation doesn't support datasets that include uploaded files as a source. Partial updates are not supported by this operation.

See also: AWS API Documentation

Request Syntax

client.update_data_set(
    AwsAccountId='string',
    DataSetId='string',
    Name='string',
    PhysicalTableMap={
        'string': {
            'RelationalTable': {
                'DataSourceArn': 'string',
                'Catalog': 'string',
                'Schema': 'string',
                'Name': 'string',
                'InputColumns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            },
            'CustomSql': {
                'DataSourceArn': 'string',
                'Name': 'string',
                'SqlQuery': 'string',
                'Columns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            },
            'S3Source': {
                'DataSourceArn': 'string',
                'UploadSettings': {
                    'Format': 'CSV'|'TSV'|'CLF'|'ELF'|'XLSX'|'JSON',
                    'StartFromRow': 123,
                    'ContainsHeader': True|False,
                    'TextQualifier': 'DOUBLE_QUOTE'|'SINGLE_QUOTE',
                    'Delimiter': 'string'
                },
                'InputColumns': [
                    {
                        'Name': 'string',
                        'Type': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME'|'BIT'|'BOOLEAN'|'JSON'
                    },
                ]
            }
        }
    },
    LogicalTableMap={
        'string': {
            'Alias': 'string',
            'DataTransforms': [
                {
                    'ProjectOperation': {
                        'ProjectedColumns': [
                            'string',
                        ]
                    },
                    'FilterOperation': {
                        'ConditionExpression': 'string'
                    },
                    'CreateColumnsOperation': {
                        'Columns': [
                            {
                                'ColumnName': 'string',
                                'ColumnId': 'string',
                                'Expression': 'string'
                            },
                        ]
                    },
                    'RenameColumnOperation': {
                        'ColumnName': 'string',
                        'NewColumnName': 'string'
                    },
                    'CastColumnTypeOperation': {
                        'ColumnName': 'string',
                        'NewColumnType': 'STRING'|'INTEGER'|'DECIMAL'|'DATETIME',
                        'Format': 'string'
                    },
                    'TagColumnOperation': {
                        'ColumnName': 'string',
                        'Tags': [
                            {
                                'ColumnGeographicRole': 'COUNTRY'|'STATE'|'COUNTY'|'CITY'|'POSTCODE'|'LONGITUDE'|'LATITUDE',
                                'ColumnDescription': {
                                    'Text': 'string'
                                }
                            },
                        ]
                    },
                    'UntagColumnOperation': {
                        'ColumnName': 'string',
                        'TagNames': [
                            'COLUMN_GEOGRAPHIC_ROLE'|'COLUMN_DESCRIPTION',
                        ]
                    }
                },
            ],
            'Source': {
                'JoinInstruction': {
                    'LeftOperand': 'string',
                    'RightOperand': 'string',
                    'LeftJoinKeyProperties': {
                        'UniqueKey': True|False
                    },
                    'RightJoinKeyProperties': {
                        'UniqueKey': True|False
                    },
                    'Type': 'INNER'|'OUTER'|'LEFT'|'RIGHT',
                    'OnClause': 'string'
                },
                'PhysicalTableId': 'string',
                'DataSetArn': 'string'
            }
        }
    },
    ImportMode='SPICE'|'DIRECT_QUERY',
    ColumnGroups=[
        {
            'GeoSpatialColumnGroup': {
                'Name': 'string',
                'CountryCode': 'US',
                'Columns': [
                    'string',
                ]
            }
        },
    ],
    FieldFolders={
        'string': {
            'description': 'string',
            'columns': [
                'string',
            ]
        }
    },
    RowLevelPermissionDataSet={
        'Namespace': 'string',
        'Arn': 'string',
        'PermissionPolicy': 'GRANT_ACCESS'|'DENY_ACCESS',
        'FormatVersion': 'VERSION_1'|'VERSION_2',
        'Status': 'ENABLED'|'DISABLED'
    },
    RowLevelPermissionTagConfiguration={
        'Status': 'ENABLED'|'DISABLED',
        'TagRules': [
            {
                'TagKey': 'string',
                'ColumnName': 'string',
                'TagMultiValueDelimiter': 'string',
                'MatchAllValue': 'string'
            },
        ],
        'TagRuleConfigurations': [
            [
                'string',
            ],
        ]
    },
    ColumnLevelPermissionRules=[
        {
            'Principals': [
                'string',
            ],
            'ColumnNames': [
                'string',
            ]
        },
    ],
    DataSetUsageConfiguration={
        'DisableUseAsDirectQuerySource': True|False,
        'DisableUseAsImportedSource': True|False
    }
)
type AwsAccountId

string

param AwsAccountId

[REQUIRED]

The Amazon Web Services account ID.

type DataSetId

string

param DataSetId

[REQUIRED]

The ID for the dataset that you want to update. This ID is unique per Amazon Web Services Region for each Amazon Web Services account.

type Name

string

param Name

[REQUIRED]

The display name for the dataset.

type PhysicalTableMap

dict

param PhysicalTableMap

[REQUIRED]

Declares the physical tables that are available in the underlying data sources.

  • (string) --

    • (dict) --

      A view of a data source that contains information about the shape of the data in the underlying source. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

      • RelationalTable (dict) --

        A physical table type for relational data sources.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) for the data source.

        • Catalog (string) --

          The catalog associated with a table.

        • Schema (string) --

          The schema name. This name applies to certain relational database engines.

        • Name (string) -- [REQUIRED]

          The name of the relational table.

        • InputColumns (list) -- [REQUIRED]

          The column schema of the table.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.

      • CustomSql (dict) --

        A physical table type built from the results of the custom SQL query.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) of the data source.

        • Name (string) -- [REQUIRED]

          A display name for the SQL query result.

        • SqlQuery (string) -- [REQUIRED]

          The SQL query.

        • Columns (list) --

          The column schema from the SQL query result set.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.

      • S3Source (dict) --

        A physical table type for an S3 data source.

        • DataSourceArn (string) -- [REQUIRED]

          The Amazon Resource Name (ARN) for the data source.

        • UploadSettings (dict) --

          Information about the format for the S3 source file or files.

          • Format (string) --

            File format.

          • StartFromRow (integer) --

            A row number to start reading data from.

          • ContainsHeader (boolean) --

            Whether the file has a header row, or the files each have a header row.

          • TextQualifier (string) --

            Text qualifier.

          • Delimiter (string) --

            The delimiter between values in the file.

        • InputColumns (list) -- [REQUIRED]

          The column schema of the S3 data source.

          Note

          For files that aren't JSON, only STRING data types are supported in input columns.

          • (dict) --

            Metadata for a column that is used as the input of a transform operation.

            • Name (string) -- [REQUIRED]

              The name of this column in the underlying data source.

            • Type (string) -- [REQUIRED]

              The data type of the column.
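
A minimal sketch of a PhysicalTableMap entry using the RelationalTable variant described above. The map key is a caller-chosen physical table ID, and the data source ARN, schema, table, and column names are placeholders; only one variant (RelationalTable, CustomSql, or S3Source) can be non-null per entry.

# Placeholder values -- substitute your own data source ARN, schema, and columns.
physical_table_map = {
    'sales-relational-table': {
        'RelationalTable': {
            'DataSourceArn': 'arn:aws:quicksight:us-east-1:111122223333:datasource/my-data-source',
            'Schema': 'public',
            'Name': 'sales',
            'InputColumns': [
                {'Name': 'region', 'Type': 'STRING'},
                {'Name': 'amount', 'Type': 'DECIMAL'},
                {'Name': 'sold_at', 'Type': 'DATETIME'}
            ]
        }
    }
}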

type LogicalTableMap

dict

param LogicalTableMap

Configures the combination and transformation of the data from the physical tables.

  • (string) --

    • (dict) --

      A logical table is a unit that joins and data transformations operate on. A logical table has a source, which can be either a physical table or the result of a join. When a logical table points to a physical table, the logical table acts as a mutable copy of that physical table through transform operations.

      • Alias (string) -- [REQUIRED]

        A display name for the logical table.

      • DataTransforms (list) --

        Transform operations that act on this logical table.

        • (dict) --

          A data transformation on a logical table. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

          • ProjectOperation (dict) --

            An operation that projects columns. Operations that come after a projection can only refer to projected columns.

            • ProjectedColumns (list) -- [REQUIRED]

              Projected columns.

              • (string) --

          • FilterOperation (dict) --

            An operation that filters rows based on some condition.

            • ConditionExpression (string) -- [REQUIRED]

              An expression that must evaluate to a Boolean value. Rows for which the expression evaluates to true are kept in the dataset.

          • CreateColumnsOperation (dict) --

            An operation that creates calculated columns. Columns created in one such operation form a lexical closure.

            • Columns (list) -- [REQUIRED]

              Calculated columns to create.

              • (dict) --

                A calculated column for a dataset.

                • ColumnName (string) -- [REQUIRED]

                  Column name.

                • ColumnId (string) -- [REQUIRED]

                  A unique ID to identify a calculated column. During a dataset update, if the column ID of a calculated column matches that of an existing calculated column, Amazon QuickSight preserves the existing calculated column.

                • Expression (string) -- [REQUIRED]

                  An expression that defines the calculated column.

          • RenameColumnOperation (dict) --

            An operation that renames a column.

            • ColumnName (string) -- [REQUIRED]

              The name of the column to be renamed.

            • NewColumnName (string) -- [REQUIRED]

              The new name for the column.

          • CastColumnTypeOperation (dict) --

            A transform operation that casts a column to a different type.

            • ColumnName (string) -- [REQUIRED]

              Column name.

            • NewColumnType (string) -- [REQUIRED]

              New column data type.

            • Format (string) --

              When casting a column from string to datetime type, you can supply a string in a format supported by Amazon QuickSight to denote the source data format.

          • TagColumnOperation (dict) --

            An operation that tags a column with additional information.

            • ColumnName (string) -- [REQUIRED]

              The column that this operation acts on.

            • Tags (list) -- [REQUIRED]

              The dataset column tag, currently only used for geospatial type tagging.

              Note

              This is not tags for the Amazon Web Services tagging feature.

              • (dict) --

                A tag for a column in a `` TagColumnOperation `` structure. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

                • ColumnGeographicRole (string) --

                  A geospatial role for a column.

                • ColumnDescription (dict) --

                  A description for a column.

                  • Text (string) --

                    The text of a description for a column.

          • UntagColumnOperation (dict) --

            A transform operation that removes tags associated with a column.

            • ColumnName (string) -- [REQUIRED]

              The column that this operation acts on.

            • TagNames (list) -- [REQUIRED]

              The column tags to remove from this column.

              • (string) --

      • Source (dict) -- [REQUIRED]

        Source of this logical table.

        • JoinInstruction (dict) --

          Specifies the result of a join of two logical tables.

          • LeftOperand (string) -- [REQUIRED]

            The operand on the left side of a join.

          • RightOperand (string) -- [REQUIRED]

            The operand on the right side of a join.

          • LeftJoinKeyProperties (dict) --

            Join key properties of the left operand.

            • UniqueKey (boolean) --

              A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

          • RightJoinKeyProperties (dict) --

            Join key properties of the right operand.

            • UniqueKey (boolean) --

              A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Amazon QuickSight to optimize query performance.

          • Type (string) -- [REQUIRED]

            The type of join. Valid values are INNER, OUTER, LEFT, and RIGHT.

          • OnClause (string) -- [REQUIRED]

            The join instructions provided in the ON clause of a join.

        • PhysicalTableId (string) --

          Physical table ID.

        • DataSetArn (string) --

          The Amazon Resource Name (ARN) of the parent dataset.
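
A minimal sketch of a LogicalTableMap that joins two physical tables, following the JoinInstruction shape above. The logical and physical table IDs are placeholders, LeftOperand and RightOperand refer to other logical table IDs in the map, and the ON-clause expression is illustrative only.

# Placeholder IDs -- the PhysicalTableId values must match keys in PhysicalTableMap.
logical_table_map = {
    'sales-logical': {
        'Alias': 'Sales',
        'Source': {'PhysicalTableId': 'sales-relational-table'}
    },
    'regions-logical': {
        'Alias': 'Regions',
        'Source': {'PhysicalTableId': 'regions-physical-table'}
    },
    'sales-by-region': {
        'Alias': 'Sales by region',
        'Source': {
            'JoinInstruction': {
                'LeftOperand': 'sales-logical',
                'RightOperand': 'regions-logical',
                'Type': 'LEFT',
                # Illustrative ON clause; the exact expression depends on the
                # column names in the joined tables.
                'OnClause': 'region = region'
            }
        }
    }
}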

type ImportMode

string

param ImportMode

[REQUIRED]

Indicates whether you want to import the data into SPICE.

type ColumnGroups

list

param ColumnGroups

Groupings of columns that work together in certain Amazon QuickSight features. Currently, only geospatial hierarchy is supported.

  • (dict) --

    Groupings of columns that work together in certain Amazon QuickSight features. This is a variant type structure. For this structure to be valid, only one of the attributes can be non-null.

    • GeoSpatialColumnGroup (dict) --

      Geospatial column group that denotes a hierarchy.

      • Name (string) -- [REQUIRED]

        A display name for the hierarchy.

      • CountryCode (string) --

        Country code.

      • Columns (list) -- [REQUIRED]

        Columns in this hierarchy.

        • (string) --

type FieldFolders

dict

param FieldFolders

The folder that contains fields and nested subfolders for your dataset.

  • (string) --

    • (dict) --

      A FieldFolder element is a folder that contains fields and nested subfolders.

      • description (string) --

        The description for a field folder.

      • columns (list) --

        A folder has a list of columns. A column can only be in one folder.

        • (string) --

type RowLevelPermissionDataSet

dict

param RowLevelPermissionDataSet

The row-level security configuration for the data that you want to update.

  • Namespace (string) --

    The namespace associated with the dataset that contains permissions for RLS.

  • Arn (string) -- [REQUIRED]

    The Amazon Resource Name (ARN) of the dataset that contains permissions for RLS.

  • PermissionPolicy (string) -- [REQUIRED]

    The type of permissions to use when interpreting the permissions for RLS. DENY_ACCESS is included for backward compatibility only.

  • FormatVersion (string) --

    The user or group rules associated with the dataset that contains permissions for RLS.

    By default, FormatVersion is VERSION_1 . When FormatVersion is VERSION_1 , UserName and GroupName are required. When FormatVersion is VERSION_2 , UserARN and GroupARN are required, and Namespace must not exist.

  • Status (string) --

    The status of the row-level security permission dataset. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

type RowLevelPermissionTagConfiguration

dict

param RowLevelPermissionTagConfiguration

The configuration of tags on a dataset to set row-level security. Row-level security tags are currently supported for anonymous embedding only.

  • Status (string) --

    The status of row-level security tags. If enabled, the status is ENABLED . If disabled, the status is DISABLED .

  • TagRules (list) -- [REQUIRED]

    A set of rules associated with row-level security, such as the tag names and columns that they are assigned to.

    • (dict) --

      A set of rules associated with a tag.

      • TagKey (string) -- [REQUIRED]

        The unique key for a tag.

      • ColumnName (string) -- [REQUIRED]

        The column name that a tag key is assigned to.

      • TagMultiValueDelimiter (string) --

        A string that you want to use to delimit the values when you pass the values at run time. For example, you can delimit the values with a comma.

      • MatchAllValue (string) --

        A string that you want to use to filter by all the values in a column in the dataset without listing the values one by one. For example, you can use an asterisk as your match-all value.

  • TagRuleConfigurations (list) --

    A list of tag configuration rules to apply to a dataset. The tag configurations are combined with the OR condition, and the tag keys within each configuration are joined with AND. At least one rule in this structure must have all of its tag values assigned to it for row-level security (RLS) to apply to the dataset. See the sketch at the end of this parameter's description.

    • (list) --

      • (string) --
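
A minimal sketch of a RowLevelPermissionTagConfiguration that uses the new TagRuleConfigurations field. The tag keys ('region', 'team') and column names are placeholders; each inner list is joined with AND, and the inner lists are combined with OR.

# Placeholder tag keys and column names -- substitute your own.
row_level_permission_tag_configuration = {
    'Status': 'ENABLED',
    'TagRules': [
        {
            'TagKey': 'region',
            'ColumnName': 'region',
            'TagMultiValueDelimiter': ',',
            'MatchAllValue': '*'
        },
        {
            'TagKey': 'team',
            'ColumnName': 'team'
        }
    ],
    'TagRuleConfigurations': [
        ['region', 'team'],   # region AND team must both be supplied ...
        ['region']            # ... OR region alone is enough
    ]
}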

type ColumnLevelPermissionRules

list

param ColumnLevelPermissionRules

A set of one or more definitions of a `` ColumnLevelPermissionRule `` .

  • (dict) --

    A rule defined to grant access on one or more restricted columns. Each dataset can have multiple rules. To create a restricted column, you add it to one or more rules. Each rule must contain at least one column and at least one user or group. To be able to see a restricted column, a user or group needs to be added to a rule for that column.

    • Principals (list) --

      An array of Amazon Resource Names (ARNs) for Amazon QuickSight users or groups.

      • (string) --

    • ColumnNames (list) --

      An array of column names.

      • (string) --

type DataSetUsageConfiguration

dict

param DataSetUsageConfiguration

The usage configuration to apply to child datasets that reference this dataset as a source.

  • DisableUseAsDirectQuerySource (boolean) --

    An option that controls whether a child dataset of a direct query can use this dataset as a source.

  • DisableUseAsImportedSource (boolean) --

    An option that controls whether a child dataset that's stored in QuickSight can use this dataset as a source.

rtype

dict

returns

Response Syntax

{
    'Arn': 'string',
    'DataSetId': 'string',
    'IngestionArn': 'string',
    'IngestionId': 'string',
    'RequestId': 'string',
    'Status': 123
}

Response Structure

  • (dict) --

    • Arn (string) --

      The Amazon Resource Name (ARN) of the dataset.

    • DataSetId (string) --

      The ID for the dataset that you want to update. This ID is unique per Amazon Web Services Region for each Amazon Web Services account.

    • IngestionArn (string) --

      The ARN for the ingestion, which is triggered as a result of the dataset update if the import mode is SPICE.

    • IngestionId (string) --

      The ID of the ingestion, which is triggered as a result of the dataset update if the import mode is SPICE.

    • RequestId (string) --

      The Amazon Web Services request ID for this operation.

    • Status (integer) --

      The HTTP status of the request.
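
A minimal end-to-end sketch of an update_data_set call that imports into SPICE and reads back the ingestion that the update triggers. All identifiers, the data source ARN, and the query are placeholders; the returned IngestionId can be passed to DescribeIngestion as shown earlier.

import boto3

quicksight = boto3.client('quicksight')

response = quicksight.update_data_set(
    AwsAccountId='111122223333',
    DataSetId='my-dataset-id',
    Name='Sales',
    PhysicalTableMap={
        'sales-custom-sql': {
            'CustomSql': {
                'DataSourceArn': 'arn:aws:quicksight:us-east-1:111122223333:datasource/my-data-source',
                'Name': 'Sales',
                'SqlQuery': 'SELECT region, amount, sold_at FROM sales',
                'Columns': [
                    {'Name': 'region', 'Type': 'STRING'},
                    {'Name': 'amount', 'Type': 'DECIMAL'},
                    {'Name': 'sold_at', 'Type': 'DATETIME'}
                ]
            }
        }
    },
    ImportMode='SPICE'
)

print('HTTP status:', response['Status'])
# Because ImportMode is SPICE, the update triggers an ingestion that can be
# tracked with DescribeIngestion or ListIngestions.
print('Triggered ingestion:', response.get('IngestionId'), response.get('IngestionArn'))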