2020/10/15 - AWS Database Migration Service - 6 updated API methods
Changes: When creating Endpoints, Replication Instances, and Replication Tasks, you can now specify a friendly name for the resource.
Request
{'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}, 'ResourceIdentifier': 'string'}
Response
{'Endpoint': {'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}}}
Creates an endpoint using the provided settings.
See also: AWS API Documentation
Request Syntax
client.create_endpoint(
    EndpointIdentifier='string',
    EndpointType='source'|'target',
    EngineName='string',
    Username='string',
    Password='string',
    ServerName='string',
    Port=123,
    DatabaseName='string',
    ExtraConnectionAttributes='string',
    KmsKeyId='string',
    Tags=[{'Key': 'string', 'Value': 'string'}],
    CertificateArn='string',
    SslMode='none'|'require'|'verify-ca'|'verify-full',
    ServiceAccessRoleArn='string',
    ExternalTableDefinition='string',
    DynamoDbSettings={'ServiceAccessRoleArn': 'string'},
    S3Settings={'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE'},
    DmsTransferSettings={'ServiceAccessRoleArn': 'string', 'BucketName': 'string'},
    MongoDbSettings={'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string'},
    KinesisSettings={'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False},
    KafkaSettings={'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False},
    ElasticsearchSettings={'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123},
    NeptuneSettings={'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False},
    RedshiftSettings={'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 'ReplaceChars': 'string', 'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123},
    PostgreSQLSettings={'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string'},
    MySQLSettings={'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string'},
    OracleSettings={'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string'},
    SybaseSettings={'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string'},
    MicrosoftSQLServerSettings={'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False},
    IBMDb2Settings={'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string'},
    ResourceIdentifier='string'
)
EndpointIdentifier (string) -- [REQUIRED]
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
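As an illustration only, these naming rules can be checked locally before calling the API; the helper below is hypothetical and not part of AWS DMS or boto3:
import re

# Begins with a letter; only ASCII letters, digits, and hyphens;
# no trailing hyphen; no two consecutive hyphens.
_IDENTIFIER_RE = re.compile(r'^[A-Za-z](?!.*--)[A-Za-z0-9-]*$')

def is_valid_identifier(name):
    return bool(_IDENTIFIER_RE.match(name)) and not name.endswith('-')

assert is_valid_identifier('my-endpoint-1')
assert not is_valid_identifier('my--endpoint')  # consecutive hyphens
assert not is_valid_identifier('endpoint-')     # trailing hyphen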
EndpointType (string) -- [REQUIRED]
The type of endpoint. Valid values are source and target.
EngineName (string) -- [REQUIRED]
The type of engine for the endpoint. Valid values, depending on the EndpointType value, include "mysql", "oracle", "postgres", "mariadb", "aurora", "aurora-postgresql", "redshift", "s3", "db2", "azuredb", "sybase", "dynamodb", "mongodb", "kinesis", "kafka", "elasticsearch", "docdb", "sqlserver", and "neptune".
Username (string) --
The user name to be used to log in to the endpoint database.
Password (string) --
The password to be used to log in to the endpoint database.
ServerName (string) --
The name of the server where the endpoint database resides.
Port (integer) --
The port used by the endpoint database.
DatabaseName (string) --
The name of the endpoint database.
ExtraConnectionAttributes (string) --
Additional attributes associated with the connection. Each attribute is specified as a name-value pair associated by an equal sign (=). Multiple attributes are separated by a semicolon (;) with no additional white space. For information on the attributes available for connecting your source or target endpoint, see Working with AWS DMS Endpoints in the AWS Database Migration Service User Guide.
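As a sketch, such an attribute string can be assembled from name-value pairs; the attribute names below are examples drawn from elsewhere on this page, and the helper is illustrative, not part of boto3:
def to_extra_connection_attributes(attrs):
    # name=value pairs joined by semicolons, with no additional white space
    return ';'.join('{}={}'.format(name, value) for name, value in attrs.items())

eca = to_extra_connection_attributes({'maxFileSize': 512, 'executeTimeout': 100})
# -> 'maxFileSize=512;executeTimeout=100'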
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
Tags (list) --
One or more tags to be assigned to the endpoint.
(dict) --
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
AddTagsToResource
ListTagsForResource
RemoveTagsFromResource
Key (string) --
A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value (string) --
A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expression: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
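The Java regular expression above carries over to Python with minor changes; a minimal validation sketch, in which the helper is illustrative and \w/\s only approximate the \p{L}\p{Z}\p{N} classes:
import re

TAG_RE = re.compile(r'^([\w\s.:/=+\-]*)$')

def is_valid_tag_string(value, max_len):
    if not 1 <= len(value) <= max_len:
        return False
    if value.startswith(('aws:', 'dms:')):
        return False
    return bool(TAG_RE.match(value))

assert is_valid_tag_string('cost-center', 128)       # a valid key
assert not is_valid_tag_string('aws:reserved', 128)  # reserved prefix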
CertificateArn (string) --
The Amazon Resource Name (ARN) for the certificate.
SslMode (string) --
The Secure Sockets Layer (SSL) mode to use for the SSL connection. The default is none.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the service access role that you want to use to create the endpoint.
ExternalTableDefinition (string) --
The external table definition.
DynamoDbSettings (dict) --
Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. This is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
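An equivalent call in boto3 might look like the following sketch; all identifiers, ARNs, and the KMS key ID are placeholders:
import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='my-s3-target',
    EndpointType='target',
    EngineName='s3',
    S3Settings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/my-dms-role',
        'BucketFolder': 'my-folder',
        'BucketName': 'my-bucket',
        'EncryptionMode': 'sse-kms',
        'ServerSideEncryptionKmsKeyId': 'arn:aws:kms:us-east-1:123456789012:key/EXAMPLE',
    },
)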
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this maximum, the column is stored using an encoding type of PLAIN. This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for the .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads, but as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y, only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false, every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports this interaction between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
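For example, an S3 target where full-load and CDC records share a consistent first-field operation indicator could combine these flags as follows; this is a sketch, and the ARN and bucket name are placeholders:
s3_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/my-dms-role',
    'BucketName': 'my-bucket',
    'DataFormat': 'csv',
    'IncludeOpForFullLoad': True,  # annotate full-load rows with I
    'CdcInsertsOnly': True,        # write only INSERTs during CDC
}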
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
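In Python, the yyyy-MM-dd HH:mm:ss.SSSSSS pattern above corresponds to the strptime format shown in this sketch:
from datetime import datetime

# Parse a DMS timestamp column value such as '2020-10-15 12:34:56.789012'
ts = datetime.strptime('2020-10-15 12:34:56.789012', '%Y-%m-%d %H:%M:%S.%f')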
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true, the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false, CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true, this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false. For more information about date-based folder partitioning, see Using date-based folder partitioning.
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD. Use this parameter when DatePartitionEnabled is set to true.
DatePartitionDelimiter (string) --
Specifies a date-separating delimiter to use during folder partitioning. The default value is SLASH. Use this parameter when DatePartitionEnabled is set to true.
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files; set it to NONE (the default) or omit it to leave the files uncompressed.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see Using MongoDB as a source for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no", user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
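A sketch of MongoDbSettings for a password-authenticated source in table mode; all values are placeholders:
mongodb_settings = {
    'ServerName': 'mongodb.example.com',
    'Port': 27017,
    'DatabaseName': 'mydb',
    'AuthType': 'password',
    'AuthMechanism': 'scram_sha_1',  # typical for MongoDB 3.x and later
    'AuthSource': 'admin',
    'Username': 'dms_user',
    'Password': 'my-password',
    'NestingLevel': 'one',           # table mode
    'DocsToInvestigate': '1000',     # the default
}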
KinesisSettings (dict) --
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
KafkaSettings (dict) --
Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345".
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
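A sketch of KafkaSettings using the defaults called out above; the broker address and topic are placeholders:
kafka_settings = {
    'Broker': 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',
    'Topic': 'my-migration-topic',
    'MessageFormat': 'json-unformatted',  # one line per record
    'IncludeTransactionDetails': True,
    'MessageMaxBytes': 1000000,           # the default
}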
ElasticsearchSettings (dict) --
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration Service User Guide .
ServiceAccessRoleArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) -- [REQUIRED]
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1,000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying Endpoint Settings for Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) -- [REQUIRED]
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName.
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
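A sketch of NeptuneSettings with the documented defaults made explicit; the ARN and bucket names are placeholders:
neptune_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/my-neptune-role',
    'S3BucketName': 'my-staging-bucket',
    'S3BucketFolder': 'neptune-staging/',
    'ErrorRetryDuration': 250,  # milliseconds, the default
    'MaxRetryCount': 5,         # the default
    'IamAuthEnabled': True,
}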
RedshiftSettings (dict) --
Provides information that defines an Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide.
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
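The three settings added in this release (CaseSensitiveNames, CompUpdate, and ExplicitIds) can be combined in a single RedshiftSettings dict; a sketch with placeholder connection values:
redshift_settings = {
    'ServerName': 'my-cluster.example.us-east-1.redshift.amazonaws.com',
    'Port': 5439,
    'DatabaseName': 'dev',
    'Username': 'awsuser',
    'Password': 'my-password',
    'CaseSensitiveNames': True,  # cluster uses case sensitive schema names
    'CompUpdate': False,         # keep existing column encodings
    'ExplicitIds': True,         # load IDENTITY columns from source values
}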
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. This parameter accepts a value from 1 through 64. It defaults to 10. For more information, see Multipart upload overview.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars. The default is "?".
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. The default is auto. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1,000 (buffer size is 1,000 KB).
PostgreSQLSettings (dict) --
Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to false, you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true, the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
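A sketch of PostgreSQLSettings that names a previously created logical replication slot; the connection values are placeholders:
postgresql_settings = {
    'ServerName': 'pg.example.com',
    'Port': 5432,
    'DatabaseName': 'mydb',
    'Username': 'dms_user',
    'Password': 'my-password',
    'SlotName': 'my_replication_slot',  # previously created logical slot
}

# A CDC task can then start from a native checkpoint, for example via the
# CdcStartPosition parameter of start_replication_task.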
MySQLSettings (dict) --
Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database by specifying how many threads to use to load the data. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
OracleSettings (dict) --
Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SybaseSettings (dict) --
Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
MicrosoftSQLServerSettings (dict) --
Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
IBMDb2Settings (dict) --
Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.
ResourceIdentifier (string) --
A friendly name for the resource identifier at the end of the EndpointArn response parameter that is returned in the created Endpoint object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1 . For example, this value might result in the EndpointArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1 . If you don't specify a ResourceIdentifier value, AWS DMS generates a default identifier value for the end of EndpointArn .
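Putting the new ResourceIdentifier parameter together with a minimal endpoint definition; this is a sketch, and every identifier and connection value below is a placeholder:
import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='example-app-endpoint',
    EndpointType='source',
    EngineName='postgres',
    ServerName='pg.example.com',
    Port=5432,
    DatabaseName='mydb',
    Username='dms_user',
    Password='my-password',
    # Friendly name that forms the trailing part of the returned EndpointArn
    ResourceIdentifier='Example-App-ARN1',
)
print(response['Endpoint']['EndpointArn'])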
Return type: dict
Response Syntax
{
    'Endpoint': {
        'EndpointIdentifier': 'string',
        'EndpointType': 'source'|'target',
        'EngineName': 'string',
        'EngineDisplayName': 'string',
        'Username': 'string',
        'ServerName': 'string',
        'Port': 123,
        'DatabaseName': 'string',
        'ExtraConnectionAttributes': 'string',
        'Status': 'string',
        'KmsKeyId': 'string',
        'EndpointArn': 'string',
        'CertificateArn': 'string',
        'SslMode': 'none'|'require'|'verify-ca'|'verify-full',
        'ServiceAccessRoleArn': 'string',
        'ExternalTableDefinition': 'string',
        'ExternalId': 'string',
        'DynamoDbSettings': {'ServiceAccessRoleArn': 'string'},
        'S3Settings': {'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE'},
        'DmsTransferSettings': {'ServiceAccessRoleArn': 'string', 'BucketName': 'string'},
        'MongoDbSettings': {'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string'},
        'KinesisSettings': {'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False},
        'KafkaSettings': {'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False},
        'ElasticsearchSettings': {'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123},
        'NeptuneSettings': {'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False},
        'RedshiftSettings': {'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 'ReplaceChars': 'string', 'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123},
        'PostgreSQLSettings': {'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string'},
        'MySQLSettings': {'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string'},
        'OracleSettings': {'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string'},
        'SybaseSettings': {'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string'},
        'MicrosoftSQLServerSettings': {'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False},
        'IBMDb2Settings': {'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string'}
    }
}
Response Structure
(dict) --
Endpoint (dict) --
The endpoint that was created.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , and "neptune" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint in a cross-account scenario.
DynamoDbSettings (dict) --
The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/ . If this parameter isn't specified, then the path used is schema_name/table_name/ .
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
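The same SSE_KMS configuration can be expressed through boto3. A minimal sketch follows; the endpoint identifier, role ARN, bucket name, and KMS key ID are all placeholders:

import boto3

dms = boto3.client("dms")

# Hypothetical values throughout; substitute your own role, bucket, and key.
response = dms.create_endpoint(
    EndpointIdentifier="s3-target-endpoint",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",
        "BucketFolder": "migrated-data",
        "BucketName": "my-dms-bucket",
        "EncryptionMode": "sse-kms",
        "ServerSideEncryptionKmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    },
)
print(response["Endpoint"]["EndpointArn"])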
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
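To make the interaction between IncludeOpForFullLoad and CdcInsertsOnly concrete, here is a sketch of an S3Settings value (the role ARN and bucket name are placeholders) that annotates both full-load and CDC INSERT rows with I in the first .csv field:

# Placeholder role and bucket; pass this dict as S3Settings to create_endpoint.
s3_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",
    "BucketName": "my-dms-bucket",
    "DataFormat": "csv",
    "IncludeOpForFullLoad": True,  # full-load rows carry an I annotation
    "CdcInsertsOnly": True,        # CDC writes INSERTs only, also annotated with I
}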
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true , this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false . For more information about date-based folder partitioning, see Using date-based folder partitioning .
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD . Use this parameter when DatePartitionEnabled is set to true .
DatePartitionDelimiter (string) --
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH . Use this parameter when DatePartitionEnabled is set to true .
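Taken together, the three date-partitioning settings shape the S3 folder layout. A sketch with the defaults spelled out (role and bucket are placeholders):

# With these values, a commit on 2020-10-15 lands under .../schema/table/20201015/.
s3_settings = {
    "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",  # placeholder
    "BucketName": "my-dms-bucket",                                         # placeholder
    "DatePartitionEnabled": True,
    "DatePartitionSequence": "YYYYMMDD",  # the default
    "DatePartitionDelimiter": "SLASH",    # the default
}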
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. To compress the target files, set this value to GZIP. To keep the files uncompressed, either don't use this value or set it to NONE (the default).
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no" , user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
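A minimal sketch of a MongoDB source endpoint in table mode; every connection value below is a placeholder:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="mongodb-source",
    EndpointType="source",
    EngineName="mongodb",
    MongoDbSettings={
        "ServerName": "mongo.example.com",  # placeholder host
        "Port": 27017,
        "DatabaseName": "appdb",
        "AuthType": "password",
        "AuthMechanism": "scram_sha_1",     # MongoDB 3.x default
        "AuthSource": "admin",              # the default auth database
        "Username": "dms_user",
        "Password": "example-password",
        "NestingLevel": "one",              # table mode
        "DocsToInvestigate": "1000",        # the default sampling depth
    },
)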
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
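A sketch of a Kinesis target endpoint that enables the partition-distribution behavior described above; the stream and role ARNs are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="kinesis-target",
    EndpointType="target",
    EngineName="kinesis",
    KinesisSettings={
        "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/dms-stream",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-kinesis-role",
        "MessageFormat": "json",
        "PartitionIncludeSchemaTable": True,  # spread hot primary keys across shards
    },
)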
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
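An equivalent sketch for a Kafka target endpoint; the broker address and topic name are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="kafka-target",
    EndpointType="target",
    EngineName="kafka",
    KafkaSettings={
        "Broker": "broker.example.com:9092",  # placeholder broker-hostname-or-ip:port
        "Topic": "dms-migration-topic",
        "MessageFormat": "json",
        "MessageMaxBytes": 1000000,  # the default
    },
)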
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1,000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
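A sketch of an Elasticsearch target endpoint with the two error-handling knobs set explicitly; the domain URI and role ARN are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="es-target",
    EndpointType="target",
    EngineName="elasticsearch",
    ElasticsearchSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-es-role",
        "EndpointUri": "https://search-mydomain.us-east-1.es.amazonaws.com",
        "FullLoadErrorPercentage": 10,  # stop the full load after 10% write failures
        "ErrorRetryDuration": 300,      # retry failed API requests for up to 5 minutes
    },
)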
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
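A sketch of a Neptune target endpoint with IAM authorization enabled; the cluster address, bucket, and role are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="neptune-target",
    EndpointType="target",
    EngineName="neptune",
    ServerName="my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com",
    Port=8182,
    NeptuneSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-neptune-role",
        "S3BucketName": "my-neptune-staging-bucket",  # interim storage for .csv graph data
        "S3BucketFolder": "graph-data",
        "IamAuthEnabled": True,  # requires the matching policy on the service role
    },
)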
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide .
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true , Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW . If you set CompUpdate to false , automatic compression is disabled and existing column encodings aren't changed. The default is true .
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. This parameter accepts a value from 1 through 64. It defaults to 10. For more information, see Multipart upload overview .
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars , substituting the specified characters instead. The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
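Since this release adds CaseSensitiveNames , CompUpdate , and ExplicitIds to RedshiftSettings , here is a sketch of a Redshift target endpoint that exercises all three; every connection value is a placeholder:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="redshift-target",
    EndpointType="target",
    EngineName="redshift",
    RedshiftSettings={
        "ServerName": "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        "Port": 5439,
        "DatabaseName": "analytics",
        "Username": "dms_user",
        "Password": "example-password",
        "CaseSensitiveNames": True,  # cluster enforces case-sensitive schema names
        "CompUpdate": False,         # keep existing column encodings
        "ExplicitIds": True,         # load IDENTITY columns from the source values
    },
)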
PostgreSQLSettings (dict) --
The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to N , you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
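A sketch of a PostgreSQL source endpoint that reuses a pre-created logical replication slot for CDC; host, credentials, and slot name are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="postgres-source",
    EndpointType="source",
    EngineName="postgres",
    PostgreSQLSettings={
        "ServerName": "pg.example.com",
        "Port": 5432,
        "DatabaseName": "appdb",
        "Username": "dms_user",
        "Password": "example-password",
        "SlotName": "dms_slot",            # previously created logical replication slot
        "FailTasksOnLobTruncation": True,  # fail rather than silently truncate LOBs
    },
)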
MySQLSettings (dict) --
The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless if the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
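As a sketch of how TargetDbType steers table placement, the following MySQLSettings value (all names hypothetical) migrates every source schema into the single database consolidated when passed to create_endpoint :

# Pass as MySQLSettings=mysql_settings in a create_endpoint call; values are placeholders.
mysql_settings = {
    "TargetDbType": "specific-database",  # one target database for all source tables
    "DatabaseName": "consolidated",
    "ParallelLoadThreads": 2,             # each thread opens its own connection
    "MaxFileSize": 512,                   # KB per intermediate .csv file
}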
OracleSettings (dict) --
The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
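The Binary Reader attributes above usually travel together. A sketch for an Amazon RDS for Oracle source follows; the path prefixes are the values commonly cited in the DMS user guide for RDS, but treat them as assumptions and verify against your instance:

# Pass as OracleSettings=oracle_settings in a create_endpoint call; connection values are placeholders.
oracle_settings = {
    "ServerName": "oracle.example.com",
    "Port": 1521,
    "DatabaseName": "ORCL",
    "Username": "dms_user",
    "Password": "example-password",
    "AccessAlternateDirectly": False,             # don't read redo logs by direct file access
    "UseAlternateFolderForOnline": True,          # apply the prefix replacement to online logs
    "ReplacePathPrefix": True,
    "OraclePathPrefix": "/rdsdbdata/db/ORCL_A/",  # assumed default Oracle root on RDS
    "UsePathPrefix": "/rdsdbdata/log/",           # assumed replacement prefix on RDS
}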
SybaseSettings (dict) --
The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
MicrosoftSQLServerSettings (dict) --
The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
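A sketch of a SQL Server source endpoint that reads changes from log backups only and uses the sp_repldone safeguard method described above; connection values are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="sqlserver-source",
    EndpointType="source",
    EngineName="sqlserver",
    MicrosoftSQLServerSettings={
        "ServerName": "mssql.example.com",
        "Port": 1433,
        "DatabaseName": "appdb",
        "Username": "dms_user",
        "Password": "example-password",
        "ReadBackupOnly": True,  # avoid reading the active transaction log
        "SafeguardPolicy": "exclusive-automatic-truncation",  # sp_repldone, single task only
    },
)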
IBMDb2Settings (dict) --
The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.
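A sketch of a Db2 LUW source endpoint that resumes CDC from a known log sequence number; the LSN and connection values are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_endpoint(
    EndpointIdentifier="db2-source",
    EndpointType="source",
    EngineName="db2",
    IBMDb2Settings={
        "ServerName": "db2.example.com",
        "Port": 50000,
        "DatabaseName": "SAMPLE",
        "Username": "dms_user",
        "Password": "example-password",
        "SetDataCaptureChanges": True,  # the default; enables ongoing replication
        "CurrentLsn": "0100000000000000000000000000000000000000",  # placeholder LSN
        "MaxKBytesPerRead": 64,  # the default read size
    },
)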
{'ResourceIdentifier': 'string'}
Creates the replication instance using the specified parameters.
AWS DMS requires that your account have certain roles with appropriate permissions before you can create a replication instance. For information on the required roles, see Creating the IAM Roles to Use With the AWS CLI and AWS DMS API . For information on the required permissions, see IAM Permissions Needed to Use AWS DMS .
See also: AWS API Documentation
Request Syntax
client.create_replication_instance( ReplicationInstanceIdentifier='string', AllocatedStorage=123, ReplicationInstanceClass='string', VpcSecurityGroupIds=[ 'string', ], AvailabilityZone='string', ReplicationSubnetGroupIdentifier='string', PreferredMaintenanceWindow='string', MultiAZ=True|False, EngineVersion='string', AutoMinorVersionUpgrade=True|False, Tags=[ { 'Key': 'string', 'Value': 'string' }, ], KmsKeyId='string', PubliclyAccessible=True|False, DnsNameServers='string', ResourceIdentifier='string' )
string
[REQUIRED]
The replication instance identifier. This parameter is stored as a lowercase string.
Constraints:
Must contain 1-63 alphanumeric characters or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: myrepinstance
integer
The amount of storage (in gigabytes) to be initially allocated for the replication instance.
string
[REQUIRED]
The compute and memory capacity of the replication instance as defined for the specified replication instance class. For example, to specify the instance class dms.c4.large, set this parameter to "dms.c4.large" .
For more information on the settings and capacities for the available replication instance classes, see Selecting the right AWS DMS replication instance for your migration .
list
Specifies the VPC security group to be used with the replication instance. The VPC security group must work with the VPC containing the replication instance.
(string) --
string
The Availability Zone where the replication instance will be created. The default value is a random, system-chosen Availability Zone in the endpoint's AWS Region, for example: us-east-1d
string
A subnet group to associate with the replication instance.
string
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
Default: A 30-minute window selected at random from an 8-hour block of time per AWS Region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun
Constraints: Minimum 30-minute window.
boolean
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
string
The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
boolean
A value that indicates whether minor engine upgrades are applied automatically to the replication instance during the maintenance window. This parameter defaults to true .
Default: true
list
One or more tags to be assigned to the replication instance.
(dict) --
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
AddTagsToResource
ListTagsForResource
RemoveTagsFromResource
Key (string) --
A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value (string) --
A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
string
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
boolean
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
string
A list of custom DNS name servers supported for the replication instance to access your on-premises source or target database. This list overrides the default name servers supported by the replication instance. You can specify a comma-separated list of internet addresses for up to four on-premises DNS name servers. For example: "1.1.1.1,2.2.2.2,3.3.3.3,4.4.4.4"
string
A friendly name for the resource identifier at the end of the ReplicationInstanceArn response parameter that is returned in the created ReplicationInstance object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1 . For example, this value might result in the ReplicationInstanceArn value arn:aws:dms:eu-west-1:012345678901:rep:Example-App-ARN1 . If you don't specify a ResourceIdentifier value, AWS DMS generates a default identifier value for the end of ReplicationInstanceArn .
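A sketch that creates a replication instance with the new ResourceIdentifier parameter so the resulting ARN ends in a friendly name; all identifiers are placeholders:

import boto3

dms = boto3.client("dms")

response = dms.create_replication_instance(
    ReplicationInstanceIdentifier="myrepinstance",
    ReplicationInstanceClass="dms.c4.large",
    AllocatedStorage=50,
    MultiAZ=False,
    PubliclyAccessible=False,
    ResourceIdentifier="Example-App-ARN1",  # yields ...:rep:Example-App-ARN1
)
print(response["ReplicationInstance"]["ReplicationInstanceArn"])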
dict
Response Syntax
{ 'ReplicationInstance': { 'ReplicationInstanceIdentifier': 'string', 'ReplicationInstanceClass': 'string', 'ReplicationInstanceStatus': 'string', 'AllocatedStorage': 123, 'InstanceCreateTime': datetime(2015, 1, 1), 'VpcSecurityGroups': [ { 'VpcSecurityGroupId': 'string', 'Status': 'string' }, ], 'AvailabilityZone': 'string', 'ReplicationSubnetGroup': { 'ReplicationSubnetGroupIdentifier': 'string', 'ReplicationSubnetGroupDescription': 'string', 'VpcId': 'string', 'SubnetGroupStatus': 'string', 'Subnets': [ { 'SubnetIdentifier': 'string', 'SubnetAvailabilityZone': { 'Name': 'string' }, 'SubnetStatus': 'string' }, ] }, 'PreferredMaintenanceWindow': 'string', 'PendingModifiedValues': { 'ReplicationInstanceClass': 'string', 'AllocatedStorage': 123, 'MultiAZ': True|False, 'EngineVersion': 'string' }, 'MultiAZ': True|False, 'EngineVersion': 'string', 'AutoMinorVersionUpgrade': True|False, 'KmsKeyId': 'string', 'ReplicationInstanceArn': 'string', 'ReplicationInstancePublicIpAddress': 'string', 'ReplicationInstancePrivateIpAddress': 'string', 'ReplicationInstancePublicIpAddresses': [ 'string', ], 'ReplicationInstancePrivateIpAddresses': [ 'string', ], 'PubliclyAccessible': True|False, 'SecondaryAvailabilityZone': 'string', 'FreeUntil': datetime(2015, 1, 1), 'DnsNameServers': 'string' } }
Response Structure
(dict) --
ReplicationInstance (dict) --
The replication instance that was created.
ReplicationInstanceIdentifier (string) --
The replication instance identifier is a required parameter. This parameter is stored as a lowercase string.
Constraints:
Must contain 1-63 alphanumeric characters or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Example: myrepinstance
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance as defined for the specified replication instance class. It is a required parameter, although a default value is pre-selected in the DMS console.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right AWS DMS replication instance for your migration .
ReplicationInstanceStatus (string) --
The status of the replication instance. The possible return values include:
"available"
"creating"
"deleted"
"deleting"
"failed"
"modifying"
"upgrading"
"rebooting"
"resetting-master-credentials"
"storage-full"
"incompatible-credentials"
"incompatible-network"
"maintenance"
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
InstanceCreateTime (datetime) --
The time the replication instance was created.
VpcSecurityGroups (list) --
The VPC security group for the instance.
(dict) --
Describes the status of a security group associated with the virtual private cloud (VPC) hosting your replication and DB instances.
VpcSecurityGroupId (string) --
The VPC security group ID.
Status (string) --
The status of the VPC security group.
AvailabilityZone (string) --
The Availability Zone for the instance.
ReplicationSubnetGroup (dict) --
The subnet group for the replication instance.
ReplicationSubnetGroupIdentifier (string) --
The identifier of the replication instance subnet group.
ReplicationSubnetGroupDescription (string) --
A description for the replication subnet group.
VpcId (string) --
The ID of the VPC.
SubnetGroupStatus (string) --
The status of the subnet group.
Subnets (list) --
The subnets that are in the subnet group.
(dict) --
In response to a request by the DescribeReplicationSubnetGroups operation, this object identifies a subnet by its given Availability Zone, subnet identifier, and status.
SubnetIdentifier (string) --
The subnet identifier.
SubnetAvailabilityZone (dict) --
The Availability Zone of the subnet.
Name (string) --
The name of the Availability Zone.
SubnetStatus (string) --
The status of the subnet.
PreferredMaintenanceWindow (string) --
The maintenance window times for the replication instance. Any pending upgrades to the replication instance are performed during this time.
PendingModifiedValues (dict) --
The pending modification values.
ReplicationInstanceClass (string) --
The compute and memory capacity of the replication instance as defined for the specified replication instance class.
For more information on the settings and capacities for the available replication instance classes, see Selecting the right AWS DMS replication instance for your migration .
AllocatedStorage (integer) --
The amount of storage (in gigabytes) that is allocated for the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
MultiAZ (boolean) --
Specifies whether the replication instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the Multi-AZ parameter is set to true .
EngineVersion (string) --
The engine version number of the replication instance.
If an engine version number is not specified when a replication instance is created, the default is the latest engine version available.
When modifying a major engine version of an instance, also set AllowMajorVersionUpgrade to true .
AutoMinorVersionUpgrade (boolean) --
Boolean value indicating if minor version upgrades will be automatically applied to the instance.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the data on the replication instance.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
ReplicationInstancePublicIpAddress (string) --
The public IP address of the replication instance.
ReplicationInstancePrivateIpAddress (string) --
The private IP address of the replication instance.
ReplicationInstancePublicIpAddresses (list) --
One or more public IP addresses for the replication instance.
(string) --
ReplicationInstancePrivateIpAddresses (list) --
One or more private IP addresses for the replication instance.
(string) --
PubliclyAccessible (boolean) --
Specifies the accessibility options for the replication instance. A value of true represents an instance with a public IP address. A value of false represents an instance with a private IP address. The default value is true .
SecondaryAvailabilityZone (string) --
The Availability Zone of the standby replication instance in a Multi-AZ deployment.
FreeUntil (datetime) --
The expiration date of the free replication instance that is part of the Free DMS program.
DnsNameServers (string) --
The DNS name servers supported for the replication instance to access your on-premise source or target database.
{'ResourceIdentifier': 'string'}
Creates a replication task using the specified parameters.
See also: AWS API Documentation
Request Syntax
client.create_replication_task( ReplicationTaskIdentifier='string', SourceEndpointArn='string', TargetEndpointArn='string', ReplicationInstanceArn='string', MigrationType='full-load'|'cdc'|'full-load-and-cdc', TableMappings='string', ReplicationTaskSettings='string', CdcStartTime=datetime(2015, 1, 1), CdcStartPosition='string', CdcStopPosition='string', Tags=[ { 'Key': 'string', 'Value': 'string' }, ], TaskData='string', ResourceIdentifier='string' )
string
[REQUIRED]
An identifier for the replication task.
Constraints:
Must contain 1-255 alphanumeric characters or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
string
[REQUIRED]
An Amazon Resource Name (ARN) that uniquely identifies the source endpoint.
string
[REQUIRED]
An Amazon Resource Name (ARN) that uniquely identifies the target endpoint.
string
[REQUIRED]
The Amazon Resource Name (ARN) of a replication instance.
string
[REQUIRED]
The migration type. Valid values: full-load | cdc | full-load-and-cdc
string
[REQUIRED]
The table mappings for the task, in JSON format. For more information, see Using Table Mapping to Specify Task Settings in the AWS Database Migration Service User Guide.
string
Overall settings for the task, in JSON format. For more information, see Specifying Task Settings for AWS Database Migration Service Tasks in the AWS Database Migration Service User Guide.
datetime
Indicates the start time for a change data capture (CDC) operation. Use either CdcStartTime or CdcStartPosition to specify when you want a CDC operation to start. Specifying both values results in an error.
Timestamp Example: --cdc-start-time “2018-03-08T12:12:12”
string
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want a CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
Note
When you use this task setting with a source PostgreSQL database, a logical replication slot should already be created and associated with the source endpoint. You can verify this by setting the slotName extra connection attribute to the name of this logical replication slot. For more information, see Extra Connection Attributes When Using PostgreSQL as a Source for AWS DMS .
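As a hedged illustration of the note above, one way to associate an existing logical replication slot with a PostgreSQL source endpoint is through the slotName extra connection attribute; the endpoint ARN and slot name here are placeholders.

import boto3

dms = boto3.client('dms')

# Placeholder ARN and slot name; the slot must already exist on the source database.
dms.modify_endpoint(
    EndpointArn='arn:aws:dms:eu-west-1:012345678901:endpoint:SOURCE',
    ExtraConnectionAttributes='slotName=example_slot',
)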
string
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
list
One or more tags to be assigned to the replication task.
(dict) --
A user-defined key-value pair that describes metadata added to an AWS DMS resource and that is used by operations such as the following:
AddTagsToResource
ListTagsForResource
RemoveTagsFromResource
Key (string) --
A key is the required name of the tag. The string value can be 1-128 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
Value (string) --
A value is the optional value of the tag. The string value can be 1-256 Unicode characters in length and can't be prefixed with "aws:" or "dms:". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regular expressions: "^([\p{L}\p{Z}\p{N}_.:/=+\-]*)$").
string
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration Service User Guide.
string
A friendly name for the resource identifier at the end of the ReplicationTaskArn response parameter that is returned in the created ReplicationTask object. The value for this parameter can have up to 31 characters. It can contain only ASCII letters, digits, and hyphen ('-'). Also, it can't end with a hyphen or contain two consecutive hyphens, and can only begin with a letter, such as Example-App-ARN1 . For example, this value might result in the ReplicationTaskArn value arn:aws:dms:eu-west-1:012345678901:task:Example-App-ARN1 . If you don't specify a ResourceIdentifier value, AWS DMS generates a default identifier value for the end of ReplicationTaskArn .
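Putting the parameters above together, a minimal hedged boto3 call might look like the following; every ARN, identifier, and the table-mapping rule are placeholders.

import json

import boto3

dms = boto3.client('dms')

# A single selection rule that includes every schema and table; adjust as needed.
table_mappings = {
    'rules': [{
        'rule-type': 'selection',
        'rule-id': '1',
        'rule-name': '1',
        'object-locator': {'schema-name': '%', 'table-name': '%'},
        'rule-action': 'include',
    }]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier='example-task',
    SourceEndpointArn='arn:aws:dms:eu-west-1:012345678901:endpoint:SOURCE',
    TargetEndpointArn='arn:aws:dms:eu-west-1:012345678901:endpoint:TARGET',
    ReplicationInstanceArn='arn:aws:dms:eu-west-1:012345678901:rep:INSTANCE',
    MigrationType='full-load-and-cdc',
    TableMappings=json.dumps(table_mappings),
    CdcStopPosition='server_time:2018-02-09T12:12:12',
    ResourceIdentifier='Example-App-ARN1',   # friendly suffix for the task ARN
)
print(response['ReplicationTask']['ReplicationTaskArn'])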
dict
Response Syntax
{ 'ReplicationTask': { 'ReplicationTaskIdentifier': 'string', 'SourceEndpointArn': 'string', 'TargetEndpointArn': 'string', 'ReplicationInstanceArn': 'string', 'MigrationType': 'full-load'|'cdc'|'full-load-and-cdc', 'TableMappings': 'string', 'ReplicationTaskSettings': 'string', 'Status': 'string', 'LastFailureMessage': 'string', 'StopReason': 'string', 'ReplicationTaskCreationDate': datetime(2015, 1, 1), 'ReplicationTaskStartDate': datetime(2015, 1, 1), 'CdcStartPosition': 'string', 'CdcStopPosition': 'string', 'RecoveryCheckpoint': 'string', 'ReplicationTaskArn': 'string', 'ReplicationTaskStats': { 'FullLoadProgressPercent': 123, 'ElapsedTimeMillis': 123, 'TablesLoaded': 123, 'TablesLoading': 123, 'TablesQueued': 123, 'TablesErrored': 123, 'FreshStartDate': datetime(2015, 1, 1), 'StartDate': datetime(2015, 1, 1), 'StopDate': datetime(2015, 1, 1), 'FullLoadStartDate': datetime(2015, 1, 1), 'FullLoadFinishDate': datetime(2015, 1, 1) }, 'TaskData': 'string' } }
Response Structure
(dict) --
ReplicationTask (dict) --
The replication task that was created.
ReplicationTaskIdentifier (string) --
The user-assigned replication task identifier or name.
Constraints:
Must contain 1-255 alphanumeric characters or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
SourceEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
TargetEndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
ReplicationInstanceArn (string) --
The Amazon Resource Name (ARN) of the replication instance.
MigrationType (string) --
The type of migration.
TableMappings (string) --
Table mappings specified in the task.
ReplicationTaskSettings (string) --
The settings for the replication task.
Status (string) --
The status of the replication task.
LastFailureMessage (string) --
The last error (failure) message generated for the replication task.
StopReason (string) --
The reason the replication task was stopped. This response parameter can return one of the following values:
"STOP_REASON_FULL_LOAD_COMPLETED" – Full-load migration completed.
"STOP_REASON_CACHED_CHANGES_APPLIED" – Change data capture (CDC) load completed.
"STOP_REASON_CACHED_CHANGES_NOT_APPLIED" – In a full-load and CDC migration, the full-load stopped as specified before starting the CDC migration.
"STOP_REASON_SERVER_TIME" – The migration stopped at the specified server time.
ReplicationTaskCreationDate (datetime) --
The date the replication task was created.
ReplicationTaskStartDate (datetime) --
The date the replication task is scheduled to start.
CdcStartPosition (string) --
Indicates when you want a change data capture (CDC) operation to start. Use either CdcStartPosition or CdcStartTime to specify when you want the CDC operation to start. Specifying both values results in an error.
The value can be in date, checkpoint, or LSN/SCN format.
Date Example: --cdc-start-position “2018-03-08T12:12:12”
Checkpoint Example: --cdc-start-position "checkpoint:V1#27#mysql-bin-changelog.157832:1975:-1:2002:677883278264080:mysql-bin-changelog.157832:1876#0#0#*#0#93"
LSN Example: --cdc-start-position “mysql-bin-changelog.000024:373”
CdcStopPosition (string) --
Indicates when you want a change data capture (CDC) operation to stop. The value can be either server time or commit time.
Server time example: --cdc-stop-position "server_time:2018-02-09T12:12:12"
Commit time example: --cdc-stop-position "commit_time:2018-02-09T12:12:12"
RecoveryCheckpoint (string) --
Indicates the last checkpoint that occurred during a change data capture (CDC) operation. You can provide this value to the CdcStartPosition parameter to start a CDC operation that begins at that checkpoint.
ReplicationTaskArn (string) --
The Amazon Resource Name (ARN) of the replication task.
ReplicationTaskStats (dict) --
The statistics for the task, including elapsed time, tables loaded, and table errors.
FullLoadProgressPercent (integer) --
The percent complete for the full load migration task.
ElapsedTimeMillis (integer) --
The elapsed time of the task, in milliseconds.
TablesLoaded (integer) --
The number of tables loaded for this task.
TablesLoading (integer) --
The number of tables currently loading for this task.
TablesQueued (integer) --
The number of tables queued for this task.
TablesErrored (integer) --
The number of errors that have occurred during this task.
FreshStartDate (datetime) --
The date the replication task was started either with a fresh start or a target reload.
StartDate (datetime) --
The date the replication task was started either with a fresh start or a resume. For more information, see StartReplicationTaskType .
StopDate (datetime) --
The date the replication task was stopped.
FullLoadStartDate (datetime) --
The date the replication task full load was started.
FullLoadFinishDate (datetime) --
The date the replication task full load was completed.
TaskData (string) --
Supplemental information that the task requires to migrate the data for certain source and target endpoints. For more information, see Specifying Supplemental Data for Task Settings in the AWS Database Migration Service User Guide.
{'Endpoint': {'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}}}
Deletes the specified endpoint.
Note
All tasks associated with the endpoint must be deleted before you can delete the endpoint.
See also: AWS API Documentation
Request Syntax
client.delete_endpoint( EndpointArn='string' )
string
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
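A short hedged sketch of the call; the ARN is a placeholder, and the waiter shown is the one boto3 exposes for this operation in recent SDK versions.

import boto3

dms = boto3.client('dms')

endpoint_arn = 'arn:aws:dms:eu-west-1:012345678901:endpoint:EXAMPLE'  # placeholder

# Any tasks that use this endpoint must already be deleted (see the note above).
dms.delete_endpoint(EndpointArn=endpoint_arn)

# Optionally block until the deletion finishes.
dms.get_waiter('endpoint_deleted').wait(
    Filters=[{'Name': 'endpoint-arn', 'Values': [endpoint_arn]}]
)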
dict
Response Syntax
{ 'Endpoint': { 'EndpointIdentifier': 'string', 'EndpointType': 'source'|'target', 'EngineName': 'string', 'EngineDisplayName': 'string', 'Username': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'ExtraConnectionAttributes': 'string', 'Status': 'string', 'KmsKeyId': 'string', 'EndpointArn': 'string', 'CertificateArn': 'string', 'SslMode': 'none'|'require'|'verify-ca'|'verify-full', 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'ExternalId': 'string', 'DynamoDbSettings': { 'ServiceAccessRoleArn': 'string' }, 'S3Settings': { 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE' }, 'DmsTransferSettings': { 'ServiceAccessRoleArn': 'string', 'BucketName': 'string' }, 'MongoDbSettings': { 'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string' }, 'KinesisSettings': { 'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False }, 'KafkaSettings': { 'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False }, 'ElasticsearchSettings': { 'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123 }, 'NeptuneSettings': { 'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False }, 'RedshiftSettings': { 'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 'ReplaceChars': 'string', 
'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123 }, 'PostgreSQLSettings': { 'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string' }, 'MySQLSettings': { 'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string' }, 'OracleSettings': { 'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string' }, 'SybaseSettings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string' }, 'MicrosoftSQLServerSettings': { 'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False }, 'IBMDb2Settings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string' } } }
Response Structure
(dict) --
Endpoint (dict) --
The endpoint that was deleted.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , and "neptune" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings (dict) --
The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/ . If this parameter isn't specified, then the path used is schema_name/table_name/ .
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
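The same settings expressed through boto3 might look like this hedged sketch; the role ARN, bucket, and key ID are placeholders.

import boto3

dms = boto3.client('dms')

# Placeholder values throughout; EncryptionMode uses the lowercase enum from the request syntax.
dms.create_endpoint(
    EndpointIdentifier='example-s3-target',
    EndpointType='target',
    EngineName='s3',
    S3Settings={
        'ServiceAccessRoleArn': 'arn:aws:iam::012345678901:role/example-dms-s3-role',
        'BucketFolder': 'example-folder',
        'BucketName': 'example-bucket',
        'EncryptionMode': 'sse-kms',
        'ServerSideEncryptionKmsKeyId': 'arn:aws:kms:eu-west-1:012345678901:key/EXAMPLE-KEY-ID',
    },
)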
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
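As a hedged sketch of how these two flags combine for .csv output, a settings fragment might look like this; the role ARN and bucket name are placeholders.

# Full-load rows are annotated with I, and CDC output carries INSERTs only.
s3_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::012345678901:role/example-dms-s3-role',  # placeholder
    'BucketName': 'example-bucket',  # placeholder
    'DataFormat': 'csv',
    'IncludeOpForFullLoad': True,    # record an I annotation for full-load rows
    'CdcInsertsOnly': True,          # write only INSERTs during CDC
}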
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide. .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true , this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false . For more information about date-based folder partitioning, see Using date-based folder partitioning .
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD . Use this parameter when DatePartitionEnabled is set to true .
DatePartitionDelimiter (string) --
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH . Use this parameter when DatePartitionEnabled is set to true .
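A hedged settings fragment tying the three partitioning parameters together; the role ARN and bucket name are placeholders.

# A commit on 2020-10-15 at hour 00 would yield a folder such as 2020-10-15-00/.
s3_settings = {
    'ServiceAccessRoleArn': 'arn:aws:iam::012345678901:role/example-dms-s3-role',  # placeholder
    'BucketName': 'example-bucket',          # placeholder
    'DatePartitionEnabled': True,            # partition folders by transaction commit date
    'DatePartitionSequence': 'YYYYMMDDHH',   # one of the sequences listed above
    'DatePartitionDelimiter': 'DASH',
}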
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Set it to NONE (the default) or don't use this value to keep the files uncompressed.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no" , user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
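A hedged sketch of a MongoDB source endpoint in table mode using the settings above; the server, credentials, and database names are placeholders.

import boto3

dms = boto3.client('dms')

dms.create_endpoint(
    EndpointIdentifier='example-mongodb-source',
    EndpointType='source',
    EngineName='mongodb',
    MongoDbSettings={
        'ServerName': 'mongodb.example.com',   # placeholder host
        'Port': 27017,
        'DatabaseName': 'exampledb',
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',        # default for MongoDB 3.x or later
        'AuthSource': 'admin',
        'Username': 'example_user',
        'Password': 'example_password',
        'NestingLevel': 'one',                 # table mode
        'DocsToInvestigate': '1000',           # documents sampled to infer table structure
    },
)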
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
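A hedged sketch of a Kafka target endpoint using the settings above; the broker address and topic are placeholders.

import boto3

dms = boto3.client('dms')

dms.create_endpoint(
    EndpointIdentifier='example-kafka-target',
    EndpointType='target',
    EngineName='kafka',
    KafkaSettings={
        'Broker': 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',  # placeholder
        'Topic': 'example-migration-topic',                           # placeholder
        'MessageFormat': 'json-unformatted',  # one JSON record per line
        'IncludePartitionValue': True,
        'MessageMaxBytes': 1000000,           # the default maximum record size
    },
)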
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
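A hedged sketch of a Neptune target endpoint combining the settings above; the role ARN and staging bucket are placeholders.

import boto3

dms = boto3.client('dms')

dms.create_endpoint(
    EndpointIdentifier='example-neptune-target',
    EndpointType='target',
    EngineName='neptune',
    NeptuneSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::012345678901:role/example-neptune-role',  # placeholder
        'S3BucketName': 'example-staging-bucket',  # placeholder
        'S3BucketFolder': 'neptune-staging',
        'MaxFileSize': 1048576,    # KB staged before each bulk load (the default)
        'MaxRetryCount': 5,        # bulk-load retries before raising an error
        'IamAuthEnabled': True,    # requires the matching IAM policy on the service role
    },
)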
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide .
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true , Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW . If you set CompUpdate to false , automatic compression is disabled and existing column encodings aren't changed. The default is true .
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
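CaseSensitiveNames, CompUpdate, and ExplicitIds are the fields added to RedshiftSettings in this release; a hedged fragment with placeholder connection values shows them together.

redshift_settings = {
    'ServerName': 'example-cluster.abc123.eu-west-1.redshift.amazonaws.com',  # placeholder
    'Port': 5439,
    'DatabaseName': 'exampledb',      # placeholder
    'Username': 'example_user',       # placeholder
    'Password': 'example_password',   # placeholder
    'CaseSensitiveNames': True,       # preserve case-sensitive schema names
    'CompUpdate': False,              # keep existing column encodings
    'ExplicitIds': True,              # load explicit values into IDENTITY columns (full load only)
}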
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview .
FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
PostgreSQLSettings (dict) --
The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to false , you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to Limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
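A hedged sketch of a PostgreSQL source endpoint that names a pre-created replication slot; the connection values and slot name are placeholders.

import boto3

dms = boto3.client('dms')

dms.create_endpoint(
    EndpointIdentifier='example-postgres-source',
    EndpointType='source',
    EngineName='postgres',
    PostgreSQLSettings={
        'ServerName': 'postgres.example.com',  # placeholder host
        'Port': 5432,
        'DatabaseName': 'exampledb',
        'Username': 'example_user',
        'Password': 'example_password',
        'SlotName': 'example_slot',            # enables native CDC start points via CdcStartPosition
        'FailTasksOnLobTruncation': True,
    },
)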
MySQLSettings (dict) --
The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
OracleSettings (dict) --
The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archive redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide.
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
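A hedged sketch of an Oracle source endpoint configured for Binary Reader with ASM, following the AsmPassword description above: the ASM password travels as the second element of the comma-separated Password value. Every name, address, and credential here is a hypothetical placeholder.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='oracle-source-example',   # hypothetical identifier
    EndpointType='source',
    EngineName='oracle',
    Username='dms_user',
    Password='oracle-password,asm-password',      # second element supplies AsmPassword
    ServerName='oracle.example.com',              # hypothetical host
    Port=1521,
    DatabaseName='ORCL',
    OracleSettings={
        'AsmServer': 'asm.example.com:1521/+ASM', # hypothetical ASM service address
        'AsmUser': 'asm_user',
        'ParallelAsmReadThreads': 2,              # the default; can be raised to 8
        'ReadAheadBlocks': 1000                   # the default; can be raised to 200,000
    }
)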
SybaseSettings (dict) --
The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
MicrosoftSQLServerSettings (dict) --
The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
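To make the SafeguardPolicy trade-off above concrete, here is a hedged sketch of a SQL Server source endpoint that uses the sp_repldone method ('exclusive-automatic-truncation') and reads only from log backups. Names and credentials are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='sqlserver-source-example',  # hypothetical identifier
    EndpointType='source',
    EngineName='sqlserver',
    MicrosoftSQLServerSettings={
        'ServerName': 'mssql.example.com',          # hypothetical host
        'Port': 1433,
        'DatabaseName': 'SalesDb',                  # hypothetical database
        'Username': 'dms_user',
        'Password': 'example-password',
        'SafeguardPolicy': 'exclusive-automatic-truncation',  # sp_repldone method; one DMS task only
        'ReadBackupOnly': True                      # read changes from log backups to limit TLOG growth
    }
)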
IBMDb2Settings (dict) --
The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.
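The IBMDb2Settings fields above can be combined into a CDC-oriented source endpoint, as in the following hedged sketch; the LSN, host, and credentials are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='db2-source-example',    # hypothetical identifier
    EndpointType='source',
    EngineName='db2',
    IBMDb2Settings={
        'ServerName': 'db2.example.com',        # hypothetical host
        'Port': 50000,
        'DatabaseName': 'SAMPLE',
        'Username': 'dms_user',
        'Password': 'example-password',
        'SetDataCaptureChanges': True,          # enable ongoing replication (CDC)
        'CurrentLsn': '0100000000000000',       # hypothetical LSN to start replication from
        'MaxKBytesPerRead': 64                  # the default read size
    }
)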
{'Endpoints': {'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}}}
Returns information about the endpoints for your account in the current region.
See also: AWS API Documentation
Request Syntax
client.describe_endpoints( Filters=[ { 'Name': 'string', 'Values': [ 'string', ] }, ], MaxRecords=123, Marker='string' )
list
Filters applied to the endpoints.
Valid filter names: endpoint-arn | endpoint-type | endpoint-id | engine-name
(dict) --
Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe* call or similar operation. Filters are used as an optional parameter to the following APIs.
Name (string) -- [REQUIRED]
The name of the filter as specified for a Describe* or similar operation.
Values (list) -- [REQUIRED]
The filter value, which can specify one or more values used to narrow the returned results.
(string) --
integer
The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.
Default: 100
Constraints: Minimum 20, maximum 100.
string
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
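Putting Filters, MaxRecords, and Marker together, a minimal sketch of listing endpoints by engine name with Marker-based pagination might look like the following; the filter name comes from the list above, and the engine value is just an example.

import boto3

dms = boto3.client('dms')

marker = None
while True:
    kwargs = {
        'Filters': [{'Name': 'engine-name', 'Values': ['postgres']}],
        'MaxRecords': 50,  # must be between 20 and 100
    }
    if marker:
        kwargs['Marker'] = marker  # resume where the previous page ended
    page = dms.describe_endpoints(**kwargs)
    for endpoint in page['Endpoints']:
        print(endpoint['EndpointIdentifier'], endpoint['Status'])
    marker = page.get('Marker')
    if not marker:  # no marker means there are no further pages
        break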
dict
Response Syntax
{ 'Marker': 'string', 'Endpoints': [ { 'EndpointIdentifier': 'string', 'EndpointType': 'source'|'target', 'EngineName': 'string', 'EngineDisplayName': 'string', 'Username': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'ExtraConnectionAttributes': 'string', 'Status': 'string', 'KmsKeyId': 'string', 'EndpointArn': 'string', 'CertificateArn': 'string', 'SslMode': 'none'|'require'|'verify-ca'|'verify-full', 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'ExternalId': 'string', 'DynamoDbSettings': { 'ServiceAccessRoleArn': 'string' }, 'S3Settings': { 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE' }, 'DmsTransferSettings': { 'ServiceAccessRoleArn': 'string', 'BucketName': 'string' }, 'MongoDbSettings': { 'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string' }, 'KinesisSettings': { 'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False }, 'KafkaSettings': { 'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False }, 'ElasticsearchSettings': { 'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123 }, 'NeptuneSettings': { 'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False }, 'RedshiftSettings': { 'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 
'ReplaceChars': 'string', 'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123 }, 'PostgreSQLSettings': { 'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string' }, 'MySQLSettings': { 'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string' }, 'OracleSettings': { 'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string' }, 'SybaseSettings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string' }, 'MicrosoftSQLServerSettings': { 'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False }, 'IBMDb2Settings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string' } }, ] }
Response Structure
(dict) --
Marker (string) --
An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords .
Endpoints (list) --
Endpoint description.
(dict) --
Describes an endpoint of a database instance in response to operations such as the following:
CreateEndpoint
DescribeEndpoint
DescribeEndpointTypes
ModifyEndpoint
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , and "neptune" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with a cross-account.
DynamoDbSettings (dict) --
The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/. If this parameter isn't specified, then the path used is schema_name/table_name/.
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this size, the column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB). This size is used for the .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for the .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true , this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false . For more information about date-based folder partitioning, see Using date-based folder partitioning.
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD . Use this parameter when DatePartitionEnabled is set to true .
DatePartitionDelimiter (string) --
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH . Use this parameter when DatePartitionEnabled is set to true .
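To show how the S3 settings above combine in practice, here is a hedged sketch of an S3 target endpoint that annotates full-load INSERTs, restricts CDC output to INSERTs, and partitions folders by commit date. The role ARN and bucket name are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='s3-target-example',  # hypothetical identifier
    EndpointType='target',
    EngineName='s3',
    S3Settings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-s3-role',  # hypothetical role
        'BucketName': 'example-dms-bucket',  # hypothetical bucket
        'DataFormat': 'csv',
        'IncludeOpForFullLoad': True,        # annotate full-load rows with I
        'CdcInsertsOnly': True,              # CDC output limited to INSERT operations
        'DatePartitionEnabled': True,        # folder per transaction commit date
        'DatePartitionSequence': 'YYYYMMDD',
        'DatePartitionDelimiter': 'DASH'     # separator between the date components
    }
)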
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files, or leave it at NONE (the default) to keep the files uncompressed.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no" , user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
In MongoDB version 2.x, the default is "mongodb_cr" . In MongoDB version 3.x or later, the default is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
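A hedged sketch of a MongoDB source endpoint in table mode, tying together the AuthType, NestingLevel, and DocsToInvestigate fields above; the host, database, and credentials are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='mongodb-source-example',  # hypothetical identifier
    EndpointType='source',
    EngineName='mongodb',
    MongoDbSettings={
        'ServerName': 'mongo.example.com',  # hypothetical host
        'Port': 27017,
        'DatabaseName': 'appdb',            # hypothetical database
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',     # the MongoDB 3.x default
        'AuthSource': 'admin',
        'Username': 'dms_user',
        'Password': 'example-password',
        'NestingLevel': 'one',              # table mode
        'DocsToInvestigate': '1000'         # documents sampled to infer table structure
    }
)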
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
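As a usage illustration of the KafkaSettings fields above, a hedged sketch of a Kafka target endpoint might look like the following; the broker address and topic are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='kafka-target-example',  # hypothetical identifier
    EndpointType='target',
    EngineName='kafka',
    KafkaSettings={
        'Broker': 'broker.example.com:9092',    # hypothetical broker-hostname-or-ip:port
        'Topic': 'dms-example-topic',           # hypothetical topic
        'MessageFormat': 'json',
        'IncludePartitionValue': True,
        'PartitionIncludeSchemaTable': True,    # spread keys across partitions
        'MessageMaxBytes': 1000000              # the default record size cap
    }
)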
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter takes effect only after 1,000 records have been transferred. Elasticsearch also monitors errors during the last 10 minutes of an observation window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
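The Neptune fields above describe a two-stage load: DMS stages graph data as .csv files in S3, then bulk-loads them into the cluster. A hedged sketch of such a target endpoint follows, with hypothetical role, bucket, and cluster names.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='neptune-target-example',  # hypothetical identifier
    EndpointType='target',
    EngineName='neptune',
    ServerName='neptune.cluster-example.us-east-1.neptune.amazonaws.com',  # hypothetical cluster
    Port=8182,
    NeptuneSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-neptune-role',  # hypothetical role
        'S3BucketName': 'example-dms-staging',   # hypothetical staging bucket
        'S3BucketFolder': 'neptune-graph-data',
        'MaxFileSize': 1048576,                  # KB staged per .csv before each bulk load
        'MaxRetryCount': 5,                      # the default number of bulk-load retries
        'IamAuthEnabled': False
    }
)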
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide .
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true , Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW . If you set CompUpdate to false , automatic compression is disabled and existing column encodings aren't changed. The default is true .
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview . This parameter accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to auto . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (a buffer size of 1,000 KB).
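A hedged sketch of a Redshift target endpoint that exercises the CaseSensitiveNames, CompUpdate, and ExplicitIds settings described above; the cluster, role, bucket, and credential values are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='redshift-target-example',  # hypothetical identifier
    EndpointType='target',
    EngineName='redshift',
    RedshiftSettings={
        'ServerName': 'example-cluster.abc123.us-east-1.redshift.amazonaws.com',  # hypothetical
        'Port': 5439,
        'DatabaseName': 'dev',
        'Username': 'dms_user',
        'Password': 'example-password',
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-redshift-role',  # hypothetical
        'BucketName': 'example-dms-staging',  # intermediate .csv staging bucket
        'CaseSensitiveNames': True,           # preserve case-sensitive schema names
        'CompUpdate': False,                  # keep existing column encodings
        'ExplicitIds': True                   # load IDENTITY columns from the source files
    }
)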
PostgreSQLSettings (dict) --
The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute causes AWS DMS to bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to N , you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
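A hedged sketch of a PostgreSQL source endpoint that reuses a pre-created logical replication slot, per the SlotName description above; the host, slot name, and credentials are hypothetical placeholders.

import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='postgres-source-example',  # hypothetical identifier
    EndpointType='source',
    EngineName='postgres',
    PostgreSQLSettings={
        'ServerName': 'pg.example.com',      # hypothetical host
        'Port': 5432,
        'DatabaseName': 'appdb',
        'Username': 'dms_user',
        'Password': 'example-password',
        'SlotName': 'dms_slot',              # previously created logical replication slot
        'FailTasksOnLobTruncation': True     # fail instead of silently truncating LOBs
    }
)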
MySQLSettings (dict) --
The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless if the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database by specifying how many threads to use. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
OracleSettings (dict) --
The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover, when AWS DMS needs to know which destination to read archived redo logs from, because the previous primary instance becomes a standby instance after the switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database.
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. This SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
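To make the Binary Reader settings above concrete, here is a minimal sketch of a create_endpoint call for an Oracle source that reads redo logs through ASM. All identifiers, host names, key names, and credentials are hypothetical, and the Password value packs the Oracle, ASM, and TDE passwords into the comma-separated value described for AsmPassword and SecurityDbEncryption.
import boto3

dms = boto3.client('dms')

response = dms.create_endpoint(
    EndpointIdentifier='oracle-source-1',            # hypothetical name
    EndpointType='source',
    EngineName='oracle',
    Username='dms_user',
    # Oracle password, asm_user_password, and TDE_Password packed into
    # one comma-separated value, as described above
    Password='oracle_pwd,asm_pwd,tde_pwd',
    ServerName='oracle.example.com',
    Port=1521,
    DatabaseName='ORCL',
    OracleSettings={
        'AsmServer': 'asm.example.com:1521/+ASM',    # hypothetical ASM address
        'AsmUser': 'asm_user',
        'UseAlternateFolderForOnline': True,         # Binary Reader on RDS
        'ParallelAsmReadThreads': 4,                 # 2 (default) to 8 (max)
        'ReadAheadBlocks': 100000,                   # 1000 (default) to 200,000 (max)
        'SecurityDbEncryptionName': 'TDE_KEY_NAME',  # hypothetical TDE key name
    },
)
print(response['Endpoint']['EndpointArn'])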
SybaseSettings (dict) --
The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
MicrosoftSQLServerSettings (dict) --
The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
IBMDb2Settings (dict) --
The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.
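As a sketch of the Db2 settings just described (server, credentials, and the LSN value are hypothetical), a source endpoint that starts ongoing replication from a specific log sequence number might look like this:
import boto3

dms = boto3.client('dms')

dms.create_endpoint(
    EndpointIdentifier='db2-source-1',     # hypothetical name
    EndpointType='source',
    EngineName='db2',
    IBMDb2Settings={
        'ServerName': 'db2.example.com',
        'Port': 50000,
        'DatabaseName': 'SAMPLE',
        'Username': 'db2inst1',
        'Password': 'secret',
        'SetDataCaptureChanges': True,     # ongoing replication (CDC); the default
        'CurrentLsn': '0100000000000022CC000000000004FB00',  # hypothetical LSN
        'MaxKBytesPerRead': 64,            # the default read size
    },
)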
{'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}}Response
{'Endpoint': {'RedshiftSettings': {'CaseSensitiveNames': 'boolean', 'CompUpdate': 'boolean', 'ExplicitIds': 'boolean'}}}
Modifies the specified endpoint.
See also: AWS API Documentation
Request Syntax
client.modify_endpoint( EndpointArn='string', EndpointIdentifier='string', EndpointType='source'|'target', EngineName='string', Username='string', Password='string', ServerName='string', Port=123, DatabaseName='string', ExtraConnectionAttributes='string', CertificateArn='string', SslMode='none'|'require'|'verify-ca'|'verify-full', ServiceAccessRoleArn='string', ExternalTableDefinition='string', DynamoDbSettings={ 'ServiceAccessRoleArn': 'string' }, S3Settings={ 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE' }, DmsTransferSettings={ 'ServiceAccessRoleArn': 'string', 'BucketName': 'string' }, MongoDbSettings={ 'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string' }, KinesisSettings={ 'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False }, KafkaSettings={ 'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False }, ElasticsearchSettings={ 'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123 }, NeptuneSettings={ 'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False }, RedshiftSettings={ 'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 'ReplaceChars': 'string', 'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123 }, PostgreSQLSettings={ 'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string' }, MySQLSettings={ 'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string' }, OracleSettings={ 'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string' }, SybaseSettings={ 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string' }, MicrosoftSQLServerSettings={ 'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False }, IBMDb2Settings={ 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string' } )
string
[REQUIRED]
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
string
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
string
The type of endpoint. Valid values are source and target .
string
The type of engine for the endpoint. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , and "neptune" .
string
The user name to be used to log in to the endpoint database.
string
The password to be used to log in to the endpoint database.
string
The name of the server where the endpoint database resides.
integer
The port used by the endpoint database.
string
The name of the endpoint database.
string
Additional attributes associated with the connection. To reset this parameter, pass the empty string ("") as an argument.
string
The Amazon Resource Name (ARN) of the certificate used for SSL connection.
string
The SSL mode used to connect to the endpoint. The default value is none .
string
The Amazon Resource Name (ARN) for the service access role you want to use to modify the endpoint.
string
The external table definition.
dict
Settings in JSON format for the target Amazon DynamoDB endpoint. For information about other available settings, see Using Object Mapping to Migrate Data to DynamoDB in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) used by the service access IAM role.
dict
Settings in JSON format for the target Amazon S3 endpoint. For more information about the available settings, see Extra Connection Attributes When Using Amazon S3 as a Target for AWS DMS in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path bucketFolder/schema_name/table_name/ . If this parameter isn't specified, then the path used is schema_name/table_name/ .
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter. Set this parameter to GZIP to compress the target files, or to NONE (the default) to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=value,BucketFolder=value,BucketName=value,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=value
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that, when nonblank, causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide.
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true , this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false . For more information about date-based folder partitioning, see Using date-based folder partitioning .
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD . Use this parameter when DatePartitionEnabled is set to true .
DatePartitionDelimiter (string) --
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH . Use this parameter when DatePartitionEnabled is set to true .
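For illustration, here is a hedged sketch of a modify_endpoint call that switches an existing S3 target to Parquet output with date-based folder partitioning. The endpoint ARN, role ARN, and bucket name are hypothetical placeholders.
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    S3Settings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-s3-role',
        'BucketName': 'my-dms-bucket',
        'DataFormat': 'parquet',               # columnar output
        'ParquetVersion': 'parquet-2-0',
        'DatePartitionEnabled': True,          # partition folders by commit date
        'DatePartitionSequence': 'YYYYMMDD',   # the default sequence
        'DatePartitionDelimiter': 'SLASH',     # the default delimiter
    },
)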
dict
The settings in JSON format for the DMS transfer type of source endpoint.
Attributes include the following:
serviceAccessRoleArn - The AWS Identity and Access Management (IAM) role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
compressionType - An optional parameter to use GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
dict
Settings in JSON format for the source MongoDB endpoint. For more information about the available settings, see the configuration properties section in Using MongoDB as a Source for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no" , user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
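A brief sketch of these MongoDB settings in table mode; the host, database, and credentials are hypothetical. Note that DocsToInvestigate is passed as a string, per the parameter docs above.
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    MongoDbSettings={
        'ServerName': 'mongo.example.com',
        'Port': 27017,
        'DatabaseName': 'appdb',
        'AuthType': 'password',
        'AuthMechanism': 'scram_sha_1',   # MongoDB 3.x or later
        'AuthSource': 'admin',            # the default auth database
        'Username': 'dms',
        'Password': 'secret',
        'NestingLevel': 'one',            # table mode
        'DocsToInvestigate': '1000',      # string value; the default is 1000
    },
)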
dict
Settings in JSON format for the target endpoint for Amazon Kinesis Data Streams. For more information about the available settings, see Using Amazon Kinesis Data Streams as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
dict
Settings in JSON format for the target Apache Kafka endpoint. For more information about the available settings, see Using Apache Kafka as a Target for AWS Database Migration Service in the AWS Database Migration Service User Guide.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form broker-hostname-or-ip:port . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
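Pulling the Kafka settings together, here is a minimal sketch (broker address and endpoint ARN are hypothetical) that writes unformatted JSON to a named topic and includes transaction and partition metadata:
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    KafkaSettings={
        'Broker': 'ec2-12-345-678-901.compute-1.amazonaws.com:2345',
        'Topic': 'dms-changes',                # otherwise kafka-default-topic is used
        'MessageFormat': 'json-unformatted',   # single line with no tab
        'IncludeTransactionDetails': True,
        'IncludePartitionValue': True,
        'MessageMaxBytes': 1000000,            # the default maximum
    },
)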
dict
Settings in JSON format for the target Elasticsearch endpoint. For more information about the available settings, see Extra Connection Attributes When Using Elasticsearch as a Target for AWS DMS in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) -- [REQUIRED]
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) -- [REQUIRED]
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
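A short sketch of the Elasticsearch settings (the role ARN and endpoint URI are hypothetical); both ServiceAccessRoleArn and EndpointUri are required here.
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    ElasticsearchSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-es-role',
        'EndpointUri': 'search-mydomain.us-east-1.es.amazonaws.com',  # HTTPS assumed when no protocol is given
        'FullLoadErrorPercentage': 10,   # stop the full load past 10% failures
        'ErrorRetryDuration': 300,       # retry failed API requests for up to 300 s
    },
)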
dict
Settings in JSON format for the target Amazon Neptune endpoint. For more information about the available settings, see Specifying Endpoint Settings for Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) -- [REQUIRED]
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) -- [REQUIRED]
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
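As a sketch of the Neptune settings (the role, bucket, and folder names are hypothetical), note that S3BucketName and S3BucketFolder are required for this target:
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    NeptuneSettings={
        'ServiceAccessRoleArn': 'arn:aws:iam::123456789012:role/dms-neptune-role',
        'S3BucketName': 'dms-neptune-staging',
        'S3BucketFolder': 'graph-data',
        'MaxFileSize': 1048576,     # KB of staged .csv data per bulk load (the default)
        'MaxRetryCount': 5,         # bulk-load retries before erroring (the default)
        'IamAuthEnabled': True,     # requires the matching IAM policy on the role
    },
)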
dict
Provides information that defines an Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide .
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true , Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW . If you set CompUpdate to false , automatic compression is disabled and existing column encodings aren't changed. The default is true .
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. This parameter accepts a value from 1 through 64. It defaults to 10. For more information, see Multipart upload overview .
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (a buffer size of 1,000 KB).
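Since this release adds CaseSensitiveNames, CompUpdate, and ExplicitIds to RedshiftSettings, here is a hedged sketch exercising them on an existing Redshift target. The cluster name, endpoint ARN, and credentials are hypothetical.
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    RedshiftSettings={
        'ServerName': 'my-cluster.abc123.us-east-1.redshift.amazonaws.com',
        'Port': 5439,                  # the default Redshift port
        'DatabaseName': 'dev',
        'Username': 'awsuser',
        'Password': 'secret',
        'CaseSensitiveNames': True,    # cluster uses case-sensitive schema names
        'CompUpdate': False,           # keep existing column encodings
        'ExplicitIds': True,           # load IDENTITY values from the source files
    },
)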
dict
Settings in JSON format for the source and target PostgreSQL endpoint. For information about other available settings, see Extra connection attributes when using PostgreSQL as a source for AWS DMS and Extra connection attributes when using PostgreSQL as a target for AWS DMS in the AWS Database Migration Service User Guide.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to N , you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
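A sketch of pointing a PostgreSQL source at a pre-created replication slot so a CDC task can resume from a native start point; host, database, credentials, and slot name are hypothetical.
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    PostgreSQLSettings={
        'ServerName': 'pg.example.com',
        'Port': 5432,
        'DatabaseName': 'appdb',
        'Username': 'dms',
        'Password': 'secret',
        'SlotName': 'dms_slot',             # previously created logical replication slot
        'FailTasksOnLobTruncation': True,   # fail rather than truncate LOB data
    },
)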
dict
Settings in JSON format for the source and target MySQL endpoint. For information about other available settings, see Extra connection attributes when using MySQL as a source for AWS DMS and Extra connection attributes when using a MySQL-compatible database as a target for AWS DMS in the AWS Database Migration Service User Guide.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database by specifying how many threads to use. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
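A sketch of a MySQL-compatible target that keeps each source database separate (server and credentials are hypothetical):
import boto3

dms = boto3.client('dms')

dms.modify_endpoint(
    EndpointArn='arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE',
    MySQLSettings={
        'ServerName': 'mysql.example.com',
        'Port': 3306,
        'Username': 'dms',
        'Password': 'secret',
        'TargetDbType': 'multiple-databases',  # keep source databases separate
        'ParallelLoadThreads': 2,              # one connection per thread
        'MaxFileSize': 512,                    # KB per intermediate .csv file
    },
)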
dict
Settings in JSON format for the source and target Oracle endpoint. For information about other available settings, see Extra connection attributes when using Oracle as a source for AWS DMS and Extra connection attributes when using Oracle as a target for AWS DMS in the AWS Database Migration Service User Guide.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archived redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the asm_user_password value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute also only applies when you use the AWS DMS parallel load feature. Note that the target table cannot have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the TDE_Password part of the comma-separated value you set to the Password request parameter when you create the endpoint. This SecurityDbEncryption setting is related to the SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
dict
Settings in JSON format for the source and target SAP ASE endpoint. For information about other available settings, see Extra connection attributes when using SAP ASE as a source for AWS DMS and Extra connection attributes when using SAP ASE as a target for AWS DMS in the AWS Database Migration Service User Guide.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
dict
Settings in JSON format for the source and target Microsoft SQL Server endpoint. For information about other available settings, see Extra connection attributes when using SQL Server as a source for AWS DMS and Extra connection attributes when using SQL Server as a target for AWS DMS in the AWS Database Migration Service User Guide.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
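A hedged boto3 sketch of a SQL Server source endpoint using the ReadBackupOnly and SafeguardPolicy attributes may help; the SafeguardPolicy string is one of the values listed in this API's response syntax, and all connection details are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical SQL Server source. SafeguardPolicy accepts one of:
# 'rely-on-sql-server-replication-agent' | 'exclusive-automatic-truncation'
# | 'shared-automatic-truncation' (per this API's syntax).
response = dms.create_endpoint(
    EndpointIdentifier="sqlserver-source",
    EndpointType="source",
    EngineName="sqlserver",
    MicrosoftSQLServerSettings={
        "ServerName": "mssql.example.com",
        "Port": 1433,
        "DatabaseName": "SalesDb",
        "Username": "dms_user",
        "Password": "example-password",
        "ReadBackupOnly": True,  # read changes from transaction log backups only
        "SafeguardPolicy": "rely-on-sql-server-replication-agent",
    },
)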
dict
Settings in JSON format for the source IBM Db2 LUW endpoint. For information about other available settings, see Extra connection attributes when using Db2 LUW as a source for AWS DMS in the AWS Database Migration Service User Guide.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.
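The following is a minimal, hypothetical sketch of a Db2 LUW source endpoint that starts CDC from a specific log sequence number; the LSN and all connection values are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical Db2 LUW source starting ongoing replication from a given LSN.
response = dms.create_endpoint(
    EndpointIdentifier="db2-source",
    EndpointType="source",
    EngineName="db2",
    IBMDb2Settings={
        "ServerName": "db2.example.com",
        "Port": 50000,
        "DatabaseName": "SAMPLE",
        "Username": "dms_user",
        "Password": "example-password",
        "SetDataCaptureChanges": True,  # enable ongoing replication (CDC)
        "CurrentLsn": "0100000000000022CC000000000004FB00",  # placeholder LSN
        "MaxKBytesPerRead": 64,
    },
)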
dict
Response Syntax
{ 'Endpoint': { 'EndpointIdentifier': 'string', 'EndpointType': 'source'|'target', 'EngineName': 'string', 'EngineDisplayName': 'string', 'Username': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'ExtraConnectionAttributes': 'string', 'Status': 'string', 'KmsKeyId': 'string', 'EndpointArn': 'string', 'CertificateArn': 'string', 'SslMode': 'none'|'require'|'verify-ca'|'verify-full', 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'ExternalId': 'string', 'DynamoDbSettings': { 'ServiceAccessRoleArn': 'string' }, 'S3Settings': { 'ServiceAccessRoleArn': 'string', 'ExternalTableDefinition': 'string', 'CsvRowDelimiter': 'string', 'CsvDelimiter': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CompressionType': 'none'|'gzip', 'EncryptionMode': 'sse-s3'|'sse-kms', 'ServerSideEncryptionKmsKeyId': 'string', 'DataFormat': 'csv'|'parquet', 'EncodingType': 'plain'|'plain-dictionary'|'rle-dictionary', 'DictPageSizeLimit': 123, 'RowGroupLength': 123, 'DataPageSize': 123, 'ParquetVersion': 'parquet-1-0'|'parquet-2-0', 'EnableStatistics': True|False, 'IncludeOpForFullLoad': True|False, 'CdcInsertsOnly': True|False, 'TimestampColumnName': 'string', 'ParquetTimestampInMillisecond': True|False, 'CdcInsertsAndUpdates': True|False, 'DatePartitionEnabled': True|False, 'DatePartitionSequence': 'YYYYMMDD'|'YYYYMMDDHH'|'YYYYMM'|'MMYYYYDD'|'DDMMYYYY', 'DatePartitionDelimiter': 'SLASH'|'UNDERSCORE'|'DASH'|'NONE' }, 'DmsTransferSettings': { 'ServiceAccessRoleArn': 'string', 'BucketName': 'string' }, 'MongoDbSettings': { 'Username': 'string', 'Password': 'string', 'ServerName': 'string', 'Port': 123, 'DatabaseName': 'string', 'AuthType': 'no'|'password', 'AuthMechanism': 'default'|'mongodb_cr'|'scram_sha_1', 'NestingLevel': 'none'|'one', 'ExtractDocId': 'string', 'DocsToInvestigate': 'string', 'AuthSource': 'string', 'KmsKeyId': 'string' }, 'KinesisSettings': { 'StreamArn': 'string', 'MessageFormat': 'json'|'json-unformatted', 'ServiceAccessRoleArn': 'string', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'IncludeNullAndEmpty': True|False }, 'KafkaSettings': { 'Broker': 'string', 'Topic': 'string', 'MessageFormat': 'json'|'json-unformatted', 'IncludeTransactionDetails': True|False, 'IncludePartitionValue': True|False, 'PartitionIncludeSchemaTable': True|False, 'IncludeTableAlterOperations': True|False, 'IncludeControlDetails': True|False, 'MessageMaxBytes': 123, 'IncludeNullAndEmpty': True|False }, 'ElasticsearchSettings': { 'ServiceAccessRoleArn': 'string', 'EndpointUri': 'string', 'FullLoadErrorPercentage': 123, 'ErrorRetryDuration': 123 }, 'NeptuneSettings': { 'ServiceAccessRoleArn': 'string', 'S3BucketName': 'string', 'S3BucketFolder': 'string', 'ErrorRetryDuration': 123, 'MaxFileSize': 123, 'MaxRetryCount': 123, 'IamAuthEnabled': True|False }, 'RedshiftSettings': { 'AcceptAnyDate': True|False, 'AfterConnectScript': 'string', 'BucketFolder': 'string', 'BucketName': 'string', 'CaseSensitiveNames': True|False, 'CompUpdate': True|False, 'ConnectionTimeout': 123, 'DatabaseName': 'string', 'DateFormat': 'string', 'EmptyAsNull': True|False, 'EncryptionMode': 'sse-s3'|'sse-kms', 'ExplicitIds': True|False, 'FileTransferUploadStreams': 123, 'LoadTimeout': 123, 'MaxFileSize': 123, 'Password': 'string', 'Port': 123, 'RemoveQuotes': True|False, 'ReplaceInvalidChars': 'string', 'ReplaceChars': 'string', 
'ServerName': 'string', 'ServiceAccessRoleArn': 'string', 'ServerSideEncryptionKmsKeyId': 'string', 'TimeFormat': 'string', 'TrimBlanks': True|False, 'TruncateColumns': True|False, 'Username': 'string', 'WriteBufferSize': 123 }, 'PostgreSQLSettings': { 'AfterConnectScript': 'string', 'CaptureDdls': True|False, 'MaxFileSize': 123, 'DatabaseName': 'string', 'DdlArtifactsSchema': 'string', 'ExecuteTimeout': 123, 'FailTasksOnLobTruncation': True|False, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string', 'SlotName': 'string' }, 'MySQLSettings': { 'AfterConnectScript': 'string', 'DatabaseName': 'string', 'EventsPollInterval': 123, 'TargetDbType': 'specific-database'|'multiple-databases', 'MaxFileSize': 123, 'ParallelLoadThreads': 123, 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'ServerTimezone': 'string', 'Username': 'string' }, 'OracleSettings': { 'AddSupplementalLogging': True|False, 'ArchivedLogDestId': 123, 'AdditionalArchivedLogDestId': 123, 'AllowSelectNestedTables': True|False, 'ParallelAsmReadThreads': 123, 'ReadAheadBlocks': 123, 'AccessAlternateDirectly': True|False, 'UseAlternateFolderForOnline': True|False, 'OraclePathPrefix': 'string', 'UsePathPrefix': 'string', 'ReplacePathPrefix': True|False, 'EnableHomogenousTablespace': True|False, 'DirectPathNoLog': True|False, 'ArchivedLogsOnly': True|False, 'AsmPassword': 'string', 'AsmServer': 'string', 'AsmUser': 'string', 'CharLengthSemantics': 'default'|'char'|'byte', 'DatabaseName': 'string', 'DirectPathParallelLoad': True|False, 'FailTasksOnLobTruncation': True|False, 'NumberDatatypeScale': 123, 'Password': 'string', 'Port': 123, 'ReadTableSpaceName': True|False, 'RetryInterval': 123, 'SecurityDbEncryption': 'string', 'SecurityDbEncryptionName': 'string', 'ServerName': 'string', 'Username': 'string' }, 'SybaseSettings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'Username': 'string' }, 'MicrosoftSQLServerSettings': { 'Port': 123, 'BcpPacketSize': 123, 'DatabaseName': 'string', 'ControlTablesFileGroup': 'string', 'Password': 'string', 'ReadBackupOnly': True|False, 'SafeguardPolicy': 'rely-on-sql-server-replication-agent'|'exclusive-automatic-truncation'|'shared-automatic-truncation', 'ServerName': 'string', 'Username': 'string', 'UseBcpFullLoad': True|False }, 'IBMDb2Settings': { 'DatabaseName': 'string', 'Password': 'string', 'Port': 123, 'ServerName': 'string', 'SetDataCaptureChanges': True|False, 'CurrentLsn': 'string', 'MaxKBytesPerRead': 123, 'Username': 'string' } } }
Response Structure
(dict) --
Endpoint (dict) --
The endpoint that was created.
EndpointIdentifier (string) --
The database endpoint identifier. Identifiers must begin with a letter and must contain only ASCII letters, digits, and hyphens. They can't end with a hyphen or contain two consecutive hyphens.
EndpointType (string) --
The type of endpoint. Valid values are source and target .
EngineName (string) --
The database engine name. Valid values, depending on the EndpointType, include "mysql" , "oracle" , "postgres" , "mariadb" , "aurora" , "aurora-postgresql" , "redshift" , "s3" , "db2" , "azuredb" , "sybase" , "dynamodb" , "mongodb" , "kinesis" , "kafka" , "elasticsearch" , "documentdb" , "sqlserver" , and "neptune" .
EngineDisplayName (string) --
The expanded name for the engine name. For example, if the EngineName parameter is "aurora," this value would be "Amazon Aurora MySQL."
Username (string) --
The user name used to connect to the endpoint.
ServerName (string) --
The name of the server at the endpoint.
Port (integer) --
The port value used to access the endpoint.
DatabaseName (string) --
The name of the database at the endpoint.
ExtraConnectionAttributes (string) --
Additional connection attributes used to connect to the endpoint.
Status (string) --
The status of the endpoint.
KmsKeyId (string) --
An AWS KMS key identifier that is used to encrypt the connection parameters for the endpoint.
If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
EndpointArn (string) --
The Amazon Resource Name (ARN) string that uniquely identifies the endpoint.
CertificateArn (string) --
The Amazon Resource Name (ARN) used for SSL connection to the endpoint.
SslMode (string) --
The SSL mode used to connect to the endpoint. The default value is none .
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
ExternalTableDefinition (string) --
The external table definition.
ExternalId (string) --
Value returned by a call to CreateEndpoint that can be used for cross-account validation. Use it on a subsequent call to CreateEndpoint to create the endpoint with cross-account access.
DynamoDbSettings (dict) --
The settings for the DynamoDB target endpoint. For more information, see the DynamoDBSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role.
S3Settings (dict) --
The settings for the S3 target endpoint. For more information, see the S3Settings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service access IAM role. It is a required parameter that enables DMS to write and read objects from an S3 bucket.
ExternalTableDefinition (string) --
Specifies how tables are defined in the S3 source files only.
CsvRowDelimiter (string) --
The delimiter used to separate rows in the .csv file for both source and target. The default is a newline (\n ).
CsvDelimiter (string) --
The delimiter used to separate columns in the .csv file for both source and target. The default is a comma.
BucketFolder (string) --
An optional parameter to set a folder name in the S3 bucket. If provided, tables are created in the path `` bucketFolder /schema_name /table_name /`` . If this parameter isn't specified, then the path used is `` schema_name /table_name /`` .
BucketName (string) --
The name of the S3 bucket.
CompressionType (string) --
An optional parameter to use GZIP to compress the target files. Set to GZIP to compress the target files. Either set this parameter to NONE (the default) or don't use it to leave the files uncompressed. This parameter applies to both .csv and .parquet file formats.
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , you need an AWS Identity and Access Management (IAM) role with permission to allow "arn:aws:s3:::dms-*" to use the following actions:
s3:CreateBucket
s3:ListBucket
s3:DeleteBucket
s3:GetBucketLocation
s3:GetObject
s3:PutObject
s3:DeleteObject
s3:GetObjectVersion
s3:GetBucketPolicy
s3:PutBucketPolicy
s3:DeleteBucketPolicy
ServerSideEncryptionKmsKeyId (string) --
If you are using SSE_KMS for the EncryptionMode , provide the AWS KMS key ID. The key that you use needs an attached policy that enables AWS Identity and Access Management (IAM) user permissions and allows use of the key.
Here is a CLI example: ``aws dms create-endpoint --endpoint-identifier value --endpoint-type target --engine-name s3 --s3-settings ServiceAccessRoleArn=*value* ,BucketFolder=*value* ,BucketName=*value* ,EncryptionMode=SSE_KMS,ServerSideEncryptionKmsKeyId=*value* ``
DataFormat (string) --
The format of the data that you want to use for output. You can choose one of the following:
csv : This is a row-based file format with comma-separated values (.csv).
parquet : Apache Parquet (.parquet) is a columnar storage file format that features efficient compression and provides faster query response.
EncodingType (string) --
The type of encoding you are using:
RLE_DICTIONARY uses a combination of bit-packing and run-length encoding to store repeated values more efficiently. This is the default.
PLAIN doesn't use encoding at all. Values are stored as they are.
PLAIN_DICTIONARY builds a dictionary of the values encountered in a given column. The dictionary is stored in a dictionary page for each column chunk.
DictPageSizeLimit (integer) --
The maximum size of an encoded dictionary page of a column. If the dictionary page exceeds this, this column is stored using an encoding type of PLAIN . This parameter defaults to 1024 * 1024 bytes (1 MiB), the maximum size of a dictionary page before it reverts to PLAIN encoding. This size is used for .parquet file format only.
RowGroupLength (integer) --
The number of rows in a row group. A smaller row group size provides faster reads. But as the number of row groups grows, writes become slower. This parameter defaults to 10,000 rows. This number is used for .parquet file format only.
If you choose a value larger than the maximum, RowGroupLength is set to the max row group length in bytes (64 * 1024 * 1024).
DataPageSize (integer) --
The size of one data page in bytes. This parameter defaults to 1024 * 1024 bytes (1 MiB). This number is used for .parquet file format only.
ParquetVersion (string) --
The version of the Apache Parquet format that you want to use: parquet_1_0 (the default) or parquet_2_0 .
EnableStatistics (boolean) --
A value that enables statistics for Parquet pages and row groups. Choose true to enable statistics, false to disable. Statistics include NULL , DISTINCT , MAX , and MIN values. This parameter defaults to true . This value is used for .parquet file format only.
IncludeOpForFullLoad (boolean) --
A value that enables a full load to write INSERT operations to the comma-separated value (.csv) output files only to indicate how the rows were added to the source database.
Note
AWS DMS supports the IncludeOpForFullLoad parameter in versions 3.1.4 and later.
For full load, records can only be inserted. By default (the false setting), no information is recorded in these output files for a full load to indicate that the rows were inserted at the source database. If IncludeOpForFullLoad is set to true or y , the INSERT is recorded as an I annotation in the first field of the .csv file. This allows the format of your target records from a full load to be consistent with the target records from a CDC load.
Note
This setting works together with the CdcInsertsOnly and the CdcInsertsAndUpdates parameters for output to .csv files only. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide .
CdcInsertsOnly (boolean) --
A value that enables a change data capture (CDC) load to write only INSERT operations to .csv or columnar storage (.parquet) output files. By default (the false setting), the first field in a .csv or .parquet record contains the letter I (INSERT), U (UPDATE), or D (DELETE). These values indicate whether the row was inserted, updated, or deleted at the source database for a CDC load to the target.
If CdcInsertsOnly is set to true or y , only INSERTs from the source database are migrated to the .csv or .parquet file. For .csv format only, how these INSERTs are recorded depends on the value of IncludeOpForFullLoad . If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to I to indicate the INSERT operation at the source. If IncludeOpForFullLoad is set to false , every CDC record is written without a first field to indicate the INSERT operation at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide .
Note
AWS DMS supports the interaction described preceding between the CdcInsertsOnly and IncludeOpForFullLoad parameters in versions 3.1.4 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
TimestampColumnName (string) --
A value that when nonblank causes AWS DMS to add a column with timestamp information to the endpoint data for an Amazon S3 target.
Note
AWS DMS supports the TimestampColumnName parameter in versions 3.1.4 and later.
DMS includes an additional STRING column in the .csv or .parquet object files of your migrated data when you set TimestampColumnName to a nonblank value.
For a full load, each row of this timestamp column contains a timestamp for when the data was transferred from the source to the target by DMS.
For a change data capture (CDC) load, each row of the timestamp column contains the timestamp for the commit of that row in the source database.
The string format for this timestamp column value is yyyy-MM-dd HH:mm:ss.SSSSSS . By default, the precision of this value is in microseconds. For a CDC load, the rounding of the precision depends on the commit timestamp supported by DMS for the source database.
When the AddColumnName parameter is set to true , DMS also includes a name for the timestamp column that you set with TimestampColumnName .
ParquetTimestampInMillisecond (boolean) --
A value that specifies the precision of any TIMESTAMP column values that are written to an Amazon S3 object file in .parquet format.
Note
AWS DMS supports the ParquetTimestampInMillisecond parameter in versions 3.1.4 and later.
When ParquetTimestampInMillisecond is set to true or y , AWS DMS writes all TIMESTAMP columns in a .parquet formatted file with millisecond precision. Otherwise, DMS writes them with microsecond precision.
Currently, Amazon Athena and AWS Glue can handle only millisecond precision for TIMESTAMP values. Set this parameter to true for S3 endpoint object files that are .parquet formatted only if you plan to query or process the data with Athena or AWS Glue.
Note
AWS DMS writes any TIMESTAMP column values written to an S3 file in .csv format with microsecond precision.
Setting ParquetTimestampInMillisecond has no effect on the string format of the timestamp column value that is inserted by setting the TimestampColumnName parameter.
CdcInsertsAndUpdates (boolean) --
A value that enables a change data capture (CDC) load to write INSERT and UPDATE operations to .csv or .parquet (columnar storage) output files. The default setting is false , but when CdcInsertsAndUpdates is set to true or y , only INSERTs and UPDATEs from the source database are migrated to the .csv or .parquet file.
For .csv file format only, how these INSERTs and UPDATEs are recorded depends on the value of the IncludeOpForFullLoad parameter. If IncludeOpForFullLoad is set to true , the first field of every CDC record is set to either I or U to indicate INSERT and UPDATE operations at the source. But if IncludeOpForFullLoad is set to false , CDC records are written without an indication of INSERT or UPDATE operations at the source. For more information about how these settings work together, see Indicating Source DB Operations in Migrated S3 Data in the AWS Database Migration Service User Guide .
Note
AWS DMS supports the use of the CdcInsertsAndUpdates parameter in versions 3.3.1 and later.
CdcInsertsOnly and CdcInsertsAndUpdates can't both be set to true for the same endpoint. Set either CdcInsertsOnly or CdcInsertsAndUpdates to true for the same endpoint, but not both.
DatePartitionEnabled (boolean) --
When set to true , this parameter partitions S3 bucket folders based on transaction commit dates. The default value is false . For more information about date-based folder partitioning, see Using date-based folder partitioning .
DatePartitionSequence (string) --
Identifies the sequence of the date format to use during folder partitioning. The default value is YYYYMMDD . Use this parameter when DatePartitionEnabled is set to true .
DatePartitionDelimiter (string) --
Specifies a date separating delimiter to use during folder partitioning. The default value is SLASH . Use this parameter when DatePartitionEnabled is set to true .
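Because IncludeOpForFullLoad, the CDC settings, TimestampColumnName, and the date-partitioning parameters interact, a combined sketch may help. The following hypothetical boto3 call configures an S3 target; the role ARN, bucket, and folder names are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical S3 target writing Parquet with CDC annotations and
# date-based folder partitioning.
response = dms.create_endpoint(
    EndpointIdentifier="s3-parquet-target",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-s3-role",
        "BucketName": "example-dms-bucket",
        "BucketFolder": "migrated",
        "DataFormat": "parquet",
        "ParquetVersion": "parquet-2-0",
        "EnableStatistics": True,
        "IncludeOpForFullLoad": True,     # annotate full-load rows with I
        "CdcInsertsAndUpdates": True,     # migrate only INSERTs and UPDATEs
        "TimestampColumnName": "dms_ts",  # adds a STRING timestamp column
        "DatePartitionEnabled": True,
        "DatePartitionSequence": "YYYYMMDD",
        "DatePartitionDelimiter": "SLASH",
    },
)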
DmsTransferSettings (dict) --
The settings in JSON format for the DMS transfer type of source endpoint.
Possible settings include the following:
ServiceAccessRoleArn - The IAM role that has permission to access the Amazon S3 bucket.
BucketName - The name of the S3 bucket to use.
CompressionType - An optional parameter to use GZIP to compress the target files. Set this value to GZIP to compress the target files. Set it to NONE (the default) or don't use it to leave the files uncompressed.
Shorthand syntax for these settings is as follows: ServiceAccessRoleArn=string,BucketName=string,CompressionType=string
JSON syntax for these settings is as follows: { "ServiceAccessRoleArn": "string", "BucketName": "string", "CompressionType": "none"|"gzip" }
ServiceAccessRoleArn (string) --
The IAM role that has permission to access the Amazon S3 bucket.
BucketName (string) --
The name of the S3 bucket to use.
MongoDbSettings (dict) --
The settings for the MongoDB source endpoint. For more information, see the MongoDbSettings structure.
Username (string) --
The user name you use to access the MongoDB source endpoint.
Password (string) --
The password for the user account you use to access the MongoDB source endpoint.
ServerName (string) --
The name of the server on the MongoDB source endpoint.
Port (integer) --
The port value for the MongoDB source endpoint.
DatabaseName (string) --
The database name on the MongoDB source endpoint.
AuthType (string) --
The authentication type you use to access the MongoDB source endpoint.
When set to "no" , the user name and password parameters are not used and can be empty.
AuthMechanism (string) --
The authentication mechanism you use to access the MongoDB source endpoint.
For the default value, in MongoDB version 2.x, "default" is "mongodb_cr" . For MongoDB version 3.x or later, "default" is "scram_sha_1" . This setting isn't used when AuthType is set to "no" .
NestingLevel (string) --
Specifies either document or table mode.
Default value is "none" . Specify "none" to use document mode. Specify "one" to use table mode.
ExtractDocId (string) --
Specifies the document ID. Use this setting when NestingLevel is set to "none" .
Default value is "false" .
DocsToInvestigate (string) --
Indicates the number of documents to preview to determine the document organization. Use this setting when NestingLevel is set to "one" .
Must be a positive value greater than 0 . Default value is 1000 .
AuthSource (string) --
The MongoDB database name. This setting isn't used when AuthType is set to "no" .
The default is "admin" .
KmsKeyId (string) --
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
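A minimal, hypothetical sketch of a MongoDB source endpoint in table mode follows; note that DocsToInvestigate is passed as a string in this API, and all connection values are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical MongoDB source using table mode (NestingLevel "one").
response = dms.create_endpoint(
    EndpointIdentifier="mongodb-source",
    EndpointType="source",
    EngineName="mongodb",
    MongoDbSettings={
        "ServerName": "mongo.example.com",
        "Port": 27017,
        "DatabaseName": "appdb",
        "AuthType": "password",
        "AuthMechanism": "scram_sha_1",
        "AuthSource": "admin",
        "Username": "dms_user",
        "Password": "example-password",
        "NestingLevel": "one",        # table mode
        "DocsToInvestigate": "1000",  # documents previewed to infer organization
    },
)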
KinesisSettings (dict) --
The settings for the Amazon Kinesis target endpoint. For more information, see the KinesisSettings structure.
StreamArn (string) --
The Amazon Resource Name (ARN) for the Amazon Kinesis Data Streams endpoint.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) for the AWS Identity and Access Management (IAM) role that AWS DMS uses to write to the Kinesis data stream.
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kinesis message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kinesis shards. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same shard, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kinesis message output. The default is false .
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
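A hypothetical boto3 sketch of a Kinesis target endpoint follows; the stream and role ARNs are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical Kinesis Data Streams target.
response = dms.create_endpoint(
    EndpointIdentifier="kinesis-target",
    EndpointType="target",
    EngineName="kinesis",
    KinesisSettings={
        "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/example",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-kinesis-role",
        "MessageFormat": "json",
        "PartitionIncludeSchemaTable": True,  # spread hot keys across shards
        "IncludeTransactionDetails": False,
    },
)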
KafkaSettings (dict) --
The settings for the Apache Kafka target endpoint. For more information, see the KafkaSettings structure.
Broker (string) --
The broker location and port of the Kafka broker that hosts your Kafka instance. Specify the broker in the form `` broker-hostname-or-ip :port `` . For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345" .
Topic (string) --
The topic to which you migrate the data. If you don't specify a topic, AWS DMS specifies "kafka-default-topic" as the migration topic.
MessageFormat (string) --
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
IncludeTransactionDetails (boolean) --
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id , previous transaction_id , and transaction_record_id (the record offset within a transaction). The default is false .
IncludePartitionValue (boolean) --
Shows the partition value within the Kafka message output, unless the partition type is schema-table-type . The default is false .
PartitionIncludeSchemaTable (boolean) --
Prefixes schema and table names to partition values, when the partition type is primary-key-type . Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false .
IncludeTableAlterOperations (boolean) --
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table , drop-table , add-column , drop-column , and rename-column . The default is false .
IncludeControlDetails (boolean) --
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false .
MessageMaxBytes (integer) --
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
IncludeNullAndEmpty (boolean) --
Include NULL and empty columns for records migrated to the endpoint. The default is false .
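A hypothetical boto3 sketch of a Kafka target endpoint follows; the broker address and topic are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical Kafka target; Broker is "hostname-or-ip:port".
response = dms.create_endpoint(
    EndpointIdentifier="kafka-target",
    EndpointType="target",
    EngineName="kafka",
    KafkaSettings={
        "Broker": "broker.example.com:9092",
        "Topic": "dms-migration",      # defaults to "kafka-default-topic" if omitted
        "MessageFormat": "json-unformatted",
        "MessageMaxBytes": 1000000,    # maximum record size in bytes
        "IncludeNullAndEmpty": False,
    },
)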
ElasticsearchSettings (dict) --
The settings for the Elasticsearch target endpoint. For more information, see the ElasticsearchSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) used by the service to access the IAM role.
EndpointUri (string) --
The endpoint for the Elasticsearch cluster. AWS DMS uses HTTPS if a transport protocol (http/https) is not specified.
FullLoadErrorPercentage (integer) --
The maximum percentage of records that can fail to be written before a full load operation stops.
To avoid early failure, this counter is only effective after 1,000 records are transferred. Elasticsearch also has the concept of error monitoring during the last 10 minutes of an Observation Window. If the transfer of all records fails in the last 10 minutes, the full load operation stops.
ErrorRetryDuration (integer) --
The maximum number of seconds for which DMS retries failed API requests to the Elasticsearch cluster.
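A hypothetical sketch of an Elasticsearch target endpoint follows; the role ARN and endpoint URI are placeholders. HTTPS is assumed when no protocol is given in EndpointUri.
import boto3

dms = boto3.client("dms")

# Hypothetical Elasticsearch target.
response = dms.create_endpoint(
    EndpointIdentifier="es-target",
    EndpointType="target",
    EngineName="elasticsearch",
    ElasticsearchSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-es-role",
        "EndpointUri": "search-example.us-east-1.es.amazonaws.com",
        "FullLoadErrorPercentage": 10,  # stop full load past this failure rate
        "ErrorRetryDuration": 300,      # seconds to retry failed API requests
    },
)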
NeptuneSettings (dict) --
The settings for the Amazon Neptune target endpoint. For more information, see the NeptuneSettings structure.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the service role that you created for the Neptune target endpoint. For more information, see Creating an IAM Service Role for Accessing Amazon Neptune as a Target in the AWS Database Migration Service User Guide.
S3BucketName (string) --
The name of the Amazon S3 bucket where AWS DMS can temporarily store migrated graph data in .csv files before bulk-loading it to the Neptune target database. AWS DMS maps the SQL source data to graph data before storing it in these .csv files.
S3BucketFolder (string) --
A folder path where you want AWS DMS to store migrated graph data in the S3 bucket specified by S3BucketName .
ErrorRetryDuration (integer) --
The number of milliseconds for AWS DMS to wait to retry a bulk-load of migrated graph data to the Neptune target database before raising an error. The default is 250.
MaxFileSize (integer) --
The maximum size in kilobytes of migrated graph data stored in a .csv file before AWS DMS bulk-loads the data to the Neptune target database. The default is 1,048,576 KB. If the bulk load is successful, AWS DMS clears the bucket, ready to store the next batch of migrated graph data.
MaxRetryCount (integer) --
The number of times for AWS DMS to retry a bulk load of migrated graph data to the Neptune target database before raising an error. The default is 5.
IamAuthEnabled (boolean) --
If you want AWS Identity and Access Management (IAM) authorization enabled for this endpoint, set this parameter to true . Then attach the appropriate IAM policy document to your service role specified by ServiceAccessRoleArn . The default is false .
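A hypothetical sketch of a Neptune target endpoint follows, staging graph data in S3 before bulk loads; the ARNs and bucket name are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical Neptune target with IAM authorization enabled.
response = dms.create_endpoint(
    EndpointIdentifier="neptune-target",
    EndpointType="target",
    EngineName="neptune",
    NeptuneSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-neptune-role",
        "S3BucketName": "example-neptune-staging",
        "S3BucketFolder": "dms-staging",
        "ErrorRetryDuration": 250,  # milliseconds between bulk-load retries
        "MaxFileSize": 1048576,     # KB staged before each bulk load
        "MaxRetryCount": 5,
        "IamAuthEnabled": True,     # requires the matching IAM policy on the role
    },
)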
RedshiftSettings (dict) --
Settings for the Amazon Redshift endpoint.
AcceptAnyDate (boolean) --
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
AfterConnectScript (string) --
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
BucketFolder (string) --
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see the Amazon Redshift Database Developer Guide .
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
BucketName (string) --
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
CaseSensitiveNames (boolean) --
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true . The default is false .
CompUpdate (boolean) --
If you set CompUpdate to true , Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW . If you set CompUpdate to false , automatic compression is disabled and existing column encodings aren't changed. The default is true .
ConnectionTimeout (integer) --
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
DatabaseName (string) --
The name of the Amazon Redshift data warehouse (service) that you are working with.
DateFormat (string) --
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto .
EmptyAsNull (boolean) --
A value that specifies whether AWS DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false .
EncryptionMode (string) --
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS .
Note
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3 . But you can’t change the existing value from SSE_S3 to SSE_KMS .
To use SSE_S3 , create an AWS Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
ExplicitIds (boolean) --
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false .
FileTransferUploadStreams (integer) --
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview .
FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
LoadTimeout (integer) --
The amount of time to wait (in milliseconds) before timing out of operations performed by AWS DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
MaxFileSize (integer) --
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
Password (string) --
The password for the user named in the username property.
Port (integer) --
The port number for Amazon Redshift. The default value is 5439.
RemoveQuotes (boolean) --
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false .
ReplaceInvalidChars (string) --
A list of characters that you want to replace. Use with ReplaceChars .
ReplaceChars (string) --
A value that specifies the characters to substitute for the invalid characters specified in ReplaceInvalidChars . The default is "?" .
ServerName (string) --
The name of the Amazon Redshift cluster you are using.
ServiceAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
ServerSideEncryptionKmsKeyId (string) --
The AWS KMS key ID. If you are using SSE_KMS for the EncryptionMode , provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
TimeFormat (string) --
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string' , 'epochsecs' , or 'epochmillisecs' . It defaults to auto . Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto .
TrimBlanks (boolean) --
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false .
TruncateColumns (boolean) --
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false .
Username (string) --
An Amazon Redshift user name for a registered user.
WriteBufferSize (integer) --
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
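Since these response fields mirror the request settings, a hedged request sketch may be useful; every connection value, ARN, and bucket name below is a placeholder.
import boto3

dms = boto3.client("dms")

# Hypothetical Redshift target; DMS stages .csv files in the named S3 bucket
# and loads them into the cluster with COPY.
response = dms.create_endpoint(
    EndpointIdentifier="redshift-target",
    EndpointType="target",
    EngineName="redshift",
    RedshiftSettings={
        "ServerName": "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        "Port": 5439,
        "DatabaseName": "dev",
        "Username": "dms_user",
        "Password": "example-password",
        "ServiceAccessRoleArn": "arn:aws:iam::123456789012:role/dms-redshift-role",
        "BucketName": "example-dms-staging",
        "CompUpdate": True,       # let Redshift apply automatic compression
        "TruncateColumns": True,  # truncate oversized VARCHAR/CHAR data
        "MaxFileSize": 1048576,   # KB per staged .csv file
    },
)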
PostgreSQLSettings (dict) --
The settings for the PostgreSQL source and target endpoint. For more information, see the PostgreSQLSettings structure.
AfterConnectScript (string) --
For use with change data capture (CDC) only, this attribute has AWS DMS bypass foreign keys and user triggers to reduce the time it takes to bulk load data.
Example: afterConnectScript=SET session_replication_role='replica'
CaptureDdls (boolean) --
To capture DDL events, AWS DMS creates various artifacts in the PostgreSQL database when the task starts. You can later remove these artifacts.
If this value is set to false , you don't have to create tables or triggers on the source database.
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to PostgreSQL.
Example: maxFileSize=512
DatabaseName (string) --
Database name for the endpoint.
DdlArtifactsSchema (string) --
The schema in which the operational DDL database artifacts are created.
Example: ddlArtifactsSchema=xyzddlschema;
ExecuteTimeout (integer) --
Sets the client statement timeout for the PostgreSQL instance, in seconds. The default value is 60 seconds.
Example: executeTimeout=100;
FailTasksOnLobTruncation (boolean) --
When set to true , this value causes a task to fail if the actual size of a LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SlotName (string) --
Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition request parameter, this attribute also enables using native CDC start points.
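A minimal, hypothetical sketch of a PostgreSQL source endpoint that reuses a pre-created logical replication slot follows; the slot, schema, and connection names are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical PostgreSQL source for CDC with an existing replication slot.
response = dms.create_endpoint(
    EndpointIdentifier="postgres-source",
    EndpointType="source",
    EngineName="postgres",
    PostgreSQLSettings={
        "ServerName": "pg.example.com",
        "Port": 5432,
        "DatabaseName": "appdb",
        "Username": "dms_user",
        "Password": "example-password",
        "SlotName": "dms_slot",           # previously created logical slot
        "CaptureDdls": True,              # create DDL-capture artifacts
        "DdlArtifactsSchema": "dms_ddl",  # schema holding those artifacts
        "FailTasksOnLobTruncation": True,
    },
)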
MySQLSettings (dict) --
The settings for the MySQL source and target endpoint. For more information, see the MySQLSettings structure.
AfterConnectScript (string) --
Specifies a script to run immediately after AWS DMS connects to the endpoint. The migration task continues running regardless of whether the SQL statement succeeds or fails.
DatabaseName (string) --
Database name for the endpoint.
EventsPollInterval (integer) --
Specifies how often to check the binary log for new changes/events when the database is idle.
Example: eventsPollInterval=5;
In the example, AWS DMS checks for changes in the binary logs every five seconds.
TargetDbType (string) --
Specifies where to migrate source tables on the target, either to a single database or multiple databases.
Example: targetDbType=MULTIPLE_DATABASES
MaxFileSize (integer) --
Specifies the maximum size (in KB) of any .csv file used to transfer data to a MySQL-compatible database.
Example: maxFileSize=512
ParallelLoadThreads (integer) --
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
ServerTimezone (string) --
Specifies the time zone for the source MySQL database.
Example: serverTimezone=US/Pacific;
Note: Do not enclose time zones in single quotes.
Username (string) --
Endpoint connection user name.
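A minimal, hypothetical sketch of a MySQL target endpoint follows; TargetDbType uses one of the values from this API's syntax, and all connection values are placeholders.
import boto3

dms = boto3.client("dms")

# Hypothetical MySQL-compatible target keeping each source schema in its own
# database on the target.
response = dms.create_endpoint(
    EndpointIdentifier="mysql-target",
    EndpointType="target",
    EngineName="mysql",
    MySQLSettings={
        "ServerName": "mysql.example.com",
        "Port": 3306,
        "Username": "dms_user",
        "Password": "example-password",
        "TargetDbType": "multiple-databases",
        "ParallelLoadThreads": 2,  # each thread opens its own connection
        "MaxFileSize": 512,        # KB per .csv transfer file
    },
)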
OracleSettings (dict) --
The settings for the Oracle source and target endpoint. For more information, see the OracleSettings structure.
AddSupplementalLogging (boolean) --
Set this attribute to set up table-level supplemental logging for the Oracle database. This attribute enables PRIMARY KEY supplemental logging on all tables selected for a migration task.
If you use this option, you still need to enable database-level supplemental logging.
ArchivedLogDestId (integer) --
Specifies the destination of the archived redo logs. The value should be the same as the DEST_ID number in the v$archived_log table. When working with multiple log destinations (DEST_ID), we recommend that you specify an archived redo logs location identifier. Doing this improves performance by ensuring that the correct logs are accessed from the outset.
AdditionalArchivedLogDestId (integer) --
Set this attribute with archivedLogDestId in a primary/standby setup. This attribute is useful in the case of a switchover. In this case, AWS DMS needs to know which destination to get archived redo logs from to read changes. This need arises because the previous primary instance is now a standby instance after switchover.
AllowSelectNestedTables (boolean) --
Set this attribute to true to enable replication of Oracle tables containing columns that are nested tables or defined types.
ParallelAsmReadThreads (integer) --
Set this attribute to change the number of threads that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 2 (the default) and 8 (the maximum). Use this attribute together with the readAheadBlocks attribute.
ReadAheadBlocks (integer) --
Set this attribute to change the number of read-ahead blocks that DMS configures to perform a Change Data Capture (CDC) load using Oracle Automatic Storage Management (ASM). You can specify an integer value between 1000 (the default) and 200,000 (the maximum).
AccessAlternateDirectly (boolean) --
Set this attribute to false in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to not access redo logs through any specified path prefix replacement using direct file access.
UseAlternateFolderForOnline (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This tells the DMS instance to use any specified prefix replacement to access all online redo logs.
OraclePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the default Oracle root used to access the redo logs.
UsePathPrefix (string) --
Set this string attribute to the required value in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This value specifies the path prefix used to replace the default Oracle root to access the redo logs.
ReplacePathPrefix (boolean) --
Set this attribute to true in order to use the Binary Reader to capture change data for an Amazon RDS for Oracle as the source. This setting tells the DMS instance to replace the default Oracle root with the specified usePathPrefix setting to access the redo logs.
EnableHomogenousTablespace (boolean) --
Set this attribute to enable homogenous tablespace replication and create existing tables or indexes under the same tablespace on the target.
DirectPathNoLog (boolean) --
When set to true , this attribute helps to increase the commit rate on the Oracle target database by writing directly to tables and not writing a trail to database logs.
ArchivedLogsOnly (boolean) --
When this field is set to Y , AWS DMS only accesses the archived redo logs. If the archived redo logs are stored on Oracle ASM only, the AWS DMS user account needs to be granted ASM privileges.
AsmPassword (string) --
For an Oracle source endpoint, your Oracle Automatic Storage Management (ASM) password. You can set this value from the `` asm_user_password `` value. You set this value as part of the comma-separated value that you set to the Password request parameter when you create the endpoint to access transaction logs using Binary Reader. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmServer (string) --
For an Oracle source endpoint, your ASM server address. You can set this value from the asm_server value. You set asm_server as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
AsmUser (string) --
For an Oracle source endpoint, your ASM user name. You can set this value from the asm_user value. You set asm_user as part of the extra connection attribute string to access an Oracle server with Binary Reader that uses ASM. For more information, see Configuration for change data capture (CDC) on an Oracle source database .
CharLengthSemantics (string) --
Specifies whether the length of a character column is in bytes or in characters. To indicate that the character column length is in characters, set this attribute to CHAR . Otherwise, the character column length is in bytes.
Example: charLengthSemantics=CHAR;
DatabaseName (string) --
Database name for the endpoint.
DirectPathParallelLoad (boolean) --
When set to true , this attribute specifies a parallel load when useDirectPathFullLoad is set to Y . This attribute applies only when you use the AWS DMS parallel load feature. Note that the target table can't have any constraints or indexes.
FailTasksOnLobTruncation (boolean) --
When set to true , this attribute causes a task to fail if the actual size of an LOB column is greater than the specified LobMaxSize .
If a task is set to limited LOB mode and this option is set to true , the task fails instead of truncating the LOB data.
NumberDatatypeScale (integer) --
Specifies the number scale. You can select a scale up to 38, or you can select FLOAT. By default, the NUMBER data type is converted to precision 38, scale 10.
Example: numberDataTypeScale=12
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ReadTableSpaceName (boolean) --
When set to true , this attribute supports tablespace replication.
RetryInterval (integer) --
Specifies the number of seconds that the system waits before resending a query.
Example: retryInterval=6;
SecurityDbEncryption (string) --
For an Oracle source endpoint, the transparent data encryption (TDE) password required by AWS DMS to access Oracle redo logs encrypted by TDE using Binary Reader. It is also the `` TDE_Password `` part of the comma-separated value you set to the Password request parameter when you create the endpoint. The SecurityDbEncryption setting is related to this SecurityDbEncryptionName setting. For more information, see Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
SecurityDbEncryptionName (string) --
For an Oracle source endpoint, the name of a key used for the transparent data encryption (TDE) of the columns and tablespaces in an Oracle source database that is encrypted using TDE. The key value is the value of the SecurityDbEncryption setting. For more information on setting the key name value of SecurityDbEncryptionName , see the information and example for setting the securityDbEncryptionName extra connection attribute in Supported encryption methods for using Oracle as a source for AWS DMS in the AWS Database Migration Service User Guide .
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
SybaseSettings (dict) --
The settings for the SAP ASE source and target endpoint. For more information, see the SybaseSettings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
MicrosoftSQLServerSettings (dict) --
The settings for the Microsoft SQL Server source and target endpoint. For more information, see the MicrosoftSQLServerSettings structure.
Port (integer) --
Endpoint TCP port.
BcpPacketSize (integer) --
The maximum size of the packets (in bytes) used to transfer data using BCP.
DatabaseName (string) --
Database name for the endpoint.
ControlTablesFileGroup (string) --
Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
Password (string) --
Endpoint connection password.
ReadBackupOnly (boolean) --
When this attribute is set to Y , AWS DMS only reads changes from transaction log backups and doesn't read from the active transaction log file during ongoing replication. Setting this parameter to Y enables you to control active transaction log file growth during full load and ongoing replication tasks. However, it can add some source latency to ongoing replication.
SafeguardPolicy (string) --
Use this attribute to minimize the need to access the backup log and enable AWS DMS to prevent truncation using one of the following two methods.
Start transactions in the database: This is the default method. When this method is used, AWS DMS prevents TLOG truncation by mimicking a transaction in the database. As long as such a transaction is open, changes that appear after the transaction started aren't truncated. If you need Microsoft Replication to be enabled in your database, then you must choose this method.
Exclusively use sp_repldone within a single task : When this method is used, AWS DMS reads the changes and then uses sp_repldone to mark the TLOG transactions as ready for truncation. Although this method doesn't involve any transactional activities, it can only be used when Microsoft Replication isn't running. Also, when using this method, only one AWS DMS task can access the database at any given time. Therefore, if you need to run parallel AWS DMS tasks against the same database, use the default method.
ServerName (string) --
Fully qualified domain name of the endpoint.
Username (string) --
Endpoint connection user name.
UseBcpFullLoad (boolean) --
Use this attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
IBMDb2Settings (dict) --
The settings for the IBM Db2 LUW source endpoint. For more information, see the IBMDb2Settings structure.
DatabaseName (string) --
Database name for the endpoint.
Password (string) --
Endpoint connection password.
Port (integer) --
Endpoint TCP port.
ServerName (string) --
Fully qualified domain name of the endpoint.
SetDataCaptureChanges (boolean) --
Enables ongoing replication (CDC) as a BOOLEAN value. The default is true.
CurrentLsn (string) --
For ongoing replication (CDC), use CurrentLSN to specify a log sequence number (LSN) where you want the replication to start.
MaxKBytesPerRead (integer) --
Maximum number of bytes per read, as a NUMBER value. The default is 64 KB.
Username (string) --
Endpoint connection user name.