Commands

class sqlalchemy_redshift.commands.AlterTableAppendCommand(source, target, ignore_extra=False, fill_target=False)[source]

Prepares an ALTER TABLE APPEND statement to efficiently move data from one table to another, which is much faster than an INSERT INTO … SELECT.

CAUTION: This moves the underlying storage blocks from the source table to the target table, so the source table will be empty after this command finishes.

See the documentation for additional restrictions and other information: https://docs.aws.amazon.com/redshift/latest/dg/r_ALTER_TABLE_APPEND.html

Parameters:

source: sqlalchemy.Table

The table to move data from. Must be an existing permanent table.

target: sqlalchemy.Table

The table to move data into. Must be an existing permanent table.

ignore_extra: bool, optional

If the source table includes columns not present in the target table, discard those columns. Mutually exclusive with fill_target.

fill_target: bool, optional

If the target table includes columns not present in the source table, fill those columns with the default column value or NULL. Mutually exclusive with ignore_extra.
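
A minimal usage sketch; the engine URL, table definitions, and column names below are placeholders, not part of the library's documented API:

import sqlalchemy as sa
from sqlalchemy_redshift.commands import AlterTableAppendCommand

engine = sa.create_engine('redshift+psycopg2://example')  # placeholder URL

metadata = sa.MetaData()
# Both tables must already exist in Redshift as permanent tables.
staging_events = sa.Table('staging_events', metadata, sa.Column('id', sa.Integer))
events = sa.Table('events', metadata, sa.Column('id', sa.Integer))

# Move every row from staging_events into events; staging_events is left empty.
append = AlterTableAppendCommand(
    source=staging_events,
    target=events,
    fill_target=True,  # fill columns missing from the source with defaults or NULL
)

with engine.connect() as conn:
    conn.execute(append)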

class sqlalchemy_redshift.commands.Compression[source]

An enumeration.

class sqlalchemy_redshift.commands.CopyCommand(to, data_location, access_key_id=None, secret_access_key=None, session_token=None, aws_partition='aws', aws_account_id=None, iam_role_name=None, format=None, quote=None, path_file='auto', delimiter=None, fixed_width=None, compression=None, accept_any_date=False, accept_inv_chars=None, blanks_as_null=False, date_format=None, empty_as_null=False, encoding=None, escape=False, explicit_ids=False, fill_record=False, ignore_blank_lines=False, ignore_header=None, dangerous_null_delimiter=None, remove_quotes=False, roundec=False, time_format=None, trim_blanks=False, truncate_columns=False, comp_rows=None, comp_update=None, max_error=None, no_load=False, stat_update=None, manifest=False, region=None, iam_role_arns=None)[source]

Prepares a Redshift COPY statement.

Parameters:

to : sqlalchemy.Table or iterable of sqlalchemy.ColumnElement

The table or columns to copy data into.

data_location : str

The Amazon S3 location to copy from, or the location of a manifest file if the manifest option is used.

access_key_id: str, optional

AWS Access Key ID. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

secret_access_key: str, optional

AWS Secret Access Key. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

session_token : str, optional

iam_role_arns : str or list of strings, optional

Either a single ARN or a list of ARNs of roles to assume for the COPY. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role-based credentials (aws_account_id and iam_role_name) separately.

aws_partition: str, optional

AWS partition to use with role-based credentials. Defaults to 'aws'. Not applicable when using key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

aws_account_id: str, optional

AWS account ID for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

iam_role_name: str, optional

IAM role name for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

format : Format, optional

Indicates the type of file to copy from.

quote : str, optional

Specifies the character to be used as the quote character when using format=Format.csv. The default is a double quotation mark ( " ).

delimiter : str, optional

Field delimiter. Defaults to '|'.

path_file : str, optional

Specifies the Amazon S3 location of a JSONPaths file used to explicitly map Avro or JSON data elements to columns. Defaults to 'auto'.

fixed_width: iterable of (str, int), optional

List of (column name, length) pairs describing the fixed-width layout of the data to load.

compression : Compression, optional

Indicates the compression type of the file to copy.

accept_any_date : bool, optional

Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded as NULL without generating an error. Defaults to False.

accept_inv_chars : str, optional

Enables loading of data into VARCHAR columns even if the data contains invalid UTF-8 characters. When specified, each invalid UTF-8 byte is replaced by the given replacement character.

blanks_as_null : bool, optional

Boolean value denoting whether to load VARCHAR fields with whitespace-only values as NULL instead of whitespace.

date_format : str, optional

Specifies the date format. If you want Amazon Redshift to automatically recognize and convert the date format in your source data, specify 'auto'.

empty_as_null : bool, optional

Boolean value denoting whether to load VARCHAR fields with empty values as NULL instead of empty string.

encoding : Encoding, optional

Specifies the encoding type of the load data. Defaults to Encoding.utf8.

escape : bool, optional

When this parameter is specified, the backslash character (\) in input data is treated as an escape character. The character that immediately follows the backslash character is loaded into the table as part of the current column value, even if it is a character that normally serves a special purpose.

explicit_ids : bool, optional

Override the autogenerated IDENTITY column values with explicit values from the source data files for the tables.

fill_record : bool, optional

Allows data files to be loaded when contiguous columns are missing at the end of some of the records. The missing columns are filled with either zero-length strings or NULLs, as appropriate for the data types of the columns in question.

ignore_blank_lines : bool, optional

Ignores blank lines that only contain a line feed in a data file and does not try to load them.

ignore_header : int, optional

Number of lines to skip at the start of each file.

dangerous_null_delimiter : str, optional

Optional string value denoting what to interpret as a NULL value from the file. Note that this parameter is not properly quoted due to a difference between how Redshift's and Postgres's COPY commands interpret strings. For example, null bytes must be passed to Redshift's NULL verbatim as '\0', whereas Postgres's NULL accepts '\x00'.

remove_quotes : bool, optional

Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained.

roundec : bool, optional

Rounds up numeric values when the scale of the input value is greater than the scale of the column.

time_format : str, optional

Specifies the time format. If you want Amazon Redshift to automatically recognize and convert the time format in your source data, specify 'auto'.

trim_blanks : bool, optional

Removes the trailing whitespace characters from a VARCHAR string.

truncate_columns : bool, optional

Truncates data in columns to the appropriate number of characters so that it fits the column specification.

comp_rows : int, optional

Specifies the number of rows to be used as the sample size for compression analysis.

comp_update : bool, optional

Controls whether compression encodings are automatically applied. If omitted or None, COPY applies automatic compression only if the target table is empty and all the table columns either have RAW encoding or no encoding. If True, COPY applies automatic compression if the table is empty, even if the table columns already have encodings other than RAW. If False, automatic compression is disabled.

max_error : int, optional

If the load returns max_error errors or more, the load fails. Defaults to 100000.

no_load : bool, optional

Checks the validity of the data file without actually loading the data.

stat_update : bool, optional

Update statistics automatically regardless of whether the table is initially empty.

manifest : bool, optional

Boolean value denoting whether data_location is a manifest file.

region: str, optional

The AWS region where the target S3 bucket is located, if the Redshift cluster isn’t in the same region as the S3 bucket.
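
A minimal sketch of loading CSV files from S3 into an existing table; the S3 path, role ARN, table definition, and engine URL below are placeholders:

import sqlalchemy as sa
from sqlalchemy_redshift.commands import CopyCommand, Format

engine = sa.create_engine('redshift+psycopg2://example')  # placeholder URL

metadata = sa.MetaData()
users = sa.Table(
    'users', metadata,
    sa.Column('id', sa.Integer),
    sa.Column('name', sa.String),
)

copy = CopyCommand(
    to=users,
    data_location='s3://example-bucket/users/',  # placeholder S3 prefix
    iam_role_arns='arn:aws:iam::123456789012:role/RedshiftCopyRole',  # placeholder ARN
    format=Format.csv,
    ignore_header=1,     # skip one header line in each file
    empty_as_null=True,  # load empty VARCHAR values as NULL
)

with engine.connect() as conn:
    conn.execute(copy)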

class sqlalchemy_redshift.commands.CreateLibraryCommand(library_name, location, access_key_id=None, secret_access_key=None, session_token=None, aws_account_id=None, iam_role_name=None, replace=False, region=None, iam_role_arns=None)[source]

Prepares a Redshift CREATE LIBRARY statement. https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_LIBRARY.html

Parameters:

library_name: str, required

The name of the library to install.

location: str, required

The location of the library file. Must be either an HTTP/HTTPS URL or an S3 location.

access_key_id: str, optional

AWS Access Key ID. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

secret_access_key: str, optional

AWS Secret Access Key. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

session_token : str, optional

iam_role_arns : str or list of strings, optional

Either a single ARN or a list of ARNs of roles to assume when creating the library. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role-based credentials (aws_account_id and iam_role_name) separately.

aws_partition: str, optional

AWS partition to use with role-based credentials. Defaults to 'aws'. Not applicable when using key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

aws_account_id: str, optional

AWS account ID for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

iam_role_name: str, optional

IAM role name for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

replace: bool, optional, default False

Controls the presence of OR REPLACE in the compiled statement. See the command documentation for details.

region: str, optional

The AWS region where the library’s S3 bucket is located, if the Redshift cluster isn’t in the same region as the S3 bucket.
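
A minimal sketch of installing a library from S3 with role-based credentials; the library name, S3 location, role ARN, and engine URL are placeholders:

import sqlalchemy as sa
from sqlalchemy_redshift.commands import CreateLibraryCommand

engine = sa.create_engine('redshift+psycopg2://example')  # placeholder URL

create_lib = CreateLibraryCommand(
    library_name='my_helpers',                                    # placeholder name
    location='s3://example-bucket/libraries/my_helpers.zip',      # placeholder path
    iam_role_arns='arn:aws:iam::123456789012:role/RedshiftRole',  # placeholder ARN
    replace=True,  # compile as CREATE OR REPLACE LIBRARY
)

with engine.connect() as conn:
    conn.execute(create_lib)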

class sqlalchemy_redshift.commands.Encoding[source]

An enumeration.

class sqlalchemy_redshift.commands.Format[source]

An enumeration.

class sqlalchemy_redshift.commands.RefreshMaterializedView(name)[source]

Prepares a Redshift REFRESH MATERIALIZED VIEW statement. See: docs.aws.amazon.com/redshift/latest/dg/materialized-view-refresh-sql-command

This reruns the query underlying the view to ensure the materialized data is up to date.

>>> import sqlalchemy as sa
>>> from sqlalchemy_redshift.dialect import RefreshMaterializedView
>>> engine = sa.create_engine('redshift+psycopg2://example')
>>> refresh = RefreshMaterializedView('materialized_view_of_users')
>>> print(refresh.compile(engine))

REFRESH MATERIALIZED VIEW materialized_view_of_users

This can be included in any execute() statement.
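
For example, executing the statement built above on a connection (a sketch only; the engine and view name come from the doctest):

with engine.connect() as conn:
    conn.execute(refresh)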

class sqlalchemy_redshift.commands.UnloadFromSelect(select, unload_location, access_key_id=None, secret_access_key=None, session_token=None, aws_partition='aws', aws_account_id=None, iam_role_name=None, manifest=False, delimiter=None, fixed_width=None, encrypted=False, gzip=False, add_quotes=False, null=None, escape=False, allow_overwrite=False, parallel=True, header=False, region=None, max_file_size=None, format=None, iam_role_arns=None)[source]

Prepares a Redshift UNLOAD statement to write the results of a query to Amazon S3. https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD_command_examples.html

Parameters:

select: sqlalchemy.sql.selectable.Selectable

The selectable Core Table Expression query to unload from.

unload_location: str

The Amazon S3 location where the files will be created, or the location of the manifest file if the manifest option is used.

access_key_id: str, optional

AWS Access Key ID. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

secret_access_key: str, optional

AWS Secret Access Key. Required unless you supply role-based credentials (aws_account_id and iam_role_name, or iam_role_arns).

session_token : str, optional

iam_role_arns : str or list of strings, optional

Either a single ARN or a list of ARNs of roles to assume when unloading. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role-based credentials (aws_account_id and iam_role_name) separately.

aws_partition: str, optional

AWS partition to use with role-based credentials. Defaults to 'aws'. Not applicable when using key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

aws_account_id: str, optional

AWS account ID for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

iam_role_name: str, optional

IAM role name for role-based credentials. Required unless you supply key-based credentials (access_key_id and secret_access_key) or role ARNs (iam_role_arns) directly.

manifest: bool, optional

Boolean value denoting whether to also write a manifest file listing the unloaded data files.

delimiter: str, optional

Field delimiter. Defaults to '|'.

fixed_width: iterable of (str, int), optional

List of (column name, length) pairs to control fixed-width output.

encrypted: bool, optional

Write to encrypted S3 key.

gzip: bool, optional

Create file using GZIP compression.

add_quotes: bool, optional

Quote fields so that fields containing the delimiter can be distinguished.

null: str, optional

Write null values as the given string. Defaults to ‘’.

escape: bool, optional

For CHAR and VARCHAR columns in delimited unload files, an escape character (\) is placed before every occurrence of the following characters: \r, \n, \, and the specified delimiter string. If add_quotes is specified, " and ' are also escaped.

allow_overwrite: bool, optional

Overwrite the key at unload_location in the S3 bucket.

parallel: bool, optional

If disabled, unloads sequentially as one file.

header: bool, optional

Boolean value denoting whether to add a header line containing column names at the top of each output file. Text transformation options, such as delimiter, add_quotes, and escape, also apply to the header line. header can't be used with fixed_width.

region: str, optional

The AWS region where the target S3 bucket is located, if the Redshift cluster isn’t in the same region as the S3 bucket.

max_file_size: int, optional

Maximum size (in bytes) of files to create in S3. This must be between 5 * 1024**2 and 6.24 * 1024**3. Note that Redshift appears to round to the nearest KiB.

format : Format, optional

Indicates the type of file to unload to.
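
A minimal sketch of unloading a query result to S3; the table, S3 prefix, role ARN, and engine URL are placeholders (on SQLAlchemy 1.3, use sa.select([users]) instead of sa.select(users)):

import sqlalchemy as sa
from sqlalchemy_redshift.commands import UnloadFromSelect

engine = sa.create_engine('redshift+psycopg2://example')  # placeholder URL

metadata = sa.MetaData()
users = sa.Table(
    'users', metadata,
    sa.Column('id', sa.Integer),
    sa.Column('name', sa.String),
)

unload = UnloadFromSelect(
    select=sa.select(users),                                      # query to unload
    unload_location='s3://example-bucket/unload/users_',          # placeholder prefix
    iam_role_arns='arn:aws:iam::123456789012:role/RedshiftRole',  # placeholder ARN
    manifest=True,         # also write a manifest listing the output files
    delimiter=',',
    allow_overwrite=True,  # overwrite existing objects at the prefix
)

with engine.connect() as conn:
    conn.execute(unload)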

sqlalchemy_redshift.commands.compile_refresh_materialized_view(element, compiler, **kw)[source]

Formats and returns the refresh statement for materialized views.

sqlalchemy_redshift.commands.visit_alter_table_append_command(element, compiler, **kw)[source]

Returns the actual SQL query for the AlterTableAppendCommand class.

sqlalchemy_redshift.commands.visit_copy_command(element, compiler, **kw)[source]

Returns the actual SQL query for the CopyCommand class.

sqlalchemy_redshift.commands.visit_create_library_command(element, compiler, **kw)[source]

Returns the actual SQL query for the CreateLibraryCommand class.

sqlalchemy_redshift.commands.visit_unload_from_select(element, compiler, **kw)[source]

Returns the actual SQL query for the UnloadFromSelect class.