Commands

class sqlalchemy_redshift.commands.CopyCommand(to, data_location, access_key_id=None, secret_access_key=None, session_token=None, aws_account_id=None, iam_role_name=None, format=None, quote=None, path_file='auto', delimiter=None, fixed_width=None, compression=None, accept_any_date=False, accept_inv_chars=None, blanks_as_null=False, date_format=None, empty_as_null=False, encoding=None, escape=False, explicit_ids=False, fill_record=False, ignore_blank_lines=False, ignore_header=None, dangerous_null_delimiter=None, remove_quotes=False, roundec=False, time_format=None, trim_blanks=False, truncate_columns=False, comp_rows=None, comp_update=None, max_error=None, no_load=False, stat_update=None, manifest=False)[source]

Prepares a Redshift COPY statement. A usage sketch follows the parameter list.

Parameters:

to : sqlalchemy.Table or iterable of sqlalchemy.ColumnElement

The table or columns to copy data into

data_location : str

The Amazon S3 location from which to copy, or a manifest file if the manifest option is used

access_key_id: str, optional

Access Key. Required unless you supply role-based credentials (aws_account_id and iam_role_name)

secret_access_key: str, optional

Secret access key. Required unless you supply role-based credentials (aws_account_id and iam_role_name)

session_token : str, optional

Temporary session token to use together with key-based credentials

aws_account_id: str, optional

AWS account ID for role-based credentials. Required unless you supply key based credentials (access_key_id and secret_access_key)

iam_role_name: str, optional

IAM role name for role-based credentials. Required unless you supply key based credentials (access_key_id and secret_access_key)

format : str, optional

CSV, JSON, or AVRO. Indicates the type of file to copy from

quote : str, optional

Specifies the character to be used as the quote character when using format='CSV'. The default is a double quotation mark ( " )

delimiter : str, optional

Field delimiter. Defaults to '|'

path_file : str, optional

Specifies an Amazon S3 location of a JSONPaths file to explicitly map Avro or JSON data elements to columns. Defaults to 'auto'

fixed_width: iterable of (str, int), optional

List of (column name, length) pairs describing the fixed-width layout of the source data.

compression : str, optional

'GZIP', 'LZOP', or 'BZIP2'. Indicates the compression type of the file to copy

accept_any_date : bool, optional

Allows any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded as NULL without generating an error. Defaults to False

accept_inv_chars : str, optional

Enables loading of data into VARCHAR columns even if the data contains invalid UTF-8 characters. When specified, each invalid UTF-8 byte is replaced by the specified replacement character

blanks_as_null : bool, optional

Boolean value denoting whether to load VARCHAR fields containing only whitespace as NULL instead of whitespace

date_format : str, optional

Specifies the date format. If you want Amazon Redshift to automatically recognize and convert the date format in your source data, specify 'auto'

empty_as_null : bool, optional

Boolean value denoting whether to load VARCHAR fields with empty values as NULL instead of empty string

encoding : str, optional

'UTF8', 'UTF16', 'UTF16LE', or 'UTF16BE'. Specifies the encoding type of the load data. Defaults to 'UTF8'

escape : bool, optional

When this parameter is specified, the backslash character (\) in input data is treated as an escape character. The character that immediately follows the backslash character is loaded into the table as part of the current column value, even if it is a character that normally serves a special purpose

explicit_ids : bool, optional

Overrides autogenerated IDENTITY column values with explicit values from the source data files

fill_record : bool, optional

Allows data files to be loaded when contiguous columns are missing at the end of some of the records. The missing columns are filled with either zero-length strings or NULLs, as appropriate for the data types of the columns in question.

ignore_blank_lines : bool, optional

Ignores blank lines that only contain a line feed in a data file and does not try to load them

ignore_header : int, optional

Number of lines to skip at the start of each file

dangerous_null_delimiter : str, optional

String value denoting what to interpret as a NULL value in the file. Note that this parameter is not properly quoted due to a difference between how Redshift's and PostgreSQL's COPY commands interpret strings. For example, null bytes must be passed to Redshift's NULL verbatim as '\0', whereas PostgreSQL's NULL accepts '\x00'.

remove_quotes : bool, optional

Removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained.

roundec : bool, optional

Rounds up numeric values when the scale of the input value is greater than the scale of the column

time_format : str, optional

Specifies the time format. If you want Amazon Redshift to automatically recognize and convert the time format in your source data, specify 'auto'

trim_blanks : bool, optional

Removes the trailing white space characters from a VARCHAR string

truncate_columns : bool, optional

Truncates data in columns to the appropriate number of characters so that it fits the column specification

comp_rows : int, optional

Specifies the number of rows to be used as the sample size for compression analysis

comp_update : bool, optional

Controls whether compression encodings are automatically applied. If omitted or None, COPY applies automatic compression only if the target table is empty and all the table columns either have RAW encoding or no encoding. If True, COPY applies automatic compression if the table is empty, even if the table columns already have encodings other than RAW. If False, automatic compression is disabled

max_error : int, optional

If the load returns max_error or more errors, it fails. Defaults to 100000

no_load : bool, optional

Checks the validity of the data file without actually loading the data

stat_update : bool, optional

Boolean value denoting whether to update table statistics automatically, regardless of whether the table is initially empty

manifest : bool, optional

Boolean value denoting whether data_location is a manifest file.
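
A minimal usage sketch, assuming a hypothetical my_table and S3 bucket; the engine URL, bucket name, and credentials below are placeholders:

    import sqlalchemy as sa
    from sqlalchemy_redshift.commands import CopyCommand

    metadata = sa.MetaData()
    my_table = sa.Table(
        'my_table', metadata,
        sa.Column('id', sa.Integer),
        sa.Column('name', sa.String(64)),
    )

    # Key-based credentials shown; pass aws_account_id and iam_role_name
    # for role-based credentials instead.
    copy = CopyCommand(
        to=my_table,
        data_location='s3://mybucket/data/my_table.csv',  # placeholder
        access_key_id='<access-key-id>',
        secret_access_key='<secret-access-key>',
        format='CSV',
        ignore_header=1,  # skip one header line in each file
    )

    engine = sa.create_engine('redshift+psycopg2://user:pass@host:5439/db')
    with engine.begin() as conn:
        conn.execute(copy)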

class sqlalchemy_redshift.commands.UnloadFromSelect(select, unload_location, access_key_id=None, secret_access_key=None, session_token=None, aws_account_id=None, iam_role_name=None, manifest=False, delimiter=None, fixed_width=None, encrypted=False, gzip=False, add_quotes=False, null=None, escape=False, allow_overwrite=False, parallel=True)[source]

Prepares a Redshift UNLOAD statement to unload the results of a query to Amazon S3; a usage sketch follows the parameter list. See https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD_command_examples.html

Parameters:

select: sqlalchemy.sql.selectable.Selectable

The Core selectable (table expression or query) to unload from.

unload_location: str

The Amazon S3 location where the file will be created, or a manifest file if the manifest option is used

access_key_id: str, optional

Access Key. Required unless you supply role-based credentials (aws_account_id and iam_role_name)

secret_access_key: str, optional

Secret access key. Required unless you supply role-based credentials (aws_account_id and iam_role_name)

session_token : str, optional

Temporary session token to use together with key-based credentials

aws_account_id: str, optional

AWS account ID for role-based credentials. Required unless you supply key based credentials (access_key_id and secret_access_key)

iam_role_name: str, optional

IAM role name for role-based credentials. Required unless you supply key based credentials (access_key_id and secret_access_key)

manifest: bool, optional

Boolean value denoting whether data_location is a manifest file.

delimiter: str, optional

Field delimiter. Defaults to '|'

fixed_width: iterable of (str, int), optional

List of (column name, length) pairs to control fixed-width output.

encrypted: bool, optional

Write to encrypted S3 key.

gzip: bool, optional

Create file using GZIP compression.

add_quotes: bool, optional

Quote fields so that fields containing the delimiter can be distinguished.

null: str, optional

Write null values as the given string. Defaults to ''.

escape: bool, optional

For CHAR and VARCHAR columns in delimited unload files, an escape character (\) is placed before every occurrence of the following characters: \r, \n, \, and the specified delimiter string. If add_quotes is specified, " and ' are also escaped.

allow_overwrite: bool, optional

Overwrite the key at unload_location in the S3 bucket.

parallel: bool, optional

If disabled, unloads sequentially as one file. Defaults to True.
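
A minimal usage sketch, reusing the my_table and engine objects from the CopyCommand example above; the account ID, role name, and bucket prefix are placeholders:

    from sqlalchemy_redshift.commands import UnloadFromSelect

    unload = UnloadFromSelect(
        select=my_table.select(),            # any Core selectable works
        unload_location='s3://mybucket/unload/my_table_',  # placeholder
        aws_account_id='123456789012',       # placeholder account ID
        iam_role_name='RedshiftUnloadRole',  # placeholder role name
        manifest=True,   # also write a manifest listing the produced files
        delimiter=',',
    )

    with engine.begin() as conn:
        conn.execute(unload)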

sqlalchemy_redshift.commands.visit_copy_command(element, compiler, **kw)[source]

Returns the actual SQL query for the CopyCommand class.

sqlalchemy_redshift.commands.visit_unload_from_select(element, compiler, **kw)[source]

Returns the actual SQL query for the UnloadFromSelect class.
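
Both visitors are registered through SQLAlchemy's compiler extension and run automatically when a command is compiled or executed; they are not called directly. To inspect the generated SQL without touching the database, a command can be compiled against the Redshift dialect, as in this sketch (reusing the copy object from the CopyCommand example; bound parameters render as placeholders):

    from sqlalchemy_redshift.dialect import RedshiftDialect

    print(copy.compile(dialect=RedshiftDialect()))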