Note
This plugin is part of the amazon.aws collection (version 1.5.1).
You might already have this collection installed if you are using the ansible package. It is not included in ansible-core. To check whether it is installed, run ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install amazon.aws.
To use it in a playbook, specify: amazon.aws.aws_s3.
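If you track collections in a requirements file rather than installing them ad hoc, an entry along these lines should work (the version constraint is illustrative):

# requirements.yml (illustrative version pin)
collections:
  - name: amazon.aws
    version: ">=1.5.1"

Install from the file with: ansible-galaxy collection install -r requirements.yml.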
New in version 1.0.0 of amazon.aws
Note
This module has a corresponding action plugin.
The below requirements are needed on the host that executes this module.
Parameter | Choices/Defaults | Comments |
---|---|---|
aws_access_key string | | AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used. If profile is set this parameter is ignored. Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. aliases: ec2_access_key, access_key |
aws_ca_bundle path | | The location of a CA Bundle to use when validating SSL certificates. Only used for boto3 based modules. Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally. |
aws_config dictionary | | A dictionary to modify the botocore configuration. Parameters can be found at https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html#botocore.config.Config. Only the 'user_agent' key is used for boto modules. See http://boto.cloudhackers.com/en/latest/boto_config_tut.html#boto for more boto configuration. |
aws_secret_key string | | AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used. If profile is set this parameter is ignored. Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. aliases: ec2_secret_key, secret_key |
bucket string / required | | Bucket name. |
content string added in 1.3.0 of amazon.aws | | The content to PUT into an object. The parameter value will be treated as a string and converted to UTF-8 before sending it to S3. To send binary data, use the content_base64 parameter instead. Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise. |
content_base64 string added in 1.3.0 of amazon.aws | | The base64-encoded binary data to PUT into an object. Use this if you need to put raw binary data, and don't forget to encode it in base64. Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise. See the sketch after this table. |
debug_botocore_endpoint_logs boolean | | Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used. |
dest path | | The destination file path when downloading an object/key with a GET operation. |
dualstack boolean | | Enables Amazon S3 Dual-Stack Endpoints, allowing S3 communications using both IPv4 and IPv6. Requires at least botocore version 1.4.45. |
ec2_url string | | URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. aliases: aws_endpoint_url, endpoint_url |
encrypt boolean | | When set for PUT mode, asks for server-side encryption. |
encryption_kms_key_id string | | KMS key id to use when encrypting objects with encryption_mode=aws:kms. Ignored if encryption_mode is not aws:kms. |
encryption_mode string | | What encryption mode to use if encrypt=true. See the SSE-KMS sketch after this table. |
expiry integer | Default: 600 | Time limit (in seconds) for the URL generated and returned by S3/Walrus when performing a mode=put or mode=geturl operation. See the presigned URL sketch after this table. aliases: expiration |
headers dictionary | | Custom headers for PUT operation, as a dictionary of 'key=value' and 'key=value,key=value'. |
ignore_nonexistent_bucket boolean | | Overrides initial bucket lookups in case bucket or IAM policies are restrictive. For example, a user may have the GetObject permission but no other permissions; in that case using mode: get will fail unless ignore_nonexistent_bucket=true is specified. See the sketch after this table. |
marker string | | Specifies the key to start with when using list mode. Object keys are returned in alphabetical order, starting with the key after the marker. |
max_keys integer | Default: 1000 | Max number of results to return in list mode. Set this if you want to retrieve fewer than the default 1000 keys. |
metadata dictionary | | Metadata for PUT operation, as a dictionary of 'key=value' and 'key=value,key=value'. See the sketch after this table. |
mode string / required | | Switches the module behaviour between put (upload), get (download), geturl (return download URL, Ansible 1.3+), getstr (download object as string, Ansible 1.3+), list (list keys, Ansible 2.0+), create (bucket), delete (bucket), and delobj (delete object, Ansible 2.0+). |
object string | | Keyname of the object inside the bucket. Can be used to create "virtual directories"; see the examples. |
overwrite string | Default: "always" | Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations. Must be a Boolean, always, never or different; true is the same as always and false is equal to never. When this is set to different, the MD5 sum of the local file is compared with the ETag of the object/key in S3. The ETag may or may not be an MD5 digest of the object data; see the ETag response header at https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html. aliases: force |
permission list / elements=string | Default: ["private"] | This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are private, public-read, public-read-write, authenticated-read for a bucket or private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, bucket-owner-full-control for an object. Multiple permissions can be specified as a list; see the sketch after this table. |
prefix string | Default: "" | Limits the response to keys that begin with the specified prefix for list mode. |
profile string | | Uses a boto profile. Only works with boto >= 2.24.0. Using profile will override aws_access_key, aws_secret_key and security_token, and support for passing them at the same time as profile has been deprecated. aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01. aliases: aws_profile |
region string | | The AWS region to use. If not specified then the value of the AWS_REGION or EC2_REGION environment variable, if any, is used. See http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region. aliases: aws_region, ec2_region |
retries integer | Default: 0 | On recoverable failure, how many times to retry before actually failing. aliases: retry |
rgw boolean | | Enable Ceph RGW S3 support. This option requires an explicit url via s3_url. |
s3_url string | | S3 URL endpoint for usage with Ceph, Eucalyptus and fakes3 etc. Otherwise assumes AWS. aliases: S3_URL |
security_token string | | AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used. If profile is set this parameter is ignored. Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. aliases: aws_security_token, access_token |
src path | | The source file path when performing a PUT operation. Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise. |
validate_certs boolean | | When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0. |
version string | | Version ID of the object inside the bucket. Can be used to get a specific version of a file if versioning is enabled in the target bucket. |
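A few supplementary sketches follow before the notes and the main examples. First, content_base64: a minimal PUT of inline binary data, where the object key and payload are illustrative and the b64encode filter simply stands in for whatever produces your base64 text.

- name: PUT an object from inline base64-encoded data
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/binary.dat
    # illustrative payload; substitute your own base64-encoded bytes
    content_base64: "{{ 'Hello, world!' | b64encode }}"
    mode: put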
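For encrypt, encryption_mode and encryption_kms_key_id, a sketch of a PUT that requests SSE-KMS; the key alias is a placeholder for a key you actually own.

- name: PUT with SSE-KMS server-side encryption
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    encrypt: true
    encryption_mode: aws:kms
    encryption_kms_key_id: alias/my-key  # placeholder KMS key alias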
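The expiry parameter only applies to presigned URLs. Here is a sketch of mode=geturl with a 30-minute lifetime, registering the result so the returned url value can be reused later; the register variable name is arbitrary.

- name: Generate a presigned download URL valid for 30 minutes
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    mode: geturl
    expiry: 1800
  register: presigned

- name: Show the presigned URL returned by the module
  ansible.builtin.debug:
    msg: "{{ presigned.url }}"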
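When IAM grants only object-level permissions such as s3:GetObject, the module's initial bucket lookup can fail. A sketch of a GET that skips that lookup:

- name: GET an object when the credentials cannot list or head the bucket
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    dest: /usr/local/myfile.txt
    mode: get
    ignore_nonexistent_bucket: true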
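Because permission accepts a list, several canned ACLs can be applied in one PUT. Whether a given combination makes sense depends on your access requirements, so treat this as a sketch:

- name: PUT an object and apply more than one canned ACL
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    permission:
      - bucket-owner-full-control
      - public-read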
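Finally, the metadata and headers parameters are declared as dictionaries, so a plain YAML mapping should also be accepted in place of the 'key=value,key=value' string shown in the main examples; this is an assumption based on the declared type rather than something those examples demonstrate.

- name: PUT with metadata supplied as a YAML mapping (assumed equivalent to the string form)
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    metadata:
      Content-Encoding: gzip
      Cache-Control: no-cache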
Note
If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence: AWS_URL or EC2_URL, AWS_PROFILE or AWS_DEFAULT_PROFILE, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION, AWS_CA_BUNDLE.
AWS_REGION or EC2_REGION can typically be used to specify the AWS region, when required, but this can also be configured in the boto config file.
Examples

- name: Simple PUT operation
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put

- name: PUT operation from a rendered template
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /object.yaml
    content: "{{ lookup('template', 'templates/object.yaml.j2') }}"
    mode: put

- name: Simple PUT operation in Ceph RGW S3
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    rgw: true
    s3_url: "http://localhost:8000"

- name: Simple GET operation
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    dest: /usr/local/myfile.txt
    mode: get

- name: Get a specific version of an object.
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    version: 48c9ee5131af7a716edc22df9772aa6f
    dest: /usr/local/myfile.txt
    mode: get

- name: PUT/upload with metadata
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    metadata: 'Content-Encoding=gzip,Cache-Control=no-cache'

- name: PUT/upload with custom headers
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt
    mode: put
    headers: '[email protected]'

- name: List keys simple
  amazon.aws.aws_s3:
    bucket: mybucket
    mode: list

- name: List keys all options
  amazon.aws.aws_s3:
    bucket: mybucket
    mode: list
    prefix: /my/desired/
    marker: /my/desired/0023.txt
    max_keys: 472

- name: Create an empty bucket
  amazon.aws.aws_s3:
    bucket: mybucket
    mode: create
    permission: public-read

- name: Create a bucket with key as directory, in the EU region
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/directory/path
    mode: create
    region: eu-west-1

- name: Delete a bucket and all contents
  amazon.aws.aws_s3:
    bucket: mybucket
    mode: delete

- name: GET an object but don't download if the file checksums match. New in 2.0
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    dest: /usr/local/myfile.txt
    mode: get
    overwrite: different

- name: Delete an object from a bucket
  amazon.aws.aws_s3:
    bucket: mybucket
    object: /my/desired/key.txt
    mode: delobj
Common return values are documented separately; the following are the fields unique to this module:
Key | Returned | Description |
---|---|---|
contents string | (for getstr operation) | Contents of the object as string. Sample: Hello, world! |
expiry integer | (for geturl operation) | Number of seconds the presigned url is valid for. Sample: 600 |
msg string | always | Message indicating the status of the operation. Sample: PUT operation complete |
s3_keys list / elements=string | (for list operation) | List of object keys. Sample: ['prefix1/', 'prefix1/key1', 'prefix1/key2'] |
url string | (for put and geturl operations) | URL of the object. Sample: https://my-bucket.s3.amazonaws.com/my-key.txt?AWSAccessKeyId=<access-key>&Expires=1506888865&Signature=<signature> |
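To act on these return values in later tasks, register the module result. A short sketch that lists keys and reports the outcome; the variable name is arbitrary.

- name: List keys under a prefix and keep the result
  amazon.aws.aws_s3:
    bucket: mybucket
    mode: list
    prefix: /my/desired/
  register: listing

- name: Report how many keys came back and the status message
  ansible.builtin.debug:
    msg: "{{ listing.s3_keys | length }} keys returned; status: {{ listing.msg }}"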