In this post we will install s3cmd on both flavours, Ubuntu 18.04 and CentOS 7. We will also look at the issue “s3cmd download failed for the larger file”.
s3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing “buckets” and uploading, downloading and removing
“objects” from these buckets.
Update the package index first by using either of the commands below:
$ sudo apt update
$ sudo apt-get update
Install the s3cmd package by using the apt or apt-get command:
$ sudo apt install s3cmd
$ sudo apt-get install s3cmd
On CentOS 7, update the server first by using the command below:
# yum update
Install the EPEL repository and the s3cmd package by using the yum command:
# yum install epel-release
# yum install s3cmd
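On either distro, a quick way to confirm the install worked is to check that the binary is on the PATH. This check is a generic sketch, not part of the original steps:

```shell
# Check whether s3cmd landed on the PATH after installation
if command -v s3cmd >/dev/null 2>&1; then
    S3CMD_STATE="installed"
    s3cmd --version          # prints the installed version string (varies by distro)
else
    S3CMD_STATE="missing"
fi
echo "s3cmd is $S3CMD_STATE"
```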
Then use the command “s3cmd --configure” to set up access to the S3 bucket; let’s configure the s3cmd tool.
$ s3cmd --configure

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: XXXXTastethelinux
Secret Key: XXXXTastethelinux#####[email protected]@!#
Default Region [US]: region_of_the_Bucket

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: Press Enter

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: Press Enter

Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]: Press Enter

When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party
eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: Press Enter

On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect
to S3 directly
HTTP Proxy server name: Press Enter

New settings:
  Access Key: XXXXTastethelinux
  Secret Key: XXXXTastethelinux#####[email protected]@!#
  Default Region: region_of_the_Bucket
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/home/tastethelinux-ashish/.s3cfg'
So we have successfully configured the s3cmd tool with the Access Key and Secret Key.
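The wizard stores these settings in a plain-text file, ~/.s3cfg. As a quick sanity check, you can grep the non-secret fields; this is a sketch, and the field names shown are standard s3cmd config keys:

```shell
# Peek at the non-secret fields of the config written by "s3cmd --configure"
CFG="$HOME/.s3cfg"
if [ -f "$CFG" ]; then
    grep -E '^(host_base|use_https|gpg_command)' "$CFG"
else
    echo "no $CFG yet; run 's3cmd --configure' first"
fi
```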
Suppose we have to “list all the buckets in AWS S3” using s3cmd.
# s3cmd ls
2019-12-30 10:14  s3://tla
2019-11-06 11:26  s3://tla-image
2020-07-01 11:07  s3://tastethelinux-backup
2019-11-07 13:59  s3://tla-logs
So we have 4 buckets (tla, tla-image, tastethelinux-backup, and tla-logs) in AWS S3 storage.
If you have to “create a new bucket in AWS S3”, let’s create one with the name test-tla.
# s3cmd mb s3://test-tla
Bucket 's3://test-tla/' created
Upload a file into the S3 bucket.
# s3cmd put tla.txt s3://tla/
tla.txt -> s3://tla/tla.txt  [1 of 1]
 100619 of 100619   100% in    2s   668.35 kB/s  done
So, we have uploaded the file tla.txt into the tla bucket.
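The reverse of put is get, which downloads an object to the local machine. A minimal sketch, reusing the bucket and file names from above and guarded so it is skipped when s3cmd or credentials are unavailable:

```shell
# Download tla.txt back from the tla bucket into the current directory
SRC="s3://tla/tla.txt"
if command -v s3cmd >/dev/null 2>&1; then
    s3cmd get "$SRC" tla-copy.txt || echo "download failed (needs credentials)"
else
    echo "s3cmd not available; would have fetched $SRC"
fi
```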
Let’s upload a folder or directory in s3 Bucket.
# s3cmd put Script s3://tla/
ERROR: Parameter problem: Use --recursive to upload a directory: Script
So, we get this error when uploading a directory or folder into the bucket without the recursive option.
# s3cmd put Script s3://tla/ -r
upload: 'Script/s3_Script.sh' -> 's3://tla/Script/s3_Script.sh'  [1 of 4]
 801 of 801   100% in    0s     7.27 kB/s  done
upload: 'Script/nodeJs.sh' -> 's3://tla/Script/nodeJs.sh'  [2 of 4]
 349 of 349   100% in    0s     6.90 kB/s  done
upload: 'Script/java' -> 's3://tla/Script/java'  [3 of 4]
 192 of 192   100% in    0s     6.47 kB/s  done
upload: 'Script/tla-rotate.sh' -> 's3://tla/Script/tla-rotate.sh'  [4 of 4]
 153 of 153   100% in    0s  1703.20 B/s  done
So we have uploaded the folder, which contains 4 files: s3_Script.sh, nodeJs.sh, java, and tla-rotate.sh.
We used the -r option to upload the folder; the long form --recursive also works.
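Recursive uploads make simple backup scripts easy to write. Below is a minimal sketch, assuming the Script directory and tla bucket from above; the backups/ prefix and the dated layout are my own choices, not from the post:

```shell
#!/bin/sh
# Upload the Script directory to a dated prefix inside the bucket
BUCKET="s3://tla"
TODAY=$(date +%F)                  # e.g. 2020-07-01
DEST="$BUCKET/backups/$TODAY/"
echo "Uploading Script/ to $DEST"
if command -v s3cmd >/dev/null 2>&1; then
    s3cmd put -r Script "$DEST" || echo "upload failed (needs credentials)"
fi
```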
Let’s delete a file from the S3 bucket.
# s3cmd del s3://tla/Script/java
delete: 's3://tla/Script/java'
Now we will delete a folder from the S3 bucket. Note that to remove every remaining object under a prefix, you would add the -r/--recursive option, just as with put.
# s3cmd del s3://tla/Script
delete: 's3://tla/Script'
So, let’s delete the Bucket from AWS S3.
# s3cmd rb s3://tla
Bucket 's3://tla/' removed
s3cmd download failed for the larger file
While downloading from the S3 bucket, I got a “download failed” error after a timeout.
The size of the file was more than 80 GB, and about 70 GB had already been downloaded to the server.
To resume the download, we used the --continue option with the s3cmd command:
# s3cmd --continue get s3://tastethelinux/tastethelinux.tar.gz
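For very large objects it can help to wrap the resume in a retry loop, so each failed attempt picks up where the previous one stopped instead of starting over. A sketch using the URL from the post; the retry limit of 5 is an arbitrary choice:

```shell
#!/bin/sh
# Retry a large download, resuming from the partial file with --continue
URL="s3://tastethelinux/tastethelinux.tar.gz"
TRIES=0
MAX=5
until [ "$TRIES" -ge "$MAX" ]; do
    if command -v s3cmd >/dev/null 2>&1 && s3cmd --continue get "$URL"; then
        echo "download complete after $((TRIES + 1)) attempt(s)"
        break
    fi
    TRIES=$((TRIES + 1))
    echo "attempt $TRIES failed; retrying..."
done
```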
Thanks for reading this post. Keep supporting us!