📦 Local S3 Server
Local S3 Server running on top of the Gateway

8️⃣ Multipart

Once the gateway is properly configured and running, the final step is to configure multipart to achieve optimal transfer performance. Multipart matters because a metafile is transferred to the metadata-server after every chunk, so fewer metafiles result in fewer round trips and thus higher performance. Note, however, that setting the multipart chunk size too high means that if a chunk transfer fails, the S3 server has to re-upload the entire chunk.
The following two configuration values are of interest:
    multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files; in other words, the file size at which multipart is activated. It is advised to set this value equal to multipart_chunksize.
    multipart_chunksize - When using multipart transfers, this is the chunk size that the CLI uses for multipart transfers of individual files.
To set the multipart threshold and chunksize for the AWS S3 CLI via the terminal, use:
$ aws configure set default.s3.multipart_threshold 360MiB # replace with the desired value.
$ aws configure set default.s3.multipart_chunksize 360MiB # replace with the desired value.
Alternatively, these parameters can be added to the S3 config file, located by default at ~/.aws/config on Linux and %UserProfile%\.aws\config on Windows. Below is an example configuration:
[profile development]
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
  multipart_threshold = 360MiB
  multipart_chunksize = 360MiB
Size can be set in bytes or with a suffix such as KiB, MiB or GiB. For further AWS S3 CLI configuration options, please consult the official AWS CLI S3 configuration documentation.
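If you access the gateway from Python rather than through the CLI, the equivalent multipart settings can be applied with boto3's TransferConfig. The following is a minimal sketch, not taken from the gateway documentation: the endpoint, credentials, bucket and file names are placeholders, and the 360 MiB values simply mirror the CLI example above.

import boto3
from boto3.s3.transfer import TransferConfig

MIB = 1024 * 1024

# Mirror the CLI settings: activate multipart at 360 MiB and use 360 MiB parts.
transfer_config = TransferConfig(
    multipart_threshold=360 * MIB,
    multipart_chunksize=360 * MIB,
)

# Placeholder endpoint and credentials -- adjust to your own gateway setup.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9200",
    aws_access_key_id="foo",
    aws_secret_access_key="bar",
)

# upload_file applies the multipart settings from transfer_config.
s3.upload_file("large_file.bin", "storewise", "large_file.bin", Config=transfer_config)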

9️⃣ Interacting with S3 🎉

After configuration, the S3 API can be called from the local machine to create a bucket and to upload and download files:
# create a bucket called storewise
$ aws --endpoint http://localhost:9200 s3 mb s3://storewise --cli-read-timeout 0 --cli-connect-timeout 0

# list all buckets
$ aws --endpoint http://localhost:9200 s3 ls --cli-read-timeout 0 --cli-connect-timeout 0

# upload a file
$ aws --endpoint http://localhost:9200 s3 cp <filename> s3://storewise --cli-read-timeout 0 --cli-connect-timeout 0

# download a file
$ aws --endpoint http://localhost:9200 s3 cp s3://storewise/<filename> <path_and_filename> --cli-read-timeout 0 --cli-connect-timeout 0

# Get size of S3 bucket
$ aws --endpoint http://localhost:9200 s3 ls --summarize --human-readable --recursive s3://bucket-name/ --cli-read-timeout 0 --cli-connect-timeout 0
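The same operations can also be performed programmatically; below is a minimal boto3 sketch, assuming the gateway is listening on http://localhost:9200 and reusing the placeholder credentials from the config example above (file names are illustrative).

import boto3

# Placeholder endpoint and credentials -- adjust to your own gateway setup.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9200",
    aws_access_key_id="foo",
    aws_secret_access_key="bar",
)

# create a bucket called storewise
s3.create_bucket(Bucket="storewise")

# list all buckets
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])

# upload a file
s3.upload_file("sample.json", "storewise", "sample.json")

# download a file
s3.download_file("storewise", "sample.json", "output.json")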
Alternatively, the lower-level s3api interface can be used:
$ aws s3api --endpoint http://localhost:9200 create-bucket --bucket storewise

$ aws s3api --endpoint http://localhost:9200 put-object --bucket storewise --key sample.json --body ~/sample.json
{
    "ETag": "\"24ef685ac4946614b0241bf08c6e959f\""
}

$ aws s3api --endpoint http://localhost:9200 get-object --bucket storewise --key sample.json ~/output.json
{
    "AcceptRanges": "bytes",
    "ContentType": "",
    "LastModified": "Wed, 30 Oct 2019 03:59:46 GMT",
    "ContentLength": 131072,
    "ETag": "\"24ef685ac4946614b0241bf08c6e959f\"",
    "Metadata": {}
}

# check file integrity
$ aws s3api --endpoint http://localhost:9200 head-object --bucket storewise --key sample.json
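As an illustrative extension of the integrity check above (not part of the original guide): for objects uploaded in a single part, the ETag returned by head-object is typically the MD5 hash of the object, so it can be compared with a locally computed hash; objects uploaded via multipart use a different ETag format, so this check does not apply to them. A sketch with placeholder endpoint, credentials and file names:

import hashlib

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9200",  # placeholder endpoint
    aws_access_key_id="foo",               # placeholder credentials
    aws_secret_access_key="bar",
)

# MD5 of the local copy of the file.
with open("sample.json", "rb") as f:
    local_md5 = hashlib.md5(f.read()).hexdigest()

# ETag reported by the S3 server (returned quoted, hence the strip).
remote_etag = s3.head_object(Bucket="storewise", Key="sample.json")["ETag"].strip('"')

print("match" if local_md5 == remote_etag else "mismatch")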
🎉 Congratulations! That's it! You're now ready to start storing your files in a data storage system that is more secure, more available, and completely under your control.

🔟 Useful Tips

    --addr=127.0.0.1:9200 ➡️ Binds the gateway to localhost only, preventing anyone else from accessing it.
    --cli-read-timeout 0 --cli-connect-timeout 0 ➡️ Recommended for uploads with the AWS CLI when not using multipart, to avoid transfers being cancelled due to timeouts (an equivalent boto3 configuration is sketched below).
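If you use boto3 instead of the CLI, a similar effect (generous timeouts for large single-part uploads) can be obtained through botocore's Config. The values below are illustrative, not recommendations from the gateway documentation:

import boto3
from botocore.config import Config

# Illustrative timeouts (in seconds) for large single-part uploads.
cfg = Config(read_timeout=3600, connect_timeout=60, retries={"max_attempts": 3})

s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:9200",  # gateway bound to localhost
    aws_access_key_id="foo",               # placeholder credentials
    aws_secret_access_key="bar",
    config=cfg,
)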
For more information on the AWS CLI setup commands, please consult the official AWS S3 documentation.