rclone and S3
Links
Config
Config file (show the path with: rclone config file
): e.g. /home/$USER/.config/rclone/rclone.conf or /root/.config/rclone/rclone.conf .
Ceph RGW:
[MOUNT-NAME]
type = s3
provider = Ceph
env_auth = false
access_key_id = $KEYID
secret_access_key = $ACCESSKEY
region =
endpoint = https://hostname.domain.tld
location_constraint =
acl =
server_side_encryption =
storage_class =
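The remote can also be set up non-interactively by appending the INI section to the config file. A minimal sketch, assuming placeholder values throughout: the remote name "cephdemo", the demo credentials, and the endpoint are made up, and a temp file stands in for the real rclone.conf.

```shell
# Sketch: append a Ceph S3 remote to an rclone config file.
# "cephdemo", the keys and the endpoint are placeholder assumptions;
# in practice point $conf at ~/.config/rclone/rclone.conf instead.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
[cephdemo]
type = s3
provider = Ceph
env_auth = false
access_key_id = DEMO_KEY
secret_access_key = DEMO_SECRET
endpoint = https://hostname.domain.tld
EOF
grep -c '^\[cephdemo\]' "$conf"
```

rclone also ships a non-interactive `rclone config create` subcommand for the same purpose; check `rclone config --help` for the syntax of your version.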
S3 sync
ceph osd pool ls
radosgw-admin bucket list
[ "pool1", "pool2" ]
radosgw-admin user list
[ "test_user", "test_user2" ]
On the target → create user test_user1:
radosgw-admin user create --uid=test_user1 --email=MAIL@DOMAIN.TLD --display-name=test_user1
{
    "user_id": "test_user1",
    "display_name": "test_user1",
    "email": "MAIL@DOMAIN.TLD",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "test_user1",
            "access_key": "...",
            "secret_key": "..."
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
Result:
radosgw-admin user list
[ "test_user1", "test_user2" ]
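The access/secret keys that go into rclone.conf can be pulled out of the user-create JSON above with jq. A sketch with an abbreviated sample document; the key values "AK"/"SK" are dummies:

```shell
# Sketch: extract the S3 credentials from radosgw-admin's JSON output.
# Sample JSON (abbreviated, dummy keys) stands in for the real output.
json='{"user_id":"test_user1","keys":[{"user":"test_user1","access_key":"AK","secret_key":"SK"}]}'
access_key=$(printf '%s' "$json" | jq -r '.keys[0].access_key')
secret_key=$(printf '%s' "$json" | jq -r '.keys[0].secret_key')
echo "access_key_id = $access_key"
echo "secret_access_key = $secret_key"
```

Against a live cluster the same filter works on `radosgw-admin user info --uid=test_user1` piped into jq.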
Create the bucket:
rclone mkdir ZielS3:s3synctest
cat ~/.config/rclone/rclone.conf
[QuelleS3]
type = s3
provider = Ceph
env_auth = false
access_key_id = ...
secret_access_key = ...
region =
endpoint = https://QUELLE.DOMAIN.TLD
location_constraint =
acl =
server_side_encryption =
storage_class =

[ZielS3]
type = s3
provider = Ceph
env_auth = false
access_key_id = ...
secret_access_key = ...
region =
endpoint = https://ZIEL.DOMAIN.TLD
location_constraint =
acl =
server_side_encryption =
storage_class =
TEST:
rclone sync -i "QuelleS3:s3synctest" "ZielS3:s3synctest" --dry-run -v
Output:
y) Yes, this is OK (default)
n) No, skip this
s) Skip all copy operations with no more questions
!) Do all copy operations with no more questions
q) Exit rclone now.
→ !
Files that exist in the target but not in the source:
rclone: delete "testdatei"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q> y
2023/11/13 23:41:52 INFO  : testdatei: Deleted
2023/11/13 23:41:52 NOTICE:
Transferred:       55.192 MiB / 55.192 MiB, 100%, 185.291 KiB/s, ETA 0s
Checks:               424 / 424, 100%
Deleted:                1 (files), 0 (dirs)
Transferred:          500 / 500, 100%
Elapsed time:      1m22.1s
rclone optimizations: https://forum.rclone.org/t/rclone-sync-s3-to-s3-runs-for-hours-and-copy-nothing/39687/23
RAM requirement: about 1 KiB per object!
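The rule of thumb above (roughly 1 KiB of RAM per object when listing) makes a quick back-of-the-envelope sizing check easy. A sketch; the object count is an assumed example, and real memory usage varies by rclone version and flags:

```shell
# Rough memory estimate for large syncs: ~1 KiB per object
# (rule of thumb from the linked forum thread, not an exact figure).
objects=5000000                # assumed object count
ram_mib=$(( objects / 1024 ))  # 1 KiB each -> total in MiB
echo "approx. ${ram_mib} MiB of RAM for ${objects} objects"
```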
Test (rclone stays in the foreground):
rclone mount QuelleS3:s3synctest /tmp/s3synctest -v
Later: S3 multisite sync: https://docs.ceph.com/en/quincy/radosgw/multisite-sync-policy/
Shell script
Example using Ceph radosgw-admin:
#!/bin/bash
set -e -o pipefail

# https://docs.ovh.com/de/storage/object-storage/s3/rclone/
buckets_no=$(radosgw-admin bucket list | jq length)
echo "number of buckets $buckets_no"

bucket_src="test_quelle"
bucket_dest="test_ziel"

i=1
j=0
until [ "$i" -gt "$buckets_no" ]
do
    echo "bucket $i of $buckets_no:"
    bucket_name=$(radosgw-admin bucket list | jq -r ".[$j]")
    echo "$bucket_name"
    # create the bucket first?
    echo rclone mkdir "$bucket_dest:$bucket_name"
    echo rclone sync -i "$bucket_src:$bucket_name" "$bucket_dest:$bucket_name" \
        --dry-run -vv --checksum --fast-list --s3-list-version 2 \
        --checkers 32 --transfers 16
    i=$((i + 1))
    j=$((j + 1))
done
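One possible optimization of the script above: fetch the bucket list once and iterate over it, instead of calling radosgw-admin on every loop pass. A sketch where a literal JSON string stands in for the `radosgw-admin bucket list` output (assumes bash and jq):

```shell
#!/bin/bash
set -e -o pipefail
# Sample JSON standing in for: radosgw-admin bucket list
json='["test_quelle","test_ziel"]'
# Read all bucket names in a single pass into an array
mapfile -t buckets < <(printf '%s' "$json" | jq -r '.[]')
for b in "${buckets[@]}"; do
    echo "would sync bucket: $b"
done
```

This also removes the need for the separate `$i`/`$j` counters, since the loop runs directly over the array.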