2 Answers

Contributed 1,827 experience points, earned 8+ upvotes
OK, I did it. It's really ugly, but it works. I used boto3 together with the aws-cli:
import subprocess
import boto3

# Read the folder names, one per line.
folders = []
with open('folders_list.txt', 'r', newline='') as f:
    for line in f:
        folders.append(line.rstrip())

def download(bucket_name, folder):
    s3_client = boto3.client("s3")
    # List the immediate subfolders of my_path/<folder>/ via CommonPrefixes.
    result = s3_client.list_objects(Bucket=bucket_name, Prefix="my_path/{}/".format(folder), Delimiter="/")
    # The subfolder names are numeric, so keep them as ints and take the largest.
    subfolders = []
    for i in result['CommonPrefixes']:
        subfolders.append(int(i['Prefix'].split('{}/'.format(folder), 1)[1][:-1]))
    # Hand the recursive copy of the newest subfolder over to the aws-cli.
    subprocess.run(['aws', 's3', 'cp',
                    's3://{0}/my_path/{1}/{2}'.format(bucket_name, folder, max(subfolders)),
                    'C:\\Users\\it_is_me\\my_local_folder\\{}'.format(folder), '--recursive'])

for folder in folders:
    download('my_bucket', folder)
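
If you would rather not shell out to the aws-cli at all, the same recursive copy can be done with boto3 alone. This is only a minimal sketch under the same assumptions as above (the bucket, prefix, and function names are hypothetical); it pages through list_objects_v2 and fetches each object with download_file:

import os
import boto3

def download_prefix(bucket_name, prefix, dest_dir):
    # Sketch: recursively fetch every object under `prefix` with boto3,
    # mirroring what `aws s3 cp --recursive` does in the answer above.
    s3_client = boto3.client("s3")
    paginator = s3_client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
        for obj in page.get('Contents', []):
            if obj['Key'].endswith('/'):
                continue  # skip zero-byte "directory" placeholder keys
            # Rebuild the key's path relative to the prefix locally.
            rel_path = obj['Key'][len(prefix):].lstrip('/')
            local_path = os.path.join(dest_dir, rel_path)
            os.makedirs(os.path.dirname(local_path) or '.', exist_ok=True)
            s3_client.download_file(bucket_name, obj['Key'], local_path)

Called as download_prefix('my_bucket', 'my_path/{0}/{1}/'.format(folder, max(subfolders)), 'C:\\Users\\it_is_me\\my_local_folder\\{}'.format(folder)), it could replace the subprocess.run call entirely.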

Contributed 1,821 experience points, earned 6+ upvotes
Here is a simple bash one-liner (assuming the output of aws s3 ls has the file name as its last column):
for bucket in $(cat folder.txt); do \
aws s3 ls s3://bucket-prefix/$bucket | awk '{print $NF}' \
| sort -r | head -n1 \
| xargs -I {} aws s3 cp s3://bucket-prefix/$bucket/{} $bucket/{} --recursive \
; done
aws-cli takes care of creating the directories if they are missing. (Tested on Ubuntu.)
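
One caveat: sort -r compares lexicographically, so with purely numeric subfolder names (like the ones in the first answer) it can pick the wrong "latest" entry; sort -rn would be the numeric variant. A quick Python check shows the difference:

# Lexicographic reverse sort picks '9/' as the newest, which is wrong for
# numeric names; a numeric key picks '11/' as intended.
prefixes = ['9/', '10/', '11/']
print(sorted(prefixes, reverse=True)[0])                # 9/
print(max(prefixes, key=lambda p: int(p.rstrip('/'))))  # 11/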