On Wed, 11 Nov 2020 at 21:42, Adrian Nicolae <adrian.nicolae@xxxxxxxxxx> wrote:

> Hey guys,
>
> - 6 OSD servers with 36 SATA 16TB drives each and 3 big NVMe per server
> (1 big NVMe for every 12 drives, so I can reserve 300GB of NVMe storage for
> every SATA drive), 3 MON, 2 RGW with Epyc 7402P and 128GB RAM. So in the
> end we'll have ~3PB of raw data and 216 SATA drives.
>
> My main concern is the speed of delete operations. We have around
> 500k-600k delete ops every 24 hours, so quite a lot. Our current storage
> is not deleting all the files fast enough (it's always 1 week-10 days
> behind). I guess it's not only a software issue, and the delete
> speed will probably improve if we add more drives (we now have 108).

I did some tests on a Mimic cluster of mine where the data is on 100+
spinning drives, but all RGW metadata and index pools are on SSDs. We
could create 1M zero-byte objects and then delete them at a rate of
roughly 1M objects per 24 hours, so having the index pools on fast OSDs
is probably important for large index operations such as creating and
deleting many small files.

That said, our cluster was fairly quiet and idle at the time; running the
same workload while the cluster is in full use would probably be much
slower, or would affect other clients. Our host specs are lower than
yours, but we have fewer OSDs per host (8-10 spinning drives and one SSD
per host), so perhaps our boxes can spread the load better.

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
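[For readers following the thread: the "RGW index pools on SSDs" setup mentioned above is usually done with a device-class CRUSH rule. A minimal sketch, assuming a default-zone RGW deployment and OSDs already labeled with the `ssd` device class (rule name `rgw-index-ssd` is an arbitrary example):

```shell
# Create a replicated CRUSH rule that only selects OSDs of class "ssd"
ceph osd crush rule create-replicated rgw-index-ssd default host ssd

# Point the RGW index pool at that rule so its PGs live on SSD OSDs only.
# Pool name assumes the default zone; check `ceph osd pool ls` for yours.
ceph osd pool set default.rgw.buckets.index crush_rule rgw-index-ssd
```

Moving an existing pool this way triggers backfill of its PGs onto the SSD OSDs, so it is best done during a quiet period.]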