> Hi Everyone,
>
> I'm putting together an HDD cluster with an EC pool dedicated to the backup
> environment. Traffic via S3. Version 18.2, 7 OSD nodes, 12 * 12TB HDD +
> 1 NVMe each.

QLC, man. QLC. That said, I hope you're going to use that single NVMe SSD for at least the index pool. Is this a chassis with universal slots, or is that NVMe device maybe M.2 or rear-cage?

> Wondering if there is some general guidance for startup setup/tuning in
> regards to S3 object size.

Small objects are the devil of any object storage system.

> Files are read from fast storage (SSD/NVMe) and
> written to S3. File sizes are 10MB-1TB, so it's not standard S3 traffic.

Nothing nonstandard about that, though your 1TB objects presumably are going to be MPUs. Having the .buckets.non-ec pool on HDD could make assembling objects that large really slow; you might need to increase timeouts, but I'm speculating.

> Backup for big files took hours to complete.

Spinners gotta spin. They're a false economy.

> My first shot would be to increase the default bluestore_min_alloc_size_hdd to
> reduce the number of stored objects, but I'm not sure if it's a
> good direction?

With that workload you *could* increase that to like 64KB, but I don't think it'd gain you much.

> Any other parameters worth checking to support such a
> traffic pattern?

`ceph df`
`ceph osd dump | grep pool`

So we can see what's going on HDD and what's on NVMe.

> Thanks!
>
> --
> Łukasz
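
If you do put the index pool (and ideally .non-ec plus the other small metadata pools) on those NVMe devices, one common approach is a device-class CRUSH rule. A rough sketch only; it assumes the NVMe OSDs carry the `nvme` device class and the default zone's pool names, and the rule name `rgw-nvme` is just something I made up, so adjust to what `ceph osd dump | grep pool` actually shows on your cluster:

```
# Replicated rule that only picks OSDs with the nvme device class,
# spread across hosts under the default CRUSH root
ceph osd crush rule create-replicated rgw-nvme default host nvme

# Point the RGW index and multipart-assembly pools at it
# (pool names assume the default zone)
ceph osd pool set default.rgw.buckets.index crush_rule rgw-nvme
ceph osd pool set default.rgw.buckets.non-ec crush_rule rgw-nvme

# The small metadata/log pools fit easily on flash too
ceph osd pool set default.rgw.meta crush_rule rgw-nvme
ceph osd pool set default.rgw.log crush_rule rgw-nvme
```

Changing a pool's crush_rule does move its data, but those pools are tiny compared to the bucket data pool, so the rebalance should be quick.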
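On bluestore_min_alloc_size_hdd: keep in mind it's baked into each OSD at creation time, so changing it centrally only affects OSDs you (re)deploy afterwards. A sketch of what that experiment would look like:

```
# Reef's default for HDD OSDs is 4K. This only applies to OSDs created
# after the setting is in place; existing OSDs keep their built-in value.
ceph config set osd bluestore_min_alloc_size_hdd 65536

# Recent releases report the value an existing OSD was built with
# in its metadata, e.g. for osd.0:
ceph osd metadata 0 | grep -i min_alloc_size
```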