Dear Ceph users,

I'd like to get some feedback on the following thought: I currently run several 24*4TB BlueStore OSD nodes, with the main focus on storage capacity rather than IOPS. We use erasure coding and CephFS, and things look good right now.

The "but" is that I need more disk space and don't have much more rack space available, so I was thinking of adding some 8TB or even 12TB OSDs and/or replacing the 4TB OSDs with bigger disks over time.

My questions are:

- What are your experiences with the current >=8TB SATA disks? Are there some very bad models out there that I should avoid?
- The current OSD nodes are connected via 4*10Gb bonds. With replication/recovery speed in mind, is a 24-bay chassis with bigger disks still a good idea, or should I go with smaller chassis? Or does the chassis size not matter that much in my setup?
- I know EC is quite compute-intensive, so maybe bigger disks have an impact there as well?

Lots of questions; maybe you can help answer some of them.

Best regards, and thanks a lot for any feedback.

Götz
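P.S. For context, here is the rough back-of-envelope recovery estimate I've been working from, a minimal sketch only: the fill ratio and the per-disk recovery throughput are guesses on my part, not measurements, and real recovery is usually spread across the whole cluster rather than one chassis.

    #!/usr/bin/env python3
    # Back-of-envelope: when one OSD fails, roughly its used capacity
    # has to be re-created from the surviving EC shards. Recovery time
    # therefore scales linearly with disk size if throughput stays flat.
    # fill and per_disk_mb_s below are assumptions, not measured values.

    def recovery_hours(osd_tb, fill=0.7, per_disk_mb_s=50, peers=23):
        """Estimated hours to re-create one failed OSD's data.

        osd_tb        -- raw size of the failed OSD in TB
        fill          -- assumed fill ratio of the OSD (guess)
        per_disk_mb_s -- assumed sustained recovery throughput per
                         surviving disk in MB/s (guess)
        peers         -- surviving disks sharing the recovery work
                         (23 here: a 24-bay chassis minus the dead disk)
        """
        data_mb = osd_tb * 1e6 * fill           # TB -> MB, times fill ratio
        aggregate_mb_s = per_disk_mb_s * peers  # recovery writes are spread out
        return data_mb / aggregate_mb_s / 3600

    for size in (4, 8, 12):
        print(f"{size:>2} TB OSD: ~{recovery_hours(size):.1f} h to recover")

Even this optimistic sketch shows a 12TB OSD tripling the recovery window of a 4TB one, which is what makes me wonder about the 24-bay chassis.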