Re: New best practices for osds???

> This is worse than I feared, but very much in the realm of concerns I
> had with using single-disk RAID0 setups. Thank you very much for
> posting your experience! My money would still be on using *high write
> endurance* NVMes for DB/WAL and whatever I could afford for block.


yw.  Of course there are all manner of use cases and constraints, so others will have different experiences.  Perhaps with the freedom to not use a certain HBA vendor things would have been somewhat better, but in said past life that practice cost hundreds of thousands of dollars.

I personally have a low tolerance for fuss, and the management/mapping of WAL/DB devices still seems like a lot of fuss, especially when drives fail or have to be replaced for other reasons.
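
For anyone weighing this, here is roughly what that mapping looks like with ceph-volume; the device and VG/LV names below are made up for illustration:

  # Separate DB: you carve out and track an LV per OSD yourself
  ceph-volume lvm create --data /dev/sdb --block.db ceph-db/db-for-sdb

  # When /dev/sdb dies, the paired DB LV doesn't clean itself up:
  ceph-volume lvm list                               # find which DB LV belonged to the dead OSD
  ceph-volume lvm zap --destroy ceph-db/db-for-sdb   # wipe it before it can be reused

That per-drive bookkeeping is exactly the fuss I mean.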

For RBD clusters/pools at least, I really enjoy not having to mess with multiple devices; I'd rather run colocated (WAL+DB on the data device) with SATA SSDs than spinners with NVMe WAL+DB.
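
The colocated case, by comparison, is a one-liner (device name again hypothetical), with nothing extra to clean up or re-map when the drive is swapped:

  ceph-volume lvm create --data /dev/sdc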

- aad
