Re: HDD-only CephFS cluster with EC and without SSD/NVMe

Not 3+2, but we run 4+2, 6+2, 6+3, 5+3, and 8+3 with CephFS in
production. Most of these clusters run on HDDs without separate DB devices.
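
For reference, a rough sketch of how such a pool can be set up for
CephFS on Luminous or later (pool names, PG count, and the failure
domain below are only placeholders; also note that the profile option
is called crush-failure-domain there, ruleset-failure-domain is the
pre-Luminous name):

# EC profile: 4+2, failure domain host, restricted to HDD OSDs
ceph osd erasure-code-profile set ec42hdd \
    k=4 \
    m=2 \
    crush-failure-domain=host \
    crush-device-class=hdd

# EC data pool for CephFS; overwrites must be enabled on EC pools
# before CephFS can use them
ceph osd pool create cephfs_data_ec 256 256 erasure ec42hdd
ceph osd pool set cephfs_data_ec allow_ec_overwrites true

# attach it as an additional data pool to an existing file system
# (keeping a replicated pool as the default data pool)
ceph fs add_data_pool cephfs cephfs_data_ec

Data can then be directed to the EC pool with a file layout, e.g.
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec on a directory.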



Paul

2018-08-22 14:27 GMT+02:00 Kevin Olbrich <ko@xxxxxxx>:
> Hi!
>
> I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to
> CephFS.
> This storage is used for backup images (large sequential reads and writes).
>
> To save space and get a RAIDZ2 (RAID6)-like setup, I am planning the
> following profile:
>
> ceph osd erasure-code-profile set myprofile \
>    k=3 \
>    m=2 \
>    ruleset-failure-domain=rack
>
> Performance is not the first priority, which is why I do not plan to put
> the WAL/DB on separate devices (a broken NVMe taking out several OSDs means
> more administrative overhead than losing single OSDs).
> Disks are attached via SAS multipath; throughput in general is not a problem,
> but I have not tested with Ceph yet.
>
> Is anyone using CephFS + BlueStore + EC 3+2 without a separate WAL/DB device,
> and is it working well?
>
> Thank you.
>
> Kevin
>
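
Regarding running without a separate WAL/DB device: with ceph-volume
that is simply what you get when you only pass a data device, e.g.
(device name is just an example):

ceph-volume lvm create --bluestore --data /dev/sdb

RocksDB and the WAL then live on the same device as the data, which
gives exactly the single-OSD failure behaviour you describe.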



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



