Re: HDD-only CephFS cluster with EC and without SSD/NVMe

On Wed, Aug 22, 2018 at 1:28 PM Kevin Olbrich <ko@xxxxxxx> wrote:
>
> Hi!
>
> I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to CephFS.
> This storage is used for backup images (large sequential reads and writes).
>
> To save space and get a RAIDZ2 (RAID6)-like setup, I am planning the following profile:
>
> ceph osd erasure-code-profile set myprofile \
>    k=3 \
>    m=2 \
>    ruleset-failure-domain=rack
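
Side note: with k=3/m=2 and rack as the failure domain, the cluster needs at
least five racks to place all chunks, and on Luminous and later the option is
spelled crush-failure-domain rather than ruleset-failure-domain. A minimal,
illustrative sketch of wiring such a pool into CephFS (pool name, filesystem
name and PG count are placeholders, not something from your setup) could look
like:

    # create the EC data pool using the profile defined above
    ceph osd pool create cephfs_data_ec 128 128 erasure myprofile

    # CephFS on an EC pool requires overwrites to be enabled,
    # which in turn requires BlueStore OSDs
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true

    # attach it as an additional data pool to an existing filesystem;
    # the metadata pool must remain a replicated pool
    ceph fs add_data_pool cephfs cephfs_data_ec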
>
> Performance is not the first priority, which is why I do not plan to put the WAL/DB on separate devices (a broken NVMe taking out multiple OSDs means more administrative overhead than losing single OSDs).
> Disks are attached via SAS multipath; throughput in general is not a problem, but I have not tested with Ceph yet.
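
For reference, keeping everything on the spinning disk is the default when no
separate DB/WAL device is given to ceph-volume; a minimal sketch (the device
path is just a placeholder) would be:

    # BlueStore OSD with data, DB and WAL co-located on one HDD;
    # omitting --block.db/--block.wal keeps them on the data device
    ceph-volume lvm create --bluestore --data /dev/sdX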
>
> Is anyone using CephFS + BlueStore + EC 3/2 without separate WAL/DB devices, and is it working well?

I have a very small home cluster with 6 OSDs across 3 nodes, using EC
on BlueStore on spinning disks. I don't have benchmarks, but it was
usable for a few TB of backups.

John

>
> Thank you.
>
> Kevin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


