Re: HDD-only CephFS cluster with EC and without SSD/NVMe

I also run 2+1 (still only 3 nodes) alongside 3x replication, and I moved the 
metadata pool to SSDs.
What is nice about CephFS is that you can keep folders in your filesystem on 
the EC 2+1 pool for less important data while the rest stays 3x 
replicated, roughly as sketched below.
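
A minimal sketch of how a directory can be pinned to the EC pool (assuming 
the EC pool is already attached to the filesystem as a data pool; the pool 
name "ec21_data" and the mount path are placeholders):

   # pin a directory (and everything created under it) to the EC data pool
   setfattr -n ceph.dir.layout.pool -v ec21_data /mnt/cephfs/not-so-important
   # verify the layout
   getfattr -n ceph.dir.layout /mnt/cephfs/not-so-important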

I think single-session performance is not going to match the RAID, but you 
can compensate for that by running your backups in parallel.
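
Something along these lines works for the parallel part (just a sketch with 
hypothetical paths; several rsync streams instead of a single session):

   # run four concurrent rsync streams instead of one
   ls /backup/sources | xargs -P 4 -I{} rsync -a /backup/sources/{} /mnt/cephfs/backups/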





-----Original Message-----
From: Kevin Olbrich [mailto:ko@xxxxxxx]
Sent: Wednesday, 22 August 2018 14:28
To: ceph-users
Subject: HDD-only CephFS cluster with EC and without SSD/NVMe

Hi!

I am in the process of moving a local ("large", 24x1TB) ZFS RAIDZ2 to 
CephFS.
This storage is used for backup images (large sequential reads and 
writes).

To save space and have a RAIDZ2 (RAID6)-like setup, I am planning the 
following profile:

# note: "ruleset-failure-domain" was renamed to "crush-failure-domain" in Luminous
ceph osd erasure-code-profile set myprofile \
   k=3 \
   m=2 \
   crush-failure-domain=rack
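
A pool created from this profile also needs EC overwrites enabled before 
CephFS can use it as a data pool; a minimal sketch (pool name, PG count and 
filesystem name are placeholders):

   ceph osd pool create cephfs_ec_data 128 128 erasure myprofile
   ceph osd pool set cephfs_ec_data allow_ec_overwrites true
   ceph fs add_data_pool cephfs cephfs_ec_data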


Performance is not the first priority, which is why I do not plan to put 
the WAL/DB on separate devices (a broken NVMe taking down several OSDs is more 
administrative overhead than losing single OSDs).
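
Without a separate WAL/DB device, each OSD would be created with everything 
colocated on the HDD, roughly like this (a sketch; /dev/sdX is a placeholder):

   # BlueStore OSD with data, WAL and DB all on the same disk
   ceph-volume lvm create --bluestore --data /dev/sdX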
The disks are attached via SAS multipath; throughput in general is not a problem, 
but I have not tested with Ceph yet.
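
A quick way to get a baseline from Ceph itself, before putting CephFS on top, 
is rados bench against the EC pool (a sketch; pool name and run time are 
placeholders):

   rados bench -p cephfs_ec_data 60 write -b 4M -t 16 --no-cleanup
   rados bench -p cephfs_ec_data 60 seq -t 16
   rados -p cephfs_ec_data cleanup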

Is anyone using CephFS + BlueStore + EC 3/2 without a separate WAL/DB device, 
and is it working well?

Thank you.

Kevin


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


