Re: best practices for cephfs on hard drives mimic

Thanks Janne!
Chad.

On 1/10/20 1:55 AM, Janne Johansson wrote:
> On Thu, 9 Jan 2020 at 17:16, Chad W Seys <cwseys@xxxxxxxxxxxxxxxx> wrote:
> 
>     Hi all,
>         In the era of Mimic, what are best practices for setting up cephfs
>     on a hard-drive-only cluster?
>         Our old cluster began life in Emperor and has been upgraded release
>     by release; it is now running Mimic.  It has 21 hard drives ranging from
>     1 to 4 TB (yeah, Ceph doesn't like that), with a triply replicated cache
>     tier in front of a k2m2 erasure-coded pool.  Our main usage these days
>     is cephfs.
>         I vaguely remember a few ideas, which might be out of date or may
>     never have been true:
>         Bluestore is slower than filestore on hard drives?
> 
> 
> I guess one will have to measure with the specific hardware, but judging
> from the filestore->bluestore migration docs, bluestore skips a lot of the
> extra writes that filestore incurs by sitting on top of another filesystem
> (mostly xfs), so it should give more write IOPS per OSD, at least.  Also,
> bluestore needs more manual RAM tuning: since it doesn't use a normal
> filesystem, the old "use all free RAM on the OSD hosts for read caching"
> behaviour (the kernel page cache) no longer applies, but you can set a
> per-OSD memory cache size on bluestore to get more or less similar caching.
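
For reference, a minimal sketch of the per-OSD cache setting mentioned above,
assuming the osd_memory_target option (present in recent Mimic point releases;
older builds use bluestore_cache_size instead).  The 4 GiB value and the OSD
id are only illustrative examples:

    # ceph.conf on the OSD hosts: cap each bluestore OSD at roughly 4 GiB of RAM
    [osd]
    osd_memory_target = 4294967296

    # or apply at runtime via the Mimic centralized config (example OSD id 0):
    # ceph config set osd.0 osd_memory_target 4294967296
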
> 
>         Before release X, a replicated cache was needed to sit between
>     writers and the erasure-coded pool, but after release X one can write
>     directly to the erasure-coded pool?
> 
> 
> There is still a penalty for making certain kinds of writes (small partial
> overwrites in particular) to EC pools, so you might still want to keep a
> replicated cache pool in front if it is working out for you now.
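
For what it's worth, the "release X" here appears to be Luminous: since then
an erasure-coded pool can take overwrites directly if the flag below is set
and the OSDs are bluestore, so the cache tier becomes optional rather than
required.  A rough sketch with hypothetical pool/filesystem names:

    # allow partial overwrites on the EC data pool (requires bluestore OSDs)
    ceph osd pool set cephfs_ec_data allow_ec_overwrites true

    # attach the EC pool as an additional CephFS data pool
    ceph fs add_data_pool cephfs cephfs_ec_data
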
> -- 
> May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



