Re: Bluestore vs. Filestore

On 03.10.2018 20:10, jesper@xxxxxxxx wrote:
>> Your use case sounds like it might profit from the rados cache tier
>> feature. It's a rarely used feature because it only works in very
>> specific circumstances, but your scenario sounds like it might work.
>> Definitely worth giving it a try. Also, dm-cache with LVM *might*
>> help.
>>
>> But if your active working set is really just 400 GB: the Bluestore
>> cache should handle this just fine. Don't worry about "unequal"
>> distribution; every 4 MB chunk of every file will go to a random OSD.
> I tried it out - initial tests didn't really convince me, but I'll try more.
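If you give the cache tier another try, a minimal writeback setup looks
roughly like this. The pool names (cephfs_data, cephfs_cache) and the sizing
values are only placeholders; adjust them to your cluster and make sure the
cache pool lands on your fast devices:

# new pool to act as the cache tier (placeholder pg count)
ceph osd pool create cephfs_cache 64 64

# put the cache pool in front of the existing data pool in writeback mode
ceph osd tier add cephfs_data cephfs_cache
ceph osd tier cache-mode cephfs_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_cache

# a writeback tier needs a hit set and flush/evict targets to behave sensibly
ceph osd pool set cephfs_cache hit_set_type bloom
ceph osd pool set cephfs_cache target_max_bytes 400000000000    # ~400 GB, a guess
ceph osd pool set cephfs_cache cache_target_dirty_ratio 0.4
ceph osd pool set cephfs_cache cache_target_full_ratio 0.8

Without target_max_bytes (or target_max_objects) the tiering agent has nothing
to work against, which can easily make first impressions worse than they
should be.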

>> One very powerful and simple optimization is moving the metadata pool
>> to SSD only. Even if it's just 3 small but fast SSDs, that can make a
>> huge difference to how fast your filesystem "feels".
> They are ordered and will hopefully arrive very soon.

> Can I:
> 1) Add disks
> 2) Create a pool
> 3) Stop all MDSs
> 4) rados cppool
> 5) Start the MDSs

> ... Yes, that's a cluster-down on CephFS, but it shouldn't take long. Or is
> there a better guide?

This post
https://ceph.com/community/new-luminous-crush-device-classes/
and this document
http://docs.ceph.com/docs/master/rados/operations/pools/
explain how an OSD's device class is used to define a CRUSH placement rule.
You can then set the crush_rule on the pool and Ceph will move the data. No
downtime needed.
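In practice that is just a few commands. The rule and pool names below
(replicated-ssd, cephfs_metadata) are only examples; substitute your own:

# check that the new SSDs came up with the expected device class
ceph osd crush tree --show-shadow

# create a replicated rule that only chooses OSDs of class "ssd"
ceph osd crush rule create-replicated replicated-ssd default host ssd

# point the metadata pool at the new rule; ceph rebalances the PGs online
ceph osd pool set cephfs_metadata crush_rule replicated-ssd

The MDSs keep running while the data moves, so CephFS stays up the whole time.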

kind regards
Ronny Aasen
