Re: local mode -- a new tier mode

On Sat, 12 Oct 2019 at 04:58, Maged Mokhtar <mmokhtar@xxxxxxxxxxx> wrote:
>
> Looks quite interesting. I do however think local caching is better done
> at the block level (bcache, dm-cache, dm-writecache) rather than in
> Ceph. In theory they can deal with a smaller granularity than a Ceph
> object and go through the kernel block layer, which is more optimized
> than a Ceph OSD.
>
Yes, block-level caches do offer fine granularity and low migration cost.

But a local-mode tier lets us implement more flexible migration strategies.
Take CephFS as an example: the system stores a large number of files, but we
only need to edit some of them in a given period of time. Through a hint
operation, we can migrate the data of the relevant files to SSD before the
editing starts. This keeps the most important data on SSD while the other
files stay on HDD.

It is not easy to do this kind of work in a block-level cache, because it
knows nothing about the upper-layer logical objects.
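
The exact hint interface is still being worked out, but as a rough sketch of
what the application side could look like: librados already lets a client
stamp individual ops with fadvise-style flags, and a local-mode tier could
treat WILLNEED as a promotion trigger. The pool name, object name, and that
interpretation of the flag are assumptions for illustration, not the patch's
actual interface:

// Sketch only (build with: g++ hint.cc -lrados). Assumes the local-mode
// tier would interpret a WILLNEED-flagged op as "promote this object to
// the fast device". The flag LIBRADOS_OP_FLAG_FADVISE_WILLNEED exists in
// librados today; its use as a promotion hint here is hypothetical.
#include <rados/librados.hpp>
#include <iostream>

int main() {
  librados::Rados cluster;
  cluster.init(nullptr);            // default client id
  cluster.conf_read_file(nullptr);  // read /etc/ceph/ceph.conf
  if (cluster.connect() < 0) {
    std::cerr << "connect failed\n";
    return 1;
  }

  librados::IoCtx io;
  cluster.ioctx_create("cephfs_data", io);  // placeholder pool name

  // A small read stamped WILLNEED, issued before the editing session,
  // gives the tier a chance to migrate the object's data to SSD early.
  librados::ObjectReadOperation op;
  librados::bufferlist bl;
  int rval = 0;
  op.read(0, 4096, &bl, &rval);
  op.set_op_flags2(LIBRADOS_OP_FLAG_FADVISE_WILLNEED);
  int r = io.operate("10000000001.00000000", &op, &bl);  // placeholder oid
  std::cout << "hint read returned " << r << std::endl;

  io.close();
  cluster.shutdown();
  return 0;
}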

> Your results do show a favorable comparison with bcache. It would be good
> to understand why this is the case, at least at a high level. I know that
> comparing two caching methods through testing/simulation is not easy, but
> I think it is important to know why local Ceph caching would be better.
>
> It will also be interesting to compare it with dm-writecache, which is
> optimized for writes (using pmem or SSD devices). Writes are in many cases
> the main performance bottleneck, as reads can be cached in memory
> (assuming you have enough RAM).
>
> So I think more tests need to be done, which for caching is not a simple
> matter. I believe fio does have a random_distribution=zipf:[theta]
> parameter for simulating semi-realistic IO, as pure sequential or pure
> random IO is not suitable for testing a cache.
>
I made a comparison with CAS:
https://tracker.ceph.com/issues/42286?next_issue_id=42285#I-also-compared-local-mode-tier-with-intel-CAS
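
Agreed that pure sequential or pure random IO is not representative. For the
zipf suggestion, a job file along these lines is what I would use against an
RBD image (pool/image names and the theta value are placeholders, and it
assumes fio was built with rbd support; a libaio job against a mapped krbd
device would work the same way):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=test-image
rw=randrw
rwmixread=70
bs=4k
direct=1
time_based=1
runtime=600
random_distribution=zipf:1.1

[zipf-job]
iodepth=32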
As for local mode's performance, it depends heavily on how hot objects are
identified. For now I reuse the pool tier's HitSet mechanism to do this work.
As long as the migration overhead is compensated by subsequent read and
write hits, performance is not a problem.
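
For anyone not familiar with the mechanism: the promotion decision boils down
to a recency test against the most recent HitSets, the same idea as
min_read_recency_for_promote in the existing cache tier. Below is a
standalone sketch of that test. Ceph's real HitSets are (by default) bloom
filters maintained per PG; plain hash sets stand in for them here, and the
names and numbers are made up:

// Standalone sketch of the recency test used to classify an object as hot.
#include <deque>
#include <iostream>
#include <string>
#include <unordered_set>

using HitSet = std::unordered_set<std::string>;

struct HitSetHistory {
  std::deque<HitSet> sets;  // most recent interval at the front
  size_t max_hitsets = 8;   // history depth (hit_set_count in Ceph)

  void record_access(const std::string& oid) {
    if (sets.empty()) roll();
    sets.front().insert(oid);
  }
  // Called at the end of each interval (hit_set_period in Ceph).
  void roll() {
    sets.push_front(HitSet{});
    if (sets.size() > max_hitsets) sets.pop_back();
  }
  // Hot if the object appears in each of the `recency` most recent
  // hitsets -- analogous to min_read_recency_for_promote.
  bool is_hot(const std::string& oid, size_t recency) const {
    if (recency == 0 || recency > sets.size()) return false;
    for (size_t i = 0; i < recency; ++i)
      if (!sets[i].count(oid)) return false;
    return true;
  }
};

int main() {
  HitSetHistory hist;
  hist.roll();
  hist.record_access("obj.A");
  hist.roll();  // a new interval begins
  hist.record_access("obj.A");
  hist.record_access("obj.B");
  std::cout << std::boolalpha
            << hist.is_hot("obj.A", 2) << "\n"   // true: in last 2 sets
            << hist.is_hot("obj.B", 2) << "\n";  // false: only in 1
  return 0;
}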

> /Maged
>
> On 11/10/2019 18:04, Honggang(Joseph) Yang wrote:
> > Hi,
> >
> > We implemented a new cache tier mode - local mode. In this mode, an
> > OSD is configured to manage two data devices: one fast and one slow.
> > Hot objects are promoted from the slow device to the fast device, and
> > demoted back to the slow device when they become cold.
> >
> > The introduction of tier local mode in detail is
> > https://tracker.ceph.com/issues/42286
> >
> > tier local mode: https://github.com/yanghonggang/ceph/commits/wip-tier-new
> >
> > This work is based on Ceph v12.2.5. I'm glad to port it to the master
> > branch if needed.
> >
> > Any advice and suggestions will be greatly appreciated.
> >
> > thx,
> >
> > Yang Honggang
> > _______________________________________________
> > Dev mailing list -- dev@xxxxxxx
> > To unsubscribe send an email to dev-leave@xxxxxxx
>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


