Re: Interest in Implementing Cache layer on top of NVME Driver [GSoC]

On Tue, Mar 1, 2016 at 4:26 PM, Rajath Shashidhara
<rajath.shashidhara@xxxxxxxxx> wrote:
> Hello,
>
> I am a GSoC 2016 aspirant. I have browsed through your ideas page, and I
> find that the above project best aligns with my experience and interests.
>
> During the course of my academics, I have developed an interest in
> performance-critical software. I am familiar with the parallel and
> distributed computing paradigms and would like to gain further
> experience in this field.
>
> I have experience developing page buffers for efficient I/O from
> secondary storage. I have led the development of External Memory
> (secondary storage) data structures [1] for C++ over the last few
> months. We have implemented custom buffering (modifications of the LRU
> strategy) specific to each data structure. Features of the page buffer
> include support for page pinning, priority paging (pages with higher
> priority are harder to evict than lower-priority pages with comparable
> recency of access), and prefetching (with a strategy specific to each
> data structure). (It is an ongoing project, and we are expanding the
> number of data structures and corresponding optimizations.)
>
> I will familiarize myself with the architecture of Ceph, the userspace
> NVMe driver and the cache requirements. Please point me to the right
> resources and documentation related to this problem. It would be great
> if you could suggest a bug to fix or any contribution that would help
> me understand the underlying source code.

Hmm, I suppose you have a basic overview of the Ceph OSD side.
Currently BlueStore manages the block device directly. Normally the
kernel page cache (as well as the buffer cache) would cache data and
metadata, but the userspace NVMe driver is used precisely to bypass the
kernel layer, including the page cache. So we can't make use of any
kernel-side cache.

In general, a metadata cache is critical for I/O latency: without one,
most I/Os will hit the persistent device, which costs far more than
serving metadata from memory, even when the backend is an NVMe SSD.
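
To put rough numbers on that (ballpark figures for illustration, not
measurements): if a memory hit costs about 0.1 us and an NVMe read
about 50 us, the average lookup latency is

  avg = h * t_mem + (1 - h) * t_nvme

so a 99% hit rate gives roughly 0.6 us per lookup, while no cache at
all gives 50 us -- nearly two orders of magnitude worse.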

So we need to implement a cache layer above the NVMe driver to
accelerate metadata and data reads.
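
To make the shape of that concrete, here is a minimal read-through LRU
sketch in C++. It is only a sketch under assumed names -- in particular
nvme_read() is a hypothetical placeholder for the userspace driver's
read path, not a real SPDK or BlueStore API -- and a real cache would
also need write handling, sharding and locking:

  #include <cstdint>
  #include <list>
  #include <unordered_map>
  #include <vector>

  class BlockCache {
    struct Entry {
      uint64_t lba;
      std::vector<char> data;
    };
    size_t capacity_;                      // max number of cached blocks
    std::list<Entry> lru_;                 // front = most recently used
    std::unordered_map<uint64_t, std::list<Entry>::iterator> index_;

    // Hypothetical hook into the userspace NVMe driver's read path.
    std::vector<char> nvme_read(uint64_t lba);

   public:
    explicit BlockCache(size_t capacity) : capacity_(capacity) {}

    // Serve from cache if possible, otherwise read through the driver.
    std::vector<char> read(uint64_t lba) {
      auto it = index_.find(lba);
      if (it != index_.end()) {
        lru_.splice(lru_.begin(), lru_, it->second);  // promote to MRU
        return it->second->data;
      }
      std::vector<char> buf = nvme_read(lba);
      lru_.push_front(Entry{lba, buf});
      index_[lba] = lru_.begin();
      if (lru_.size() > capacity_) {       // evict least recently used
        index_.erase(lru_.back().lba);
        lru_.pop_back();
      }
      return buf;
    }
  };

The interesting decisions sit on top of a skeleton like this: separate
pools and policies for metadata vs. data, and integration with the
driver's completion path.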

I think that's the initial background for this idea. Since we also have
ObjectCacher on the client side, it would be good if we could unify the
two. But ObjectCacher is quite complex at this point, so you may not
want to take it on at first...

@sage @sam do you have any other points about this idea?

>
> Note: I successfully completed GSoC 2013 with Apache OpenOffice.
>
> Thank you,
> Rajath Shashidhara
>
> [1] https://github.com/ExternalMemoryDS/external-mem-ds


