Re: Ceph librbd caching implementation development

Thank you again for the help, Jason.

Previously we were mounting a filesystem on the block device and
reading a file from it (still using the O_DIRECT and O_SYNC flags),
but reading directly from the block device itself with the same flags
seems to have resolved the issue.
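
In case it is useful to anyone else hitting the same issue, here is a
minimal sketch of the kind of direct read we are doing now (the
/dev/nbd0 path and the 4 KiB request size are just illustrative);
O_DIRECT also needs the buffer and the request size aligned, hence the
posix_memalign() call:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;    /* aligned request size */
    void *buf = NULL;

    /* O_DIRECT requires an aligned buffer as well */
    if (posix_memalign(&buf, 4096, len) != 0) {
        perror("posix_memalign");
        return 1;
    }

    /* open the nbd device itself, bypassing the page cache */
    int fd = open("/dev/nbd0", O_RDONLY | O_DIRECT | O_SYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read one aligned block from offset 0 */
    ssize_t n = pread(fd, buf, len, 0);
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes\n", n);

    close(fd);
    free(buf);
    return 0;
}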

--------
Spencer Melnick
On Thu, Oct 11, 2018 at 1:58 PM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>
> On Wed, Oct 10, 2018 at 8:55 PM Spencer Melnick <smelnick97@xxxxxxxxx> wrote:
> >
> > Hi Jason,
> >
> > Thank you for the reply. Based on testing several different options, I
> > believe that the modification is occurring due to the page cache.
> >
> > Running our test program initially produces read requests at the
> > ImageCache level.
> > A second run of the test program produces no read requests in
> > ImageCache; however, by flushing the page cache using
> >
> > sync; echo 3 > /proc/sys/vm/drop_caches
> >
> > and re-running the test program we again see read requests in ImageCache.
> > Checking the scheduler for our virtual block device, nbd0 via
> >
> > cat /sys/block/nbd0/queue/scheduler
> >
> > produces [none].
> > Likewise,
> >
> > cat /sys/block/nbd0/queue/read_ahead_kb
> >
> > produces 0.
> > Because of this, I am assuming the requests are being modified
> > solely by the page cache.
> > To fix this, we've tried mounting the virtual block device both
> > with and without the -o sync flag and modifying the test program to
> > read a file using the O_DIRECT flag, but neither seems to have any
> > effect.
> >
> > I apologize, as this is somewhat unrelated to Ceph, but do you know of
> > any other methods to disable the page cache, if possible solely for
> > the virtual block device?
>
> I would have expected that if you open the "/dev/nbdX" block device w/
> the O_DIRECT flag, all subsequent reads and writes would bypass the
> page cache.
>
> >
> > Thanks,
> > -----------
> > Spencer Melnick
> >
> > On Sat, Oct 6, 2018 at 9:04 AM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Oct 5, 2018 at 7:00 PM Spencer Melnick <smelnick97@xxxxxxxxx> wrote:
> > > >
> > > > Hello everyone,
> > > >
> > > > I am currently working on implementing a custom caching algorithm in
> > > > Ceph's librbd using the ImageCache interface, but I have found that
> > > > not all of the requests we expect are being processed by the
> > > > ImageCache::aio_read() function.
> > > >
> > > > Specifically, when we create an rbd image, map it to an nbd device, and
> > > > mount a Linux filesystem on this device, we run a series of file read
> > > > requests to test the algorithm. However, we find that the number of
> > > > requests dispatched to ImageCache::aio_read() is not the same as the
> > > > number of requests made by our test program.
> > > >
> > > > Normally this would not be a problem; however, our algorithm relies on
> > > > having a complete data stream to perform some predictive caching. Is
> > > > there some kind of caching happening at a higher level of Ceph that
> > > > must be turned off first?
> > >
> > > If you are using rbd-nbd (via the nbd block device), then I suspect
> > > you are just seeing the Linux kernel IO scheduler and/or page cache
> > > altering your requests. From the point of view of rbd-nbd, any read
> > > request received from the kernel will be passed unmodified to
> > > "ImageCache::aio_read".
> > >
> > > > Thanks,
> > > > -----------
> > > > Spencer Melnick
> > >
> > >
> > >
> > > --
> > > Jason
>
>
>
> --
> Jason


