Re: RBD cache being filled up in small increases instead of 4MB

On Sat, Jul 15, 2017 at 5:35 PM, Ruben Rodriguez <ruben@xxxxxxx> wrote:
>
>
> On 15/07/17 15:33, Jason Dillaman wrote:
>> On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
>>> Unless you tell the RBD client not to disable readahead after it has read the first X bytes (rbd readahead disable after bytes=0), it will stop reading ahead and will only cache exactly what the client requests.
>>
>> The default is to disable librbd readahead caching after reading 50MB
>> -- since we expect the OS to take over and do a much better job.
>
> I understand the expectation that the client would do the right
> thing, but as far as I can tell that is not the case. I've run out
> of ways to try to make virtio-scsi (or any other driver) *always*
> read in 4MB increments. "minimum_io_size" seems to be ignored.
> BTW I just sent this patch to Qemu (and I'm open to any suggestions
> on that side!): https://bugs.launchpad.net/qemu/+bug/1600563
>
> But this expectation you mention still has a problem: if you only
> put in the RBD cache what the OS specifically requested, the chances
> of that data being requested twice are pretty low, since the OS page
> cache would handle it better than the RBD cache anyway. So why
> bother having a read cache if it doesn't fetch anything extra?

You are correct that the read cache is of little value -- the OS will
always do a better job than we can at caching the necessary data. The
main use-case for the read cache, in general, is just to service
readahead, or to cover cases where the librbd client application isn't
providing its own cache (e.g. direct IO).
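
For illustration, here is roughly what such a client looks like: a
minimal read through the librbd Python bindings, with no caching of
its own beyond whatever the RBD cache provides. The pool name 'rbd'
and image name 'test-image' below are just placeholders.

  import rados
  import rbd

  # Connect to the cluster using the local ceph.conf.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')        # placeholder pool name
  image = rbd.Image(ioctx, 'test-image')   # placeholder image name
  try:
      # This read goes straight to librbd; the RBD cache (if enabled)
      # is the only cache sitting between this process and the OSDs.
      data = image.read(0, 4096)
  finally:
      image.close()
      ioctx.close()
      cluster.shutdown()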

> Incidentally, if the RBD cache were to include the whole object instead
> of just the requested portion, RBD readahead would be unnecessary.

Not necessarily: readahead only fetches the next N objects once it
detects a sequential read pattern, whereas always caching the whole
object would slow down the other (vast majority of) IO requests.
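
For reference, readahead behaviour is driven by a handful of librbd
options. A sketch of a [client] section that keeps readahead enabled
indefinitely would look like the following; the first two values are
the defaults as far as I recall, so double-check them against your
release:

  [client]
  # number of sequential requests needed before readahead kicks in
  rbd readahead trigger requests = 10
  # upper bound on how much a single readahead will prefetch
  rbd readahead max bytes = 524288
  # 0 = never stop reading ahead; the default is 52428800,
  # i.e. the 50MB hand-off mentioned above
  rbd readahead disable after bytes = 0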

> --
> Ruben Rodriguez | Senior Systems Administrator, Free Software Foundation
> GPG Key: 05EF 1D2F FE61 747D 1FC8 27C3 7FAC 7D26 472F 4409
> https://fsf.org | https://gnu.org



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


