Re: udev rule to set readahead on Ceph RBD's

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alex Gorbachev
> Sent: 23 August 2016 16:43
> To: Wido den Hollander <wido@xxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>; Nick Fisk <nick@xxxxxxxxxx>
> Subject: Re:  udev rule to set readahead on Ceph RBD's
> 
> On Mon, Aug 22, 2016 at 3:29 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> >
> >> Op 22 augustus 2016 om 21:22 schreef Nick Fisk <nick@xxxxxxxxxx>:
> >>
> >>
> >> > -----Original Message-----
> >> > From: Wido den Hollander [mailto:wido@xxxxxxxx]
> >> > Sent: 22 August 2016 18:22
> >> > To: ceph-users <ceph-users@xxxxxxxxxxxxxx>; nick@xxxxxxxxxx
> >> > Subject: Re:  udev rule to set readahead on Ceph RBD's
> >> >
> >> >
> >> > > Op 22 augustus 2016 om 15:17 schreef Nick Fisk <nick@xxxxxxxxxx>:
> >> > >
> >> > >
> >> > > Hope it's useful to someone
> >> > >
> >> > > https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6
> >> > >
> >> >
> >> > Thanks for sharing. Might this be worth adding to ceph-common?
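
For anyone who doesn't want to click through, the rule essentially matches the krbd block devices and writes to their queue/read_ahead_kb attribute. A minimal sketch along those lines (not necessarily byte-for-byte what's in the gist; the 16384, i.e. 16MB, is just the figure discussed further down):

    # /etc/udev/rules.d/99-rbd-readahead.rules  (example filename)
    # Match whole RBD disks (not partitions) as they are mapped and set
    # their readahead to 16MB (the attribute is in KB)
    KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", ACTION=="add", ATTR{queue/read_ahead_kb}="16384"

Drop it in place and "udevadm control --reload-rules" makes it apply to newly mapped devices.
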
> >>
> >> Maybe. Ilya kindly set the default for krbd to 4MB in the kernel last year, but having this available could be handy if
> >> people ever want a different default. It could be set to 4MB as well, with a note somewhere to point people in its
> >> direction if they need to change it.
> >>
> >
> > I think it might be handy to have the udev file as redundancy. That way it can easily be changed by users: the udev
> > file is already present, they just have to modify it.
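
And for anyone who just wants to see or change what a mapped device is using right now, no udev involved (assuming the image is mapped as /dev/rbd0):

    # Current readahead, in KB, for the mapped device
    cat /sys/block/rbd0/queue/read_ahead_kb
    # Override it on the fly (as root); the setting is lost once the
    # device is unmapped
    echo 16384 > /sys/block/rbd0/queue/read_ahead_kb
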
> >
> >> >
> >> > And is 16MB something we should want by default, or is that more specific to your situation?
> >>
> >> It sort of applies to me. With a 4MB readahead you will probably struggle to get much more than around 50-80MB/s
> >> sequential reads, as the readahead will only ever hit one object at a time. If you want to get nearer 200MB/s then you
> >> need to set either a 16MB or 32MB readahead. I need it to stream to LTO6 tape. Depending on what you are doing, this
> >> may or may not be required.
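
To put rough numbers on that: with the default 4MB objects, the readahead window caps how many objects a sequential read keeps in flight, so 4MB readahead means roughly one object at a time, 16MB roughly four, and 32MB roughly eight. blockdev exposes the same knob, just in 512-byte sectors rather than KB (device name assumed to be /dev/rbd0):

    # 16MB readahead = 16 * 1024 * 1024 / 512 = 32768 sectors
    blockdev --setra 32768 /dev/rbd0
    blockdev --getra /dev/rbd0    # reports the value back, in sectors
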
> >>
> >
> > Ah, yes. In a kind of similar use case I went for 64MB objects underneath an RBD device. We needed high sequential
> > write and read performance on those RBD devices, since we were storing large files on there.
> >
> > Different approach, kind of similar result.
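
For anyone wanting to try that approach: the object size is fixed when the image is created. A sketch with made-up pool/image names (object size is 2^order bytes, so order 26 gives 64MB; the default order 22 gives 4MB):

    # 100GB image built from 64MB objects
    rbd create --size 102400 --order 26 rbd/bigfiles
    # newer rbd releases also accept --object-size 64M in place of --order 26
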
> 
> Question: what scheduler were you guys using to facilitate the readahead on the RBD client? Have you noticed any difference
> between the different elevators, and have you tried blk-mq/scsi-mq?

I thought that since kernel 3.19 you didn't have a choice and krbd always used blk-mq? In any case, that's what I'm using as the default.
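
A quick way to check what a mapped device is actually doing (again assuming /dev/rbd0):

    # A blk-mq device reports "none" here, as the legacy elevators don't
    # attach to it; a single-queue device would show e.g. "noop deadline [cfq]"
    cat /sys/block/rbd0/queue/scheduler

Worth noting that readahead is driven by the kernel's page cache readahead logic rather than by the elevator, so I wouldn't expect the scheduler choice to make much difference to it.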

> 
> Thank you.
> --
> Alex Gorbachev
> Storcium
> 
> 
> >
> > Wido
> >
> >> >
> >> > Wido
> >> >

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


