> On 23 August 2016 at 18:32, Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >> -----Original Message-----
> >> From: Wido den Hollander [mailto:wido@xxxxxxxx]
> >> Sent: 22 August 2016 18:22
> >> To: ceph-users <ceph-users@xxxxxxxxxxxxxx>; nick@xxxxxxxxxx
> >> Subject: Re: udev rule to set readahead on Ceph RBD's
> >>
> >> > On 22 August 2016 at 15:17, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >> >
> >> > Hope it's useful to someone
> >> >
> >> > https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6
> >> >
> >>
> >> Thanks for sharing. Might this be worth adding to ceph-common?
> >
> > Maybe. Ilya kindly set the default for krbd to 4MB last year in the kernel, but having this available would be handy if people ever want a different default. It could be set to 4MB as well, with a note somewhere to point people in its direction if they need to change it.
>
> I remember you running tests and us talking about it, but I didn't
> actually do it - the default is still the standard kernel-wide 128k.
> I hesitated because it's obviously a trade-off and we didn't have
> a clear winner. Whatever (sensible) default we pick, users with
> demanding all-sequential workloads would want to crank it up anyway.
>
> I don't have an opinion on the udev file.

I would vote for adding it to ceph-common so that it's there and users can easily change it. We can still default it to 128k, which makes it just a file change for users.

Wido

> >>
> >> And is 16MB something we should want by default, or does this apply better to your situation?
> >
> > It sort of applies to me. With a 4MB readahead you will probably struggle to get much more than around 50-80MB/s sequential reads, as the readahead will only ever hit one object at a time. If you want to get nearer 200MB/s then you need to set either 16MB or 32MB readahead. I need it to stream to LTO6 tape. Depending on what you are doing, this may or may not be required.
>
> Thanks,
>
>                 Ilya

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
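
[Editor's note] The gist linked in the thread is not reproduced here, so for readers who don't want to follow the link, a minimal sketch of the kind of udev rule being discussed is shown below. The filename, the 16MB value, and the match expressions are illustrative assumptions, not the contents of Nick's gist:

    # /etc/udev/rules.d/99-rbd-readahead.rules   (hypothetical filename)
    # Write 16 MB (16384 KB) into the block-layer readahead attribute of
    # every kernel RBD device as it is mapped. DEVTYPE=="disk" skips
    # partitions, which have no queue/ directory in sysfs.
    ACTION=="add", SUBSYSTEM=="block", KERNEL=="rbd*", ENV{DEVTYPE}=="disk", ATTR{queue/read_ahead_kb}="16384"

After dropping such a file in place, "udevadm control --reload" followed by "udevadm trigger --subsystem-match=block" (or simply remapping the image) applies it to already-mapped devices.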
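
[Editor's note] Before baking a value such as 16MB or 32MB into a rule or a ceph-common default, it can be tested at runtime through the standard Linux block-layer interfaces. This is a generic sketch; /dev/rbd0 is just an example device name:

    # Current readahead for an RBD device, in kilobytes (kernel default is 128):
    cat /sys/block/rbd0/queue/read_ahead_kb

    # Try 32 MB for a sequential-read test (e.g. streaming to LTO tape):
    echo 32768 > /sys/block/rbd0/queue/read_ahead_kb

    # The same via blockdev, which works in 512-byte sectors:
    blockdev --getra /dev/rbd0          # report readahead in sectors
    blockdev --setra 65536 /dev/rbd0    # 65536 * 512 B = 32 MB

Changes made this way last only until the device is unmapped, which is what makes a udev rule or packaged default useful for persistence.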