On 07/04/2012 05:22 PM, Nikolaus Rath wrote:
> On 07/04/2012 03:11 AM, Pavel Emelyanov wrote:
>> On 07/04/2012 07:01 AM, Nikolaus Rath wrote:
>>> Hi Pavel,
>>>
>>> I think it's great that you're working on this! I've been waiting for
>>> FUSE to be able to supply write data in bigger chunks for a long time,
>>> and I'm very excited to see some progress on this. I'm not a kernel
>>> developer, but I'll be happy to try the patches.
>>
>> Just to make it clear: I didn't increase the 32 pages per request limit.
>> What I did was make FUSE submit more than one request at a time while
>> serving massive writes. So yes, bigger chunks can now be seen by the
>> daemon, but it has to read several requests to get them.
>
> Ah, I thought that your patch would do both. So with the patch a
> userspace client can now write data in, say, 4 kb chunks, and the FUSE
> daemon will still receive it from the kernel in 128 kb chunks?

Yes, in 128 kb per request.

> But if the client writes, say, a 1 MB chunk, the FUSE daemon will still
> see 8 128kb write requests?

Yes, but this time the daemon can see all 8 requests at once. Before the
patch you had to ack the 1st request before seeing the 2nd one.

> Would it be very hard to raise the 32 pages per request limit at the
> same time?

I haven't looked at it yet, but it doesn't seem too hard. The only problem
that can arise is compatibility with existing binaries: I don't know whether
every single FUSE daemon is capable of handling this change... OTOH this
parameter could be made configurable at init time with a default value of
32, but that is a separate task.

>>>> A good solution to this is switching the FUSE page cache to a write-back policy.
>>>> With this, file data are pushed to userspace in big chunks (depending on the
>>>> dirty memory limits, but this is much more than 128k), which lets the FUSE
>>>> daemons handle the size updates in a more efficient manner.
>>>>
>>>> The writeback feature is per-connection and is explicitly configurable at the
>>>> init stage (is it worth making it CAP_SOMETHING protected?)
>>>
>>> From your description it sounds as if the only effect of write-back is
>>> to increase the chunk size. Why the need to require special
>>> privileges for this?
>>
>> Provided I understand the code correctly: if the FUSE daemon turns writeback
>> on and sets the per-bdi dirty limit too high, it can cause a deadlock on the
>> box. Thus the daemon should be trusted by the kernel, i.e. privileged.
>
> Wouldn't it be more reasonable to enforce that the bdi dirty limit is
> not set too high then?

Hardly, since if you have several mounts with a small bdi limit each, the
sum of them can still be high.

> Thanks,
>
>  -Nikolaus
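
To make the "configurable at the init stage" point concrete, below is a
minimal sketch of how a low-level FUSE daemon opts into per-connection
writeback caching in its init callback. It uses the FUSE_CAP_WRITEBACK_CACHE
capability flag and the conn->capable / conn->want negotiation that libfuse 3
exposes for this; the thread above predates that interface, so treat the flag
name and negotiation as an illustration of the idea, not the API of the patch
set under discussion.

#define FUSE_USE_VERSION 30
#include <fuse_lowlevel.h>

/*
 * Sketch only: opt into writeback caching for this connection if the
 * kernel offers it.  Daemons that never request the capability keep the
 * old write-through behaviour, which is the compatibility story
 * discussed above.
 */
static void example_init(void *userdata, struct fuse_conn_info *conn)
{
    (void) userdata;

    /* Request writeback caching only when the kernel advertises it. */
    if (conn->capable & FUSE_CAP_WRITEBACK_CACHE)
        conn->want |= FUSE_CAP_WRITEBACK_CACHE;
}

static const struct fuse_lowlevel_ops example_ops = {
    .init = example_init,
    /* lookup/read/write/... handlers omitted */
};

Whether such an opt-in should additionally require a privileged caller (the
CAP_SOMETHING question above) is a policy decision on the kernel side and
does not show up in the daemon-side code.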