Re: How to do DIRECT IO on kernel buffer

Remember to bottom-post on lkml lists. See below.

On Mon, Oct 18, 2010 at 4:14 PM, Rajat Sharma <fs.rajat@xxxxxxxxx> wrote:
> Greg,
>
> Okay, I'll give you an example: currently ecryptfs does not support direct
> I/O. If it wants to encrypt data in kernel buffers, and the file was opened
> in direct I/O mode, the lower filesystem should not complain about those
> kernel buffers.
>
> Thanks,
> Rajat
>
> On Tue, Oct 19, 2010 at 12:56 AM, Greg Freemyer <greg.freemyer@xxxxxxxxx>
> wrote:
>>
>> On Mon, Oct 18, 2010 at 3:19 AM, Rajat Sharma <fs.rajat@xxxxxxxxx> wrote:
>> >
>> >> Doesn't that mean user space writes to the block device? Simply based on
>> >> that, I can see why you fail all these times... because you do it from
>> >> kernel space, and that's against the very basic meaning of direct I/O.
>> >
>> > You can visualize direct I/O as serving two purposes:
>> >
>> > 1. Zero-copy directly from user buffers to the block device, as you
>> > pointed out.
>> > 2. Don't make an extra effort to cache the data in the page cache, as
>> > that may have already been done by the application.
>> >
>> > In some cases, you may not be able to afford 1., e.g. you may want to
>> > manipulate the data before writing it to the target. Although this could
>> > be done in the same user-space buffer, you may not want to do that,
>> > because it may change the application's working data set.
>> >
>> > Anyway, to cut it short again: I put my requirements in my original
>> > mail, and I have one solution where I map pages into user space. I am
>> > looking for a better solution, or verification that there is none, where
>> > I can directly use kernel buffers for doing non-cached I/O.
>> >
>> > Rajat
>>
>> Rajat,
>>
>> I'm not specifically familiar with a use case in the kernel where the
>> data passed in via a direct I/O call is copied to kernel buffers,
>> manipulated, then passed on.  Thus I seriously doubt your use case is
>> supported.
>>
>> Why do you want to do that in the first place?
>>
>> Greg

So using encryption as an example, it looks like you have a very valid
use case.  I.e., I can easily envision Oracle running on an encrypted
filesystem, and Oracle is optimized for direct I/O as I understand it.

So it seems to me the best option is to see whether you can put together
a patch to create direct-io-no-cache.  (A better name would be much
appreciated, I'm sure.)

Then if userspace opens a file with O_DIRECT but the file is on a
filesystem that can't support full O_DIRECT functionality for the
reasons you give, the kernel could automatically fall back to
O_DIRECT_NO_CACHE.

I don't really know, but that doesn't seem that hard to write.  And
given that it's generic and has some real use cases, I can see it
getting into the vanilla kernel.

Greg




