Re: [PATCH 05/15] Add io_uring IO interface

On 1/17/19 7:34 AM, Roman Penyaev wrote:
> On 2019-01-17 14:54, Jens Axboe wrote:
>> On 1/17/19 5:02 AM, Roman Penyaev wrote:
>>> Hi Jens,
>>>
>>> On 2019-01-16 18:49, Jens Axboe wrote:
>>>
>>> [...]
>>>
>>>> +static void *io_mem_alloc(size_t size)
>>>> +{
>>>> +	gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_COMP |
>>>> +				__GFP_NORETRY;
>>>> +
>>>> +	return (void *) __get_free_pages(gfp_flags, get_order(size));
>>>
>>> Since these pages are shared between kernel and userspace, do we need
>>> to care about d-cache aliasing on armv6 (or other "strange" archs
>>> which I've never seen) with vivt or vipt cpu caches?
>>>
>>> E.g. vmalloc_user() targets this problem by aligning kernel address
>>> on SHMLBA, so no flush_dcache_page() is required.
>>
>> I'm honestly not sure, it'd be trivial enough to stick a
>> flush_dcache_page() into the few areas we'd need it. The rings are
>> already page (SHMLBA) aligned.
> 
> On arm, SHMLBA is not one page, it is 4 pages.  So the userspace vaddr
> returned by mmap() is aligned, but the kernel address is not.  So indeed
> flush_dcache_page() should be used.

Oh indeed, my bad.
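
For reference, here's a minimal sketch of the vmalloc_user() route Roman
mentions. It's not what this patch does - the mmap path would then need
remap_vmalloc_range() instead of remapping the pages directly, and
io_mem_alloc_vmap() is a made-up name:

	static void *io_mem_alloc_vmap(size_t size)
	{
		/*
		 * vmalloc_user() zeroes the memory and aligns the kernel
		 * mapping on SHMLBA, so a later user mapping of the same
		 * pages doesn't alias on VIVT/aliasing-VIPT caches.
		 */
		return vmalloc_user(size);
	}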

> The other question which I can't answer myself is the order of
> flush_dcache_page() and smp_wmb().  Does flush_dcache_page() imply a
> flush of the cpu write buffer?  Or should smp_wmb() be done first, in
> order to flush everything to the cache?  Here is what the arm spec says
> about write-back caches:
> 
> "Writes that miss in the cache are placed in the write buffer and
> appear on the AMBA ASB interface. The CPU continues execution as
> soon as the write is placed in the write buffer."
> 
> So if you do flush_dcache_page() first, will it flush the write buffer?
> It seems that smp_wmb() should come first and then flush_dcache_page(),
> or am I going mad?

I don't think you're going mad! We'd first need smp_wmb() to order the
writes, then the flush_dcache_page(). For filling the CQ ring, we'd also
need to flush the page the cqe belongs to.
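
Roughly something like this (a rough, untested sketch; the ctx/ring names
follow this series, and I'm passing the cqe in only so the sketch can
flush its page):

	static void io_commit_cqring(struct io_ring_ctx *ctx,
				     struct io_uring_cqe *cqe)
	{
		struct io_cq_ring *ring = ctx->cq_ring;

		/* order the cqe stores before the tail store */
		smp_wmb();

		/* publish the new tail to userspace */
		WRITE_ONCE(ring->r.tail, ctx->cached_cq_tail);

		/*
		 * On VIVT (and aliasing VIPT) the user mapping sits at a
		 * different vaddr, so push both dirty pages out: the one
		 * holding the cqe and the one holding the tail.
		 */
		flush_dcache_page(virt_to_page(cqe));
		flush_dcache_page(virt_to_page(&ring->r.tail));
	}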

Question is if we care enough about performance on vivt to do something
about that. I know what my answer will be... If others care, they can
incrementally improve upon that.

-- 
Jens Axboe
