Re: [PATCH v2 3/3] x86: Support local_flush_tlb_kernel_range

On 06/15/2012 12:39 PM, Dan Magenheimer wrote:
>> From: Seth Jennings [mailto:sjenning@xxxxxxxxxxxxxxxxxx]
>>> The compression code already compresses to a per-cpu page pair
>>> and then that "zpage" is copied into the space allocated
>>> for it by zsmalloc.  For that final copy, if the copy code knows
>>> the target may cross a page boundary, has both target pages
>>> kmap'ed, and is smart about doing the copy, the "pair mapping"
>>> can be avoided for compression.
>>
>> The problem is that by "smart" you mean "has access to zsmalloc
>> internals".  zcache, or any user, would need the know the kmapped
>> address of the first page, the offset to start at within that page, and
>> the kmapped address of the second page in order to do the smart copy
>> you're talking about.  Then the complexity of doing the smart copy
>> would have to be implemented in each user.
> 
> Or simply add a zsmalloc_copy in zsmalloc and require that
> it be used by the caller (instead of a memcpy).
> 
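For concreteness, a minimal sketch of what such a zsmalloc_copy() helper
could look like -- purely illustrative, not the actual zsmalloc API.  It
assumes zsmalloc has already kmap'ed the two pages backing the allocation
and knows the object's offset within the first page, so the caller never
sees those internals:

/*
 * Hypothetical helper: split the copy at the page boundary so callers can
 * store an object without knowing how zsmalloc laid it out.
 */
static void zsmalloc_copy(void *page0_va, void *page1_va,
			  unsigned int off, const void *src, size_t len)
{
	size_t first = min_t(size_t, len, PAGE_SIZE - off);

	memcpy(page0_va + off, src, first);	/* part in the first page */
	if (len > first)			/* spill into the second page */
		memcpy(page1_va, src + first, len - first);
}
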
>>> The decompression path calls lzo1x directly and it would be
>>> a huge pain to make lzo1x smart about page boundaries.  BUT
>>> since we know that the decompressed result will always fit
>>> into a page (actually exactly a page), you COULD do an extra
>>> copy to the end of the target page (using the same smart-
>>> about-page-boundaries copying code from above) and then do
>>> in-place decompression, knowing that the decompression will
>>> not cross a page boundary.  So, with the extra copy, the "pair
>>> mapping" can be avoided for decompression as well.
>>
>> This is an interesting thought.
>>
>> But this does result in a copy in the decompression (i.e. page fault)
>> path, where right now, it is copy free.  The compressed data is
>> decompressed directly from its zsmalloc allocation to the page allocated
>> in the fault path.
> 
> The page fault occurs as soon as the lzo1x decompression code starts anyway,
> as do all the cache faults... both just occur earlier, so the only
> additional cost is the actual cpu instructions to move the sequence of
> (compressed) bytes from the zsmalloc-allocated area to the end
> of the target page.
> 
> TLB operations can be very expensive, not to mention (as the
> subject of this thread attests) non-portable.
>  
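For reference, a rough sketch of the extra-copy idea being debated above --
hypothetical code, reusing the split-copy pattern from the earlier sketch
and assuming (as the suggestion does) that in-place LZO decompression is
safe when the compressed bytes sit at the very end of the destination page;
LZO normally wants a small safety margin for this:

static int decompress_via_tail_copy(struct page *dst_page,
				    void *src0_va, void *src1_va,
				    unsigned int off, size_t clen)
{
	void *dst = kmap_atomic(dst_page);
	unsigned char *tail = dst + PAGE_SIZE - clen;
	size_t first = min_t(size_t, clen, PAGE_SIZE - off);
	size_t dlen = PAGE_SIZE;
	int ret;

	/* gather the (possibly page-crossing) compressed bytes at the tail */
	memcpy(tail, src0_va + off, first);
	if (clen > first)
		memcpy(tail + first, src1_va, clen - first);

	/* decompress in place, from the tail toward the start of the page */
	ret = lzo1x_decompress_safe(tail, clen, dst, &dlen);
	kunmap_atomic(dst);

	return (ret == LZO_E_OK && dlen == PAGE_SIZE) ? 0 : -EINVAL;
}
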

Even if you go for copying chunks followed by decompression, it still
requires two kmaps and kunmaps. Each of these requires one local TLB
invlpg, so it is a total of 2 local maps + unmaps even with this approach.

The only additional requirement of zsmalloc is that the two mappings be
virtually contiguous. The cost is the same in both approaches, but the
current zsmalloc approach presents a much cleaner interface.
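
Spelled out (hypothetical helper, kernel-style), the copy-based read path
would look roughly like this, with one kunmap_atomic() -- and hence one
local invlpg -- per physical page, the same TLB cost as tearing down the
virtually contiguous pair mapping:

static void copy_object_out(void *dst, struct page *p0, struct page *p1,
			    unsigned int off, size_t len)
{
	size_t first = min_t(size_t, len, PAGE_SIZE - off);
	void *va;

	va = kmap_atomic(p0);			/* map #1 */
	memcpy(dst, va + off, first);
	kunmap_atomic(va);			/* local invlpg #1 */

	if (len > first) {
		va = kmap_atomic(p1);		/* map #2 */
		memcpy(dst + first, va, len - first);
		kunmap_atomic(va);		/* local invlpg #2 */
	}
}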

Thanks,
Nitin

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx

