Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree

On (04/19/17 15:20), Minchan Kim wrote:
[..]
> > after a quick look I haven't found a clear_page() that would impose
> > alignment requirements. but ok, `mem' _can_ be kmalloc-ed there, yes.
> > 
> > a side question:
> > how much would it hurt to switch partial IO to a page-size aligned
> > allocator instead of the slab allocator? and just use the potentially
> > faster *_page() functions instead of mem* functions? I think people
> > normally don't see
> 
> You meant alloc_page, or wanted to create our own slab cache with a
> PAGE_SIZE alignment requirement?

alloc_page. yes.
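
roughly what I have in mind (untested sketch; zram_alloc_bounce_page()
is a made-up helper, not in the tree):

	/* page-aligned bounce buffer for partial IO, so the potentially
	 * faster *_page() helpers remain safe to use on it.
	 */
	static void *zram_alloc_bounce_page(struct page **pagep)
	{
		struct page *page = alloc_page(GFP_NOIO);

		if (!page)
			return NULL;
		*pagep = page;
		/* no __GFP_HIGHMEM, so page_address() is always valid */
		return page_address(page);
	}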

> Either way, it makes more changes, so it would not be proper -stable
> material, IMHO. As well, this path (zero-page) would be rare, so it
> wouldn't hurt performance either.

oh, sure. I meant upstream, not -stable. sorry for the confusion.

> > partial IOs (well, unless they use some 'weird' filesystems). while
> > copy_page()->memcpy() happens to everyone.
> > 
> > 
> > >  	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
> > >  	if (size == PAGE_SIZE)
> > > -		copy_page(mem, cmem);
> > > +		memcpy(mem, cmem, PAGE_SIZE);
> > 
> > I think here we have both a page-size aligned `mem' and a page-size
> > aligned `cmem'. `mem' is guaranteed to be a kmapped ->bv_page, which is
> > allocated by alloc_page(), and zs_map_object() returns a kmapped
> > alloc_page()-ed page for a huge-class object (which is the case here,
> > isn't it?). so we can keep copy_page()? am I wrong? I'm afraid we can
> > slow down a regularly executed (semi-hot) path.
> 
> No, you're right, but as I wrote in the description, I don't want to
> rely on zsmalloc's internal implementation.

well, zsmalloc() won't change that much in -stable ;)
but up to you.
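
if we want to keep the fast path in mainline, maybe something like this
(untested sketch; assumes the PAGE_ALIGNED() checks are cheap enough to
not matter):

	if (size == PAGE_SIZE) {
		/* copy_page() may assume page alignment on some archs */
		if (PAGE_ALIGNED(mem) && PAGE_ALIGNED(cmem))
			copy_page(mem, cmem);
		else
			memcpy(mem, cmem, PAGE_SIZE);
	}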

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>



> And this bad-compression-ratio path would be rare, too.

"bad compression" is not uncommon. I wish it was, but it's not. we can't
really guarantee/expect anything, it's up to compression algorithm and data.


I added two zram->stats counters, bad_compression and good_compression,
and did

-       if (unlikely(comp_len > max_zpage_size))
+       if (unlikely(comp_len > max_zpage_size)) {
+               atomic64_inc(&zram->stats.bad_compression);
                comp_len = PAGE_SIZE;
+       } else {
+               atomic64_inc(&zram->stats.good_compression);
+       }

in zram_compress().
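
with the two counters declared in struct zram_stats (sketch of the
obvious part, omitted from the diff above):

+       atomic64_t bad_compression;     /* comp_len > max_zpage_size */
+       atomic64_t good_compression;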

and in mm_stat_show()

	        ret = scnprintf(buf, PAGE_SIZE,
-                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu\n",
+                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",

...

+                       (u64)atomic64_read(&zram->stats.bad_compression),
+                       (u64)atomic64_read(&zram->stats.good_compression));


so of the two new (last) columns, bad_compression comes first and
good_compression last.


and here are some of the results (LZO):

cat /sys/block/zram0/mm_stat 
224010240 186613588 188694528        0 188694528    15958        0    37818    16878

'bad' compression is twice as common as 'good' compression on some of my
workloads (37818 vs 16878).

cat /sys/block/zram0/mm_stat 
482918400 404083224 409276416        0 409276416    15138        0    76789    45188

# cat /sys/block/zram0/mm_stat
919818240 581986600 596762624        0 596762624    14296        0   119052   187966



=== a different test:

# cat /sys/block/zram0/mm_stat 
2976886784 2061379701 2087194624        0 2087194624     5752        0   387983   341593



=== another test:

# cat /sys/block/zram0/mm_stat 
975462400 341683056 356196352        0 356196352     9172        0    15833   224226

ok, not too bad.

# cat /sys/block/zram0/mm_stat
1841291264 933397356 958345216        0 958345216     5940        0   116175   336128

# cat /sys/block/zram0/mm_stat
3100012544 2183029827 2208866304        0 2208866304     5447        0   417539   342450

# cat /sys/block/zram0/mm_stat 
3338133504 2265031857 2294886400        0 2294886400     1199        0   420382   402257

meh.

# cat /sys/block/zram0/mm_stat 
3057053696 2128156096 2153943040        0 2295697408      104     3903   420530   420055

close to 50/50.

# cat /sys/block/zram0/mm_stat 
3094814720 2142308405 2168918016        0 2295697408      296     3903   420727   431247

# cat /sys/block/zram0/mm_stat
3177267200 2172831422 2201022464        0 2295697408      346     3903   421139   455748

# cat /sys/block/zram0/mm_stat 
3338510336 2269447044 2299387904        0 2299387904     1179     3983   434806   483659

end of test.


[..]
> > here `src' is a kmapped ->bv_page, which is always page-size aligned.
> > it's allocated by alloc_page(). `cmem' is also page-size aligned,
> > isn't it? seems that we can use copy_page(). am I missing something?
> > I'm afraid we can slow down a regularly executed (semi-hot) path here.
> 
> I understand your concern about the slowdown, but I believe these are
> not performance-sensitive paths, because they are rare, so I wanted to
> bias toward safety for the -stable material; we can make it fast in
> mainline. :)

ok.

	-ss


