Re: [PATCH 0/1] Possible bug in zram on ppc64le on vfat

On (23/08/07 14:44), Ian Wienand wrote:
[..]
> 
> At this point, because this test fills from /dev/zero, the zsmalloc
> pool doesn't actually have anything in it.  The filesystem metadata is
> in-use from the writes, and is not written out as compressed data.
> The zram page de-duplication has kicked in, and instead of handles to
> zsmalloc areas for data we just have "this is a page of zeros"
> recorded.  So this is correctly reflecting the fact that we don't
> actually have anything compressed stored at this time.
> 
> >  >> If we do a "sync" then redisplay the mm_stat after, we get
> >  >>   26214400     2842    65536 26214400   196608      399        0        0
> 
> Now that we've finished writing all our zeros and have synced, we
> would have finished updating vfat allocations, etc.  So this gets
> compressed and written, and we're back to having some small FS
> metadata compressed in our 1 page of zsmalloc allocations.
> 
> I think what is probably "special" about this reproducer system is
> that it is slow enough to allow the zero allocation to persist between
> the end of the test writes and examining the stats.
> 
> I'd be happy for any thoughts on the likelihood of this!

Thanks for looking into this.

Yes, the fact that /dev/urandom shows non-zero values in mm_stat means
that we don't have anything fishy going on in zram; instead we very
likely have ZRAM_SAME pages, which never reach the zsmalloc pool and
don't use any physical pages.
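
For reference, the detection is just a scan of the page as an array of
unsigned longs; roughly something like this (a simplified sketch of
page_same_filled() in drivers/block/zram/zram_drv.c, not the verbatim
kernel code):

	/*
	 * Sketch: a page is ZRAM_SAME if every word-sized element in
	 * it holds the same value; zram then records just that value
	 * instead of allocating a zsmalloc handle for compressed data.
	 */
	static bool page_same_filled(void *ptr, unsigned long *element)
	{
		unsigned long *page = ptr;
		unsigned long val = page[0];
		unsigned int pos, last = PAGE_SIZE / sizeof(*page) - 1;

		if (val != page[last])
			return false;
		for (pos = 1; pos < last; pos++)
			if (val != page[pos])
				return false;
		*element = val;	/* the repeated word, e.g. 0 for /dev/zero */
		return true;
	}

Pages written from /dev/zero trivially pass this check, which is why
they never show up in compr_data_size.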

And that is what the 145 in the mm_stat posted earlier is: we have 145
pages that are each filled with the same byte pattern:

> >  >> however, /sys/block/zram1/mm_stat shows
> >  >>   9502720        0        0 26214400   196608      145        0        0
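
(For reference, per Documentation/admin-guide/blockdev/zram.rst the
mm_stat columns are:

	orig_data_size compr_data_size mem_used_total mem_limit
	mem_used_max same_pages pages_compacted huge_pages

so the 6th column, same_pages, is the 145 here, while compr_data_size
and mem_used_total are 0 -- nothing is actually sitting in zsmalloc.)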

> If we think this is right, then the point of the end of this test [1]
> is to ensure a high reported compression ratio on the device,
> presumably to ensure the compression is working.  Filling it with
> urandom would be unreliable in this regard.  I think what we want to
> do is something highly compressible, like alternating lengths of 0x00
> and 0xFF.

So variable-length 0x00/0xff runs should work; just make sure that the
data doesn't degenerate into a repeating pattern of sizeof(unsigned long)
length, or those pages will again be recorded as ZRAM_SAME rather than
compressed.
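
For example, a minimal user-space sketch (hypothetical helper, not
from the test suite) that produces highly compressible data without
ever creating a word-periodic page:

	#include <stddef.h>
	#include <string.h>

	/*
	 * Fill buf with alternating runs of 0x00 and 0xff whose lengths
	 * cycle through 1..13 bytes: highly compressible, but no 4K page
	 * ever reduces to one repeating sizeof(unsigned long) word, so
	 * zram compresses it instead of marking it ZRAM_SAME.
	 */
	static void fill_compressible(unsigned char *buf, size_t len)
	{
		size_t off = 0, run = 1;
		unsigned char byte = 0x00;

		while (off < len) {
			size_t n = run < len - off ? run : len - off;

			memset(buf + off, byte, n);
			off += n;
			byte ^= 0xff;		/* alternate 0x00 / 0xff */
			run = run % 13 + 1;	/* vary run length 1..13 */
		}
	}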

I think fio has an option to generate binary data with a certain level
of compressibility. If that option works, then maybe you can just use
fio with some static buffer_compress_percentage configuration.
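
Something along these lines, perhaps (untested sketch; option names as
I remember them from the fio documentation):

	; write ~75%-compressible data straight to the zram device
	[zram-compress-test]
	filename=/dev/zram1
	rw=write
	bs=64k
	size=25m
	direct=1
	refill_buffers
	buffer_compress_percentage=75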


