On 11. 11. 22 1:48, Sergey Senozhatsky wrote:
> On (22/11/10 15:29), Martin Doucha wrote:
> > I've tried to debug the issue and collected some interesting data (all
> > values come from a zram device with a 25M size limit and the zstd
> > compression algorithm):
> > - mm_stat values are correct after mkfs.vfat:
> > 65536 220 65536 26214400 65536 0 0 0
> > - mm_stat values stay correct after mount:
> > 65536 220 65536 26214400 65536 0 0 0
> > - the bug is triggered by filling the filesystem to capacity (using dd):
> > 4194304 0 0 26214400 327680 64 0 0
>
> Can you try using /dev/urandom for dd, not /dev/zero?
> Do you still see zeroes in sysfs output or some random values?

After 50 test runs on a kernel where the issue is confirmed, I could not
reproduce the failure while filling the device from /dev/urandom instead
of /dev/zero. The test reported a compression ratio of around 1.8-2.5,
which means the memory usage reported by mm_stat was about 10-13MB.
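
For reference, this is roughly how I read those numbers (a quick Python
sketch, not the exact calculation the test does; it assumes the device is
zram0 and simply defines the ratio as orig_data_size / mem_used_total):

#!/usr/bin/env python3
# Quick sketch: parse /sys/block/zram0/mm_stat and print each field plus a
# naive compression ratio. Field order as documented in
# Documentation/admin-guide/blockdev/zram.rst; zram0 is assumed.

FIELDS = (
    "orig_data_size", "compr_data_size", "mem_used_total", "mem_limit",
    "mem_used_max", "same_pages", "pages_compacted", "huge_pages",
)

with open("/sys/block/zram0/mm_stat") as f:
    values = dict(zip(FIELDS, map(int, f.read().split())))

for name, value in values.items():
    print(f"{name:16} {value}")

if values["mem_used_total"]:
    print(f"ratio (orig/used): {values['orig_data_size'] / values['mem_used_total']:.2f}")
else:
    # Matches the failing output quoted above: mem_used_total is 0 (with
    # same_pages at 64), so the ratio cannot be computed at all.
    print("mem_used_total is 0 -- ratio undefined")
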
Note that I had to disable the other filesystems in the test because
some of them kept failing with a compression ratio below 1.
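
For completeness, the fill step I'm describing boils down to something
like this (again only a sketch under assumptions: the mount point, the
zram0 device name and the chunk size are placeholders; the test itself
does this with dd):

#!/usr/bin/env python3
# Sketch of the fill step: write zeros or random data to a file on the
# zram-backed filesystem until ENOSPC, then dump mm_stat.
import errno
import os
import sys

MOUNT_POINT = "/mnt/zram"            # assumed vfat mount backed by /dev/zram0
MM_STAT = "/sys/block/zram0/mm_stat"
CHUNK = 1024 * 1024                  # 1 MiB per write, like dd bs=1M

# "zero" reproduces the failure for me, "urandom" does not.
source = sys.argv[1] if len(sys.argv) > 1 else "zero"

# Unbuffered output file so ENOSPC surfaces directly on write().
with open(os.path.join(MOUNT_POINT, "fill.bin"), "wb", buffering=0) as out:
    try:
        while True:
            out.write(b"\0" * CHUNK if source == "zero" else os.urandom(CHUNK))
    except OSError as e:
        if e.errno != errno.ENOSPC:
            raise

os.sync()                            # flush the page cache so the data reaches zram

with open(MM_STAT) as f:
    print(f.read().strip())
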
--
Martin Doucha mdoucha@xxxxxxx
QA Engineer for Software Maintenance
SUSE LINUX, s.r.o.
CORSO IIa
Krizikova 148/34
186 00 Prague 8
Czech Republic