Hi all,

Debian is using Loongson 2E machines as part of the buildd network. From
time to time we observe a corruption of the dirty memory accounting, as
can be seen below:

| # cat /proc/meminfo
| MemTotal:        1033008 kB
| MemFree:           81504 kB
| MemAvailable:    781552 kB
| Buffers:         133104 kB
| Cached:          660752 kB
| SwapCached:       13152 kB
| Active:          513680 kB
| Inactive:        348016 kB
| Active(anon):     20288 kB
| Inactive(anon):   48048 kB
| Active(file):    493392 kB
| Inactive(file):  299968 kB
| Unevictable:         96 kB
| Mlocked:             96 kB
| SwapTotal:      2097136 kB
| SwapFree:       2046624 kB
| Dirty:       18446744073709288640 kB
| Writeback:            0 kB
| AnonPages:        65296 kB
| Mapped:           16992 kB
| Shmem:              496 kB
| Slab:             79312 kB
| SReclaimable:     69872 kB
| SUnreclaim:        9440 kB
| KernelStack:       1664 kB
| PageTables:        2752 kB
| NFS_Unstable:         0 kB
| Bounce:               0 kB
| WritebackTmp:         0 kB
| CommitLimit:     2613632 kB
| Committed_AS:     178464 kB
| VmallocTotal: 1069547488 kB
| VmallocUsed:         656 kB
| VmallocChunk: 1069538528 kB
| AnonHugePages:        0 kB
| HugePages_Total:      0
| HugePages_Free:       0
| HugePages_Rsvd:       0
| HugePages_Surp:       0
| Hugepagesize:     32768 kB

The consequence is that all write accesses to disk become very slow,
while read accesses run at normal speed. My guess is that the kernel
keeps trying to flush dirty pages as a priority, but there are none to
flush.

It usually happens after 3 to 6 days of continuous work, but we haven't
found any pattern triggering the issue so far. We first thought it could
be a bad interaction with transparent hugepages, but even setting them
to "never" does not fix the issue.

Do you have any idea what the issue could be, or, if not, how we could
debug it?