Hi,

On (11/25/16 17:35), Minchan Kim wrote:
[..]
> Unfortunately, zram has used the per-cpu stream feature since v4.7.
> It aims to increase the cache hit ratio of the scratch buffer used
> for compression. The downside of that approach is that zram must
> allocate memory space for the compressed page in per-cpu context,
> which requires a strict gfp flag and can fail. If so, it retries the
> allocation out of per-cpu context, where it may get memory this time,
> compresses the data again, and copies it into the new space.
>
> In this scenario, zram assumes the data has not changed, but that is
> not true without stable-page support. So, if the data is changed
> under us, zram can cause a buffer overrun, because the second
> compression size can be bigger than the one we got in the previous
> trial, and blindly copying the bigger object into the smaller buffer
> is a buffer overrun. The overrun breaks zsmalloc's free object
> chaining, so the system crashes like above.

very interesting find! didn't see this coming.

> Unfortunately, reuse_swap_page should be atomic, so we cannot wait on
> writeback there; the approach in this patch simply returns false if
> we find that the page needs a stable page. Although it increases the
> memory footprint temporarily, it happens rarely and the memory should
> be reclaimed easily when it does. Also, it would be better than
> waiting for IO completion, which is on the critical path for
> application latency.

wondering - how many pages can it hold? we are in low memory, that's
why we failed to zsmalloc in the fast path, so how likely is this to
worsen memory pressure? just asking. in async zram the window between
zram_rw_page() and the actual write of a page is even bigger, isn't it?

we *probably* and *may be* can try to handle it in zram:

-- store the previous clen before re-compression
-- check if the new clen > saved_clen; if it is, we can't use the
   previously allocated handle and need to allocate a new one again.
   if it's less than or equal to the saved one, store the object
   (wasting some space, yes, but we are in low mem).
-- we may also be able to try harder in zsmalloc. once we detect that
   zsmalloc has failed, we can declare it an emergency and store
   objects of size X in higher classes (assuming that there is a
   bigger size class available with an allocated and unused object).

	-ss