On Tue, Jun 1, 2010 at 5:01 PM, Nitin Gupta <ngupta@xxxxxxxxxx> wrote:
> Currently, ramzswap devices (/dev/ramzswapX) can only
> be used as swap disks since it was hard-coded to consider
> only the first request in the bio vector.
>
> Now, we iterate over all the segments in an incoming
> bio, which allows us to handle all kinds of I/O requests.
>
> ramzswap devices can still handle only I/O requests that
> are PAGE_SIZE aligned and a multiple of PAGE_SIZE in size.
> To ensure that we always get such requests, we set the
> following request_queue attributes to PAGE_SIZE:
> - physical_block_size
> - logical_block_size
> - io_min
> - io_opt
>
> Note: physical and logical block sizes were already set
> equal to PAGE_SIZE, and that seems to be sufficient to get
> PAGE_SIZE aligned I/O.
>
> Since we are no longer limited to handling swap requests
> only, the next few patches rename ramzswap to zram. The
> devices will then be called /dev/zram{0, 1, 2, ...}
>
> Usage/Examples:
> 1) Use as /tmp storage:
> - mkfs.ext4 /dev/zram0
> - mount /dev/zram0 /tmp
>
> 2) Use as swap:
> - mkswap /dev/zram0
> - swapon /dev/zram0 -p 10  # give highest priority to zram0
>
> Performance:
>
> - I/O benchmark done with the 'dd' command. Details can be
> found here:
> http://code.google.com/p/compcache/wiki/zramperf
> Summary:
> - Maximum read speed (approx):
>   - ram disk: 1200 MB/sec
>   - zram disk: 600 MB/sec
> - Maximum write speed (approx):
>   - ram disk: 500 MB/sec
>   - zram disk: 160 MB/sec
>
> Issues:
>
> - Double caching: We can potentially waste memory by having
> two copies of a page -- one in the page cache (uncompressed)
> and a second in the device memory (compressed). However,
> during reclaim, clean page cache pages are quickly freed, so
> this does not seem to be a big problem.
>
> - Stale data: Not all filesystems support issuing 'discard'
> requests to underlying block devices. So, if such filesystems
> are used over zram devices, we can accumulate a lot of stale
> data in memory. Even for filesystems that do support discard
> (for example, ext4), we need to see how effective it is.
>
> - Scalability: There is only one (per-device) de/compression
> buffer and stats structure. This can lead to significant
> contention, especially when used for generic (non-swap)
> purposes.
>
> Signed-off-by: Nitin Gupta <ngupta@xxxxxxxxxx>

Reviewed-by: Minchan Kim <minchan.kim@xxxxxxxxx>

The mutex usage looks rather coarse-grained to me, but I decided to
enhance it with per-cpu stats after this series is merged.

Thanks for the nice feature, Nitin.

P.S.) Why don't you send this series to -mm? I don't know which
patches have to go through linux-next and which have to go through
-mmotm. I thought zram is somewhat related to memory management.
What are the criteria?

--
Kind regards,
Minchan Kim
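
To illustrate the per-segment handling the changelog describes, here is
a minimal sketch against the bio API of the 2.6.3x kernels; struct zram,
zram_read_page() and zram_write_page() are illustrative placeholders,
not the actual symbols from the patch:

	static int zram_make_request(struct request_queue *queue, struct bio *bio)
	{
		struct zram *zram = queue->queuedata;
		struct bio_vec *bvec;
		int i;
		/* 8 x 512-byte sectors per 4K page: bi_sector -> page index */
		u32 index = bio->bi_sector >> (PAGE_SHIFT - 9);

		/* walk every segment instead of only bio->bi_io_vec[0] */
		bio_for_each_segment(bvec, bio, i) {
			if (bio_data_dir(bio) == READ)
				zram_read_page(zram, bvec->bv_page, index);
			else
				zram_write_page(zram, bvec->bv_page, index);
			index++;
		}

		bio_endio(bio, 0);
		return 0;
	}

Because the queue limits force PAGE_SIZE aligned, page-multiple I/O,
each bvec in this loop should cover exactly one full page.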
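
The request_queue setup the changelog lists maps directly onto the
standard block-layer helpers; a sketch (the wrapper function name is
illustrative):

	static void zram_set_queue_limits(struct request_queue *queue)
	{
		/* smallest addressable unit of the device */
		blk_queue_logical_block_size(queue, PAGE_SIZE);
		/* smallest unit writable without read-modify-write */
		blk_queue_physical_block_size(queue, PAGE_SIZE);
		/* minimum and optimal I/O size hints for upper layers */
		blk_queue_io_min(queue, PAGE_SIZE);
		blk_queue_io_opt(queue, PAGE_SIZE);
	}

With these limits in place, the block layer should only submit
page-aligned, page-multiple requests, which is what the note in the
changelog relies on.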
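
As for the per-cpu stats idea mentioned in the reply, a minimal sketch
of what that could look like (the struct and function names are
hypothetical, and a real driver would embed the counters per device via
alloc_percpu() rather than use a single static instance):

	#include <linux/types.h>
	#include <linux/percpu.h>

	struct zram_cpu_stats {
		u64 num_reads;
		u64 num_writes;
	};

	/* one instance per CPU; updates need no mutex */
	static DEFINE_PER_CPU(struct zram_cpu_stats, zram_stats);

	static inline void zram_stat_inc_read(void)
	{
		/* lock-free update of this CPU's slot */
		this_cpu_inc(zram_stats.num_reads);
	}

	static u64 zram_stat_read_total(void)
	{
		u64 sum = 0;
		int cpu;

		/* readers tolerate slightly stale per-cpu values */
		for_each_possible_cpu(cpu)
			sum += per_cpu(zram_stats, cpu).num_reads;
		return sum;
	}

This keeps the hot path free of the coarse per-device mutex; only the
rare stats readout walks all CPUs.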