Hi Nitin,
On 10.8.2010 7.47, Nitin Gupta wrote:
On 08/10/2010 12:27 AM, Pekka Enberg wrote:
On Mon, Aug 9, 2010 at 8:26 PM, Nitin Gupta <ngupta@xxxxxxxxxx> wrote:
@@ -303,38 +307,41 @@ static int zram_write(struct zram *zram, struct bio *bio)
 		    zram_test_flag(zram, index, ZRAM_ZERO))
 			zram_free_page(zram, index);
 
-		mutex_lock(&zram->lock);
+		preempt_disable();
+		zbuffer = __get_cpu_var(compress_buffer);
+		zworkmem = __get_cpu_var(compress_workmem);
+		if (unlikely(!zbuffer || !zworkmem)) {
+			preempt_enable();
+			goto out;
+		}
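
(For context: the hunk relies on per-CPU buffers set up elsewhere in the patch. A minimal sketch of that setup follows; the buffer sizes and the init function name are assumptions for illustration, not taken from the patch.)

#include <linux/percpu.h>
#include <linux/slab.h>

#define WORKMEM_SIZE	(64 * 1024)	/* Nitin mentions 64K below */

static DEFINE_PER_CPU(void *, compress_buffer);
static DEFINE_PER_CPU(void *, compress_workmem);

static int zram_init_percpu_buffers(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/* Destination buffer: assumed two pages, since the
		 * worst-case compressed output can exceed PAGE_SIZE. */
		per_cpu(compress_buffer, cpu) =
			kzalloc(2 * PAGE_SIZE, GFP_KERNEL);
		per_cpu(compress_workmem, cpu) =
			kzalloc(WORKMEM_SIZE, GFP_KERNEL);
		/* NULL entries are tolerated here; the write path
		 * above checks for them and bails out. */
	}
	return 0;
}
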
The per-CPU buffer thing with this preempt_disable() trickery looks
like overkill to me. Most block device drivers seem to use
mempool_alloc() for this sort of thing. Is there some reason you
can't use that here?
Other block drivers are allocating relatively small structs using
mempool_alloc(). However, in the case of zram, these buffers are quite
large (compress_workmem alone is 64K!), so allocating them on every
write would probably be much slower than using a pre-allocated per-CPU
buffer.
The mempool API is precisely for that: using pre-allocated buffers
instead of allocating on every request. The preempt_disable() games
make the code complex and have the downside of higher scheduling
latencies, so why not give mempools a try?
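
Something along these lines (completely untested; the pool size,
function names, and GFP flags are just illustrative):

#include <linux/mempool.h>
#include <linux/slab.h>

#define WORKMEM_SIZE	(64 * 1024)	/* the 64K workspace you mention */

static mempool_t *workmem_pool;

static int zram_create_workmem_pool(void)
{
	/* Keep at least two elements pre-allocated; mempool_alloc()
	 * falls back to these when a fresh kmalloc() fails. */
	workmem_pool = mempool_create_kmalloc_pool(2, WORKMEM_SIZE);
	return workmem_pool ? 0 : -ENOMEM;
}

Then in zram_write():

	zworkmem = mempool_alloc(workmem_pool, GFP_NOIO);
	if (unlikely(!zworkmem))
		goto out;
	/* ... lzo1x_1_compress(..., zworkmem); ... */
	mempool_free(zworkmem, workmem_pool);

The compress_buffer could get a pool of its own the same way, and the
whole write path runs with preemption enabled.
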
Pekka