Re: [BUG] dm-writeboost: too big nr_rambuf_pool causes massive memory pressure or panic.

Hi, Satoru,

I totally agree with removing the BUG() at (2).

However, I don't agree with setting an upper limit on nr_rambuf_pool.
You are saying that any memory allocation failure during initialization should be avoided,
but that goes too far; removing that possibility from all kernel code is impossible in practice.
Carefully terminating the initialization on allocation failure is sufficient.
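
For example, a minimal sketch of what I mean (untested, against the error path
you quote below): just drop the BUG() and hand -ENOMEM back to the constructor,
so the table load fails cleanly instead of crashing the machine.

===============================================================================
bad_alloc_data:
	kmem_cache_destroy(wb->rambuf_cachep);
bad_cachep:
	kfree(wb->rambuf_pool);
	return r;	/* propagate -ENOMEM instead of calling BUG() */
===============================================================================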

For your guidance,
the amount of memory we need depends on how fast the SSD is.
In practice, a few MB of RAM buffer is sufficient when a commodity SSD is used
as the caching device. However, it could be higher with a much faster SSD,
or one may want to set a higher value in experiments.
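
To make the sizing concrete, here is a rough back-of-the-envelope calculation
derived from the kmem_cache_create() call you quote below (the nr_rambuf_pool
value of 8 is only an example I picked for illustration, not a default):

===============================================================================
#include <stdio.h>

#define SECTOR_SHIFT 9	/* matches the kernel's definition */

int main(void)
{
	unsigned int segment_size_order = 10;	/* maximum allowed by _args */
	unsigned int nr_rambuf_pool = 8;	/* example value only */

	/* each pool entry holds one buffer sized like the slab cache */
	unsigned long buf_bytes = 1UL << (segment_size_order + SECTOR_SHIFT);
	unsigned long total_bytes = buf_bytes * nr_rambuf_pool;

	/* prints "512 KiB per buffer, 4 MiB total" for these values */
	printf("%lu KiB per buffer, %lu MiB total\n",
	       buf_bytes >> 10, total_bytes >> 20);
	return 0;
}
===============================================================================

So at the largest segment_size_order, every extra pool entry costs another
512 KiB of kernel memory.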

- Akira

On Sat, 12 Jul 2014 18:06:58 +0900
Satoru Takeuchi <satoru.takeuchi@xxxxxxxxx> wrote:

> Hi,
> 
> Since the upper limit of the nr_rambuf_pool argument is UINT_MAX, a user can set
> too big a value for this argument.
> 
> drivers/md/dm-writeboost-target.c:
> ===============================================================================
> static int consume_optional_argv(struct wb_device *wb, struct dm_arg_set *as)
> {
>         int r = 0;
>         struct dm_target *ti = wb->ti;
> 
>         static struct dm_arg _args[] = {
>                 {0, 4, "Invalid optional argc"},
>                 {4, 10, "Invalid segment_size_order"},
>                 {1, UINT_MAX, "Invalid nr_rambuf_pool"},
>         };
> ===============================================================================
> 
> It can cause massive memory pressure on the kernel through a user's mistake
> and results in
> 
>  - -ENOMEM at (1) or somewhere else, and
>  - a panic at (2).
> 
> drivers/md/dm-writeboost-metadata.c:
> ===============================================================================
> ...
> static int init_rambuf_pool(struct wb_device *wb)
> {
>         int r = 0;
>         size_t i;
> 
>         wb->rambuf_pool = kmalloc(sizeof(struct rambuffer) * wb->nr_rambuf_pool,
>                                   GFP_KERNEL);
>         if (!wb->rambuf_pool)
>                 return -ENOMEM; /* ................. (1) */
> 
>         wb->rambuf_cachep = kmem_cache_create("dmwb_rambuf",
>                         1 << (wb->segment_size_order + SECTOR_SHIFT),
>                         1 << (wb->segment_size_order + SECTOR_SHIFT),
>                         SLAB_RED_ZONE, NULL);
>         if (!wb->rambuf_cachep) {
>                 r = -ENOMEM;
>                 goto bad_cachep;
>         }
> 
>         for (i = 0; i < wb->nr_rambuf_pool; i++) {
>                 size_t j;
>                 struct rambuffer *rambuf = wb->rambuf_pool + i;
> 
>                 rambuf->data = kmem_cache_alloc(wb->rambuf_cachep, GFP_KERNEL);
>                 if (!rambuf->data) {
>                         WBERR("Failed to allocate rambuf->data");
>                         for (j = 0; j < i; j++) {
>                                 rambuf = wb->rambuf_pool + j;
>                                 kmem_cache_free(wb->rambuf_cachep, rambuf->data);
>                         }
>                         r = -ENOMEM;
>                         goto bad_alloc_data;
>                 }
>                 check_buffer_alignment(rambuf->data);
>         }
> 
>         return r;
> 
> bad_alloc_data:
>         kmem_cache_destroy(wb->rambuf_cachep);
> bad_cachep:
>         kfree(wb->rambuf_pool);
>         BUG();                         /* ........(2) */
>         return r;
> }
> ...
> ===============================================================================
> 
> Please decide on an appropriate upper limit for nr_rambuf_pool (unfortunately
> I don't know what it should be), and remove the BUG() at (2) if it's unnecessary.
> 
> Thanks,
> Satoru
> 

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



