Re: [PATCH v2] blktrace: put bounds on BLKTRACESETUP buf_size and buf_nr

On 2020-06-10 18:33, Harshad Shirwadkar wrote:
> +static struct queue_sysfs_entry queue_blktrace_max_alloc_entry = {
> +	.attr = {.name = "blktrace_max_alloc", .mode = 0644 },
> +	.show = queue_blktrace_max_alloc_show,
> +	.store = queue_blktrace_max_alloc_store,
> +};

I may have sent you in the wrong direction with my comment about
introducing sysfs attributes to control the blktrace_max_alloc limit.
That comment was made before I fully understood the issue this patch
addresses. There is a concern with introducing any limit on the amount
of memory allocated for blktrace: any such limit may be seen as a
regression in the behavior of the BLKTRACESETUP ioctl. How about not
limiting the memory allocated for blktrace buffers at all?

If triggering an out-of-memory complaint is a big concern, how about
modifying relay_open() such that it becomes possible to pass a flag like
__GFP_ACCOUNT to the memory allocations relay_open() performs? From
memory-allocations.rst:

  * Untrusted allocations triggered from userspace should be a subject
    of kmem accounting and must have ``__GFP_ACCOUNT`` bit set. There
    is the handy ``GFP_KERNEL_ACCOUNT`` shortcut for ``GFP_KERNEL``
    allocations that should be accounted.
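
Just to make that idea concrete, here is a rough sketch. The extra
gfp_t argument is hypothetical (relay_open() does not take one today)
and the plumbing down to the sub-buffer allocations in kernel/relay.c
is left out:

struct rchan *relay_open(const char *base_filename, struct dentry *parent,
			 size_t subbuf_size, size_t n_subbufs,
			 struct rchan_callbacks *cb, void *private_data,
			 gfp_t gfp);

	/* do_blk_trace_setup() could then request kmem accounting: */
	bt->rchan = relay_open("trace", dir, buts->buf_size, buts->buf_nr,
			       &blk_relay_callbacks, bt, GFP_KERNEL_ACCOUNT);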

> @@ -322,6 +326,7 @@ struct queue_limits {
>  	unsigned long		bounce_pfn;
>  	unsigned long		seg_boundary_mask;
>  	unsigned long		virt_boundary_mask;
> +	unsigned long		blktrace_max_alloc;
>  
>  	unsigned int		max_hw_sectors;
>  	unsigned int		max_dev_sectors;

Is this the wrong structure to add a limit like blktrace_max_alloc to?
As far as I can see, struct queue_limits is only used for limits that
are affected by stacking block devices, while my understanding is that
the blktrace_max_alloc limit is not affected by device stacking. See
also blk_stack_limits().
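
If a limit is kept, one alternative (only a sketch, not a request for
this exact layout) would be to store it next to the existing blktrace
state in struct request_queue instead of in struct queue_limits:

#ifdef CONFIG_BLK_DEV_IO_TRACE
	struct blk_trace __rcu	*blk_trace;
	struct mutex		blk_trace_mutex;
	/* Sketch: per-queue cap on blktrace buffer memory; 0 = no limit. */
	unsigned long		blktrace_max_alloc;
#endif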

> diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
> index ea47f2084087..de028bdbb148 100644
> --- a/kernel/trace/blktrace.c
> +++ b/kernel/trace/blktrace.c
> @@ -477,11 +477,19 @@ static int do_blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
>  {
>  	struct blk_trace *bt = NULL;
>  	struct dentry *dir = NULL;
> +	u32 alloc_siz;
>  	int ret;
>  
>  	if (!buts->buf_size || !buts->buf_nr)
>  		return -EINVAL;
>  
> +	if (check_mul_overflow(buts->buf_size, buts->buf_nr, &alloc_siz))
> +		return -E2BIG;
> +
> +	if (q->limits.blktrace_max_alloc &&
> +	    alloc_siz > q->limits.blktrace_max_alloc)
> +		return -E2BIG;
> +
>  	if (!blk_debugfs_root)
>  		return -ENOENT;

Please add the check_mul_overflow() check to the code that performs the
multiplication, namely relay_open(), so that all relay_open() callers
are protected against multiplication overflows.
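
Something along these lines near the top of relay_open() (sketch only;
it needs <linux/overflow.h>) would cover blktrace and every other relay
client:

	size_t total_size;

	if (!(subbuf_size && n_subbufs))
		return NULL;
	/* Reject subbuf_size * n_subbufs products that overflow size_t. */
	if (check_mul_overflow(subbuf_size, n_subbufs, &total_size))
		return NULL;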

Thanks,

Bart.


