Re: [RFC PATCH 1/3] mm: teach mm by current context info to not do I/O during memory allocation

On Tue, Oct 16, 2012 at 9:09 PM, Minchan Kim <minchan@xxxxxxxxxx> wrote:
>
> Good point. You can check it in __zone_reclaim and change gfp_mask of scan_control
> because it's never hot path.
>
>>
>> So could you make sure it is safe to move the branch into
>> __alloc_pages_slowpath()?  If so, I will add the check into
>> gfp_to_alloc_flags().
>
> How about this?

That is quite a smart change. :-)

Considering that the other part (sched.h) of the patch needs an update, I
will merge your change into -v1 for further review, with your
Signed-off-by, if you have no objection.

>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d976957..b3607fa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2614,10 +2614,16 @@ retry_cpuset:
>         page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
>                         zonelist, high_zoneidx, alloc_flags,
>                         preferred_zone, migratetype);
> -       if (unlikely(!page))
> +       if (unlikely(!page)) {
> +               /*
> +                * Resume path can deadlock because block device
> +                * isn't active yet.
> +                */

Not only the resume path; the I/O transfer path or its error-handling path may deadlock too.

> +               if (unlikely(tsk_memalloc_no_io(current)))
> +                       gfp_mask &= ~GFP_IOFS;
>                 page = __alloc_pages_slowpath(gfp_mask, order,
>                                 zonelist, high_zoneidx, nodemask,
>                                 preferred_zone, migratetype);
> +       }
>
>         trace_mm_page_alloc(page, order, gfp_mask, migratetype);
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b5e45f4..6c2ccdd 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3290,6 +3290,16 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
>         };
>         unsigned long nr_slab_pages0, nr_slab_pages1;
>
> +       if (unlikely(tsk_memalloc_no_io(current))) {
> +               sc.gfp_mask &= ~GFP_IOFS;
> +               shrink.gfp_mask = sc.gfp_mask;
> +               /*
> +                * We allow reclaiming only clean pages.
> +                * This can affect the RECLAIM_SWAP and RECLAIM_WRITE
> +                * modes, but that is a really rare event and the
> +                * allocator can fall back to other zones.
> +                */
> +               sc.may_writepage = 0;
> +               sc.may_swap = 0;
> +       }
> +
>         cond_resched();
>         /*
>          * We need to be able to allocate from the reserves for RECLAIM_SWAP
>

Thanks,
--
Ming Lei

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

