Re: [PATCH v2] PCI: vmd: Add indirection layer to vmd irq lists


On Thu, 2019-10-31 at 16:11 -0700, Paul E. McKenney wrote:
> On Thu, Oct 31, 2019 at 07:08:53AM -0600, Jon Derrick wrote:
> > With CONFIG_MAXSMP and CONFIG_PROVE_LOCKING, the size of an srcu_struct can
> > grow quite large. In one compilation instance it produced a 74KiB data
> > structure. These are embedded in the vmd_irq_list struct, and an N=64 allocation
> > can exceed MAX_ORDER, violating reclaim rules.
> > 
> >   struct srcu_struct {
> >           struct srcu_node   node[521];                    /*     0 75024 */
> >           /* --- cacheline 1172 boundary (75008 bytes) was 16 bytes ago --- */
> >           struct srcu_node *         level[4];             /* 75024    32 */
> >           struct mutex       srcu_cb_mutex;                /* 75056   128 */
> >           /* --- cacheline 1174 boundary (75136 bytes) was 48 bytes ago --- */
> >           spinlock_t                 lock;                 /* 75184    56 */
> >           /* --- cacheline 1175 boundary (75200 bytes) was 40 bytes ago --- */
> >           struct mutex       srcu_gp_mutex;                /* 75240   128 */
> >           /* --- cacheline 1177 boundary (75328 bytes) was 40 bytes ago --- */
> >           unsigned int               srcu_idx;             /* 75368     4 */
> > 
> >           /* XXX 4 bytes hole, try to pack */
> > 
> >           long unsigned int          srcu_gp_seq;          /* 75376     8 */
> >           long unsigned int          srcu_gp_seq_needed;   /* 75384     8 */
> >           /* --- cacheline 1178 boundary (75392 bytes) --- */
> >           long unsigned int          srcu_gp_seq_needed_exp; /* 75392     8 */
> >           long unsigned int          srcu_last_gp_end;     /* 75400     8 */
> >           struct srcu_data *         sda;                  /* 75408     8 */
> >           long unsigned int          srcu_barrier_seq;     /* 75416     8 */
> >           struct mutex       srcu_barrier_mutex;           /* 75424   128 */
> >           /* --- cacheline 1180 boundary (75520 bytes) was 32 bytes ago --- */
> >           struct completion  srcu_barrier_completion;      /* 75552    80 */
> >           /* --- cacheline 1181 boundary (75584 bytes) was 48 bytes ago --- */
> >           atomic_t                   srcu_barrier_cpu_cnt; /* 75632     4 */
> > 
> >           /* XXX 4 bytes hole, try to pack */
> > 
> >           struct delayed_work work;                        /* 75640   152 */
> > 
> >           /* XXX last struct has 4 bytes of padding */
> > 
> >           /* --- cacheline 1184 boundary (75776 bytes) was 16 bytes ago --- */
> >           struct lockdep_map dep_map;                      /* 75792    32 */
> > 
> >           /* size: 75824, cachelines: 1185, members: 17 */
> >           /* sum members: 75816, holes: 2, sum holes: 8 */
> >           /* paddings: 1, sum paddings: 4 */
> >           /* last cacheline: 48 bytes */
> >   };
> > 
> > With N=64 VMD IRQ lists, this would allocate 4.6MiB in a single call. This
> > violates MAX_ORDER reclaim rules when PAGE_SIZE=4096 and
> > MAX_ORDER_NR_PAGES=1024, and invokes the following warning in mm/page_alloc.c:
> > 
> >   /*
> >    * There are several places where we assume that the order value is sane
> >    * so bail out early if the request is out of bound.
> >    */
> >   if (unlikely(order >= MAX_ORDER)) {
> >   	WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
> >   	return NULL;
> >   }
> > 
> > This patch changes the irq list array into an array of pointers to irq
> > lists to avoid allocation failures with greater msix counts.
> > 
> > This patch also reverts commit b31822277abcd7c83d1c1c0af876da9ccdf3b7d6.
> > The index_from_irqs() helper was added to calculate the irq list index
> > from the array of irqs, in order to shrink vmd_irq_list for performance.
> > 
> > Because the srcu_struct embedded in the vmd_irq_list struct varies in size
> > depending on a number of factors, the vmd_irq_list struct no longer
> > guarantees an optimal data structure size and granularity.
> > 
> > Signed-off-by: Jon Derrick <jonathan.derrick@xxxxxxxxx>
> > ---
> > Added Paul to make him aware of srcu_struct size with these options
> 
> There was some discussion of making the srcu_struct structure's ->node[]
> array be separately allocated, which would allow this array to be
> right-sized for the system in question.  However, I believe they ended up
> instead separately allocating the srcu_struct structure itself.
> 
> Without doing something like that, I am kind of stuck.  After all,
> at compile time, the kernel build system tells SRCU that it needs to
> be prepared to run on systems with thousands of CPUs.  Which requires
> substantial memory to keep track of all those CPUs.  Which are not
> present on most systems.
> 
> Thoughts?
> 
> 							Thanx, Paul
> 

Yes, I haven't seen an elegant solution other than making users aware of
the situation.
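
For reference, here is a rough sketch of the array-of-pointers layout the
patch description refers to. This is illustrative only, not the patch
itself: the struct members follow the driver, but the helper name
(vmd_alloc_irq_lists) and its error handling are simplified assumptions:

  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/srcu.h>

  struct vmd_irq_list {
          struct list_head        irq_list;
          struct srcu_struct      srcu;   /* ~74KiB with MAXSMP + PROVE_LOCKING */
          unsigned int            count;
  };

  struct vmd_dev {
          /* ...other members elided... */
          int                     msix_count;
          struct vmd_irq_list     **irqs; /* was: struct vmd_irq_list *irqs */
  };

  /*
   * Illustrative helper: one small allocation for the pointer array, then
   * one modest allocation per IRQ list, so no single request approaches
   * MAX_ORDER even though each entry embeds a large srcu_struct.
   */
  static int vmd_alloc_irq_lists(struct vmd_dev *vmd)
  {
          int i;

          vmd->irqs = kcalloc(vmd->msix_count, sizeof(*vmd->irqs), GFP_KERNEL);
          if (!vmd->irqs)
                  return -ENOMEM;

          for (i = 0; i < vmd->msix_count; i++) {
                  vmd->irqs[i] = kzalloc(sizeof(*vmd->irqs[i]), GFP_KERNEL);
                  if (!vmd->irqs[i])
                          return -ENOMEM; /* real code would unwind here */
          }

          return 0;
  }

This trades the single ~4.6MiB contiguous allocation for one small pointer
array plus msix_count per-entry allocations, each well below MAX_ORDER, at
the cost of an extra pointer dereference on each IRQ list access.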

Thanks for your input.



