Hi,

David Laight <David.Laight@xxxxxxxxxx> writes:
> From: Felipe Balbi
>> Bin Liu <binmlist@xxxxxxxxx> writes:
>> > Hi,
>> >
>> > On Fri, Mar 11, 2016 at 6:54 AM, Felipe Balbi
>> > <felipe.balbi@xxxxxxxxxxxxxxx> wrote:
>> >> previously we were using a maximum of 32 TRBs per
>> >> endpoint. With each TRB being 16 bytes long, we were
>> >> using 512 bytes of memory for each endpoint.
>> >>
>> >> However, SLAB/SLUB will always allocate PAGE_SIZE
>> >> chunks. In order to better utilize the memory we
>> >> allocate and to allow deeper queues for gadgets
>> >> which would benefit from it (g_ether comes to mind),
>> >> let's increase the maximum to 256 TRBs which rounds
>> >> up to 4096 bytes for each endpoint.
>> >
>> > Do we want to increase the same for the event ring buffers as
>> > well? They are allocated by dma_alloc_coherent(), which
>> > also hands out PAGE_SIZE chunks, right?
>>
>> I could, but that's much less important. Currently we have up to 2
>> events per endpoint, which is very, very unlikely to happen. Plus, in
>> that case there's no tangible benefit. $subject, however, I've been
>> using in some performance optimization I've been trying to achieve (not
>> ready for submission just yet).
>
> Is it worth considering using a single mapped page for the small rings
> of multiple devices?

this is a peripheral controller, we _are_ the device. Here we have one
TRB pool for each endpoint and there's no easy way to share one TRB
pool among multiple endpoints because of certain buffer rules when it
comes to starting several TRBs: they must be physically contiguous,
which I cannot guarantee if several endpoints allocate from the same
pool.

-- 
balbi