Re: [PATCH 4/5] mpt3sas: Handle RDPQ DMA allocation in same 4g region

Hi Christoph,

We will simplify the logic as below; please let us know your comments.

# Use one DMA pool for the RDPQs, removing the logic that uses a second,
size-aligned DMA pool. The requirement is that each RDPQ memory block's
starting and ending addresses have the same upper 32 bits.

1) At driver load, set the DMA mask to 64 bits and allocate memory for the RDPQs.

2) Check whether the allocated resources lie in the same 4GB range.

3) If #2 is true, continue with 64-bit DMA and go to #6.

4) If #2 is false, free all the resources from #1.

5) Set the DMA mask to 32 bits and reallocate the RDPQs.

6) Proceed with driver loading and the remaining allocations.
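
The steps above hinge on a same-4GB check. A minimal userspace sketch of that check (the helper name is illustrative, not the driver's):

```c
#include <stdint.h>

/* An RDPQ block of 'len' bytes starting at 'dma_addr' satisfies the
 * requirement when its first and last bytes share the same upper 32
 * address bits, i.e. the block does not straddle a 4GB boundary. */
static int same_4gb_region(uint64_t dma_addr, uint64_t len)
{
    return (dma_addr >> 32) == ((dma_addr + len - 1) >> 32);
}
```

If any RDPQ fails this check under the 64-bit mask (steps 2 and 4), everything is freed and allocation is retried under the 32-bit mask, where the check holds trivially because every address then has zero upper 32 bits.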

Thanks,
Suganath


On Wed, Mar 18, 2020 at 12:21 PM Suganath Prabu Subramani
<suganath-prabu.subramani@xxxxxxxxxxxx> wrote:
>
> Hi Christoph,
>
> We will simplify the logic as below, let us know your comments.
>
> #use one dma pool for RDPQ's, thus removes the logic of using second dma pool with align.
> The requirement is, RDPQ memory blocks starting & end address should have the same
> higher 32 bit address.
>
> 1) At driver load, set DMA Mask to 64 and allocate memory for RDPQ's.
>
> 2) Check if allocated resources are in the same 4GB range.
>
> 3) If #2 is true, continue with 64 bit DMA and go to #6
>
> 4) If #2 is false, then free all the resources from #1.
>
> 5) Set DMA mask to 32 and allocate RDPQ's.
>
> 6) Proceed with driver loading and other allocations.
>
> Thanks,
> Suganath
>
> On Thu, Mar 5, 2020 at 2:40 PM Sreekanth Reddy <sreekanth.reddy@xxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> Any update over my previous reply?
>>
>> Thanks,
>> Sreekanth
>>
>> On Thu, Feb 27, 2020 at 6:11 PM Sreekanth Reddy
>> <sreekanth.reddy@xxxxxxxxxxxx> wrote:
>> >
>> > On Wed, Feb 26, 2020 at 12:12 AM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>> > >
>> > > On Tue, Feb 11, 2020 at 05:18:12AM -0500, suganath-prabu.subramani@xxxxxxxxxxxx wrote:
>> > > > From: Suganath Prabu S <suganath-prabu.subramani@xxxxxxxxxxxx>
>> > > >
>> > > > For INVADER_SERIES, each set of 8 reply queues (0-7, 8-15, ...) and for
>> > > > VENTURA_SERIES each set of 16 reply queues (0-15, 16-31, ...) should
>> > > > be within a 4GB boundary. The driver uses the VENTURA_SERIES
>> > > > limitation to manage INVADER_SERIES as well, so the driver allocates
>> > > > DMA-able memory for the RDPQs accordingly.
>> > > >
>> > > > For the RDPQ buffers, the driver creates two separate PCI pools,
>> > > > "reply_post_free_dma_pool" and "reply_post_free_dma_pool_align".
>> > > > The driver first tries allocating from "reply_post_free_dma_pool";
>> > > > if the requested allocations fall within the same 4GB region, it
>> > > > proceeds with the next allocations. If not, it allocates from
>> > > > "reply_post_free_dma_pool_align", which is size-aligned; on success
>> > > > this always meets the same-4GB-region requirement.
>> > >
>> > > I don't fully understand the changelog here, and how having two
>> > > dma pools including one aligned is all that good.
>> >
>> > The requirement is that the driver needs a set of memory blocks of size
>> > ~106 KB, and each block must not cross a 4GB boundary (i.e. the
>> > starting and ending addresses of the block must have the same upper 32
>> > bits). So what we do first is allocate a block from the generic pool
>> > 'reply_post_free_dma_pool' and check whether it crosses the 4GB
>> > boundary; if it does, we free the block and try to allocate once again
>> > from the pool 'reply_post_free_dma_pool_align', whose alignment is set
>> > to the block size rounded up to a power of two, hoping that the second
>> > allocation will not cross the 4GB boundary.
>> >
>> > Is there any interface or API which makes sure that it always allocates
>> > a memory block of the required size that also satisfies the 4GB
>> > boundary condition?
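
The power-of-two alignment described above is actually a guarantee rather than a hope: 4GB is itself a power of two, so an aligned region whose size is a power of two no larger than 4GB divides the 4GB span evenly and can never straddle a 4GB boundary. A userspace sketch of that argument (helper names are illustrative):

```c
#include <stdint.h>

/* Round v up to the next power of two (illustrative helper). */
static uint64_t next_pow2(uint64_t v)
{
    uint64_t p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* True if [addr, addr + len) straddles a 4GB boundary, i.e. the
 * upper 32 bits differ between the first and last byte. */
static int crosses_4gb(uint64_t addr, uint64_t len)
{
    return (addr >> 32) != ((addr + len - 1) >> 32);
}
```

For a ~106 KB block, next_pow2() gives 128 KB, and any 128 KB-aligned placement keeps the block inside one 4GB window. Note also that dma_pool_create() takes a 'boundary' argument which prevents allocations from crossing the given power-of-two boundary; that may be a more direct way to express this constraint than a separately aligned pool.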
>> >
>> > >
>> > > Why not do a single dma_alloc_coherent and then subdivide it, given
>> > > that all the allocations from the DMA pool seem to happen at HBA
>> > > initialization time anyway, invalidating the need for the dynamic
>> > > nature of the dma pools.
>> >
>> > We need 8 blocks of ~106 KB each, so ~848 KB in total, and most of the
>> > time we may not get that much contiguous memory as a single block that
>> > also satisfies the 4GB boundary requirement. Hence the driver allocates
>> > each block individually.
>> >
>> > Regards,
>> > Sreekanth


