RE: [PATCH v3 2/2] ufs: ufs-exynos: set dma_alignment to 4095

> On 2020-12-22 10:21, Kiwoong Kim wrote:
> > Exynos requires a single scatterlist entry for transfers smaller than the
> > page size, i.e. 4KB. When a command is dispatched with more than one
> > scatterlist entry and a total size under 4KB, Exynos behaves as follows:
> >
> > Suppose a command to read something from the device is dispatched with
> > two scatterlist entries, named AAA and BBB. After dispatching, the
> > host builds two PRDT entries, and during transmission the device sends
> > just one DATA IN because the device doesn't care about host DMA.
> 
> If my understanding is correct, the above is the same for all hosts; only
> the part below is Exynos-specific behavior. Please correct me if I am wrong.
You're correct.

> 
> > The host then transfers
> > the whole data from the start address of the area named AAA.
> > In consequebnce, the area that follows AAA would be corrupted.
> 
> In consequence
> 
> >
> >     |<------------->|
> >     +-------+------------         +-------+
> >     +  AAA  + (corrupted)   ...   +  BBB  +
> >     +-------+------------         +-------+
> >
> 
> AFAIK, queue->dma_alignment is only used in the case of direct-io, i.e. in
> blk_rq_map_user/kern(), which are mainly used in IOCTL.
> If a request's buffer len and/or buffer start addr is not aligned with
> queue->dma_alignment, bio.c will make a bounce bio so that the request
> gets a new buffer which starts on a new page. After the bounce bio is
> ended, the data in the bounce bio will be copied to the initial buffer.
> 
> So in this fix, you are making sure that AAA and BBB are both mapped to one
> bounce bio and stay in one bi_vec, so when we do map_sg they come in one
> sglist, please correct me if I am wrong.
> 
> If my understanding is correct, what is the real use case here - why/how
> would a user start a request that generates more than one sglist, all of
> whose sizes are under 4KB? I am just curious.
> 
> Thanks,
> 
> Can Guo. 

Your understanding is almost exactly right.
I think various situations could lead to this, and they entirely depend on
the user program, for example when it uses several separate memory areas
to hold the data.

Thanks.
Kiwoong Kim
> 
> > Signed-off-by: Kiwoong Kim <kwmad.kim@xxxxxxxxxxx>
> > ---
> >  drivers/scsi/ufs/ufs-exynos.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/drivers/scsi/ufs/ufs-exynos.c b/drivers/scsi/ufs/ufs-exynos.c
> > index a8770ff..8635d9d 100644
> > --- a/drivers/scsi/ufs/ufs-exynos.c
> > +++ b/drivers/scsi/ufs/ufs-exynos.c
> > @@ -14,6 +14,7 @@
> >  #include <linux/of_address.h>
> >  #include <linux/phy/phy.h>
> >  #include <linux/platform_device.h>
> > +#include <linux/blkdev.h>
> >
> >  #include "ufshcd.h"
> >  #include "ufshcd-pltfrm.h"
> > @@ -1193,6 +1194,13 @@ static int exynos_ufs_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
> >  	return 0;
> >  }
> >
> > +static void exynos_ufs_slave_configure(struct scsi_device *sdev)
> > +{
> > +	struct request_queue *q = sdev->request_queue;
> > +
> > +	blk_queue_update_dma_alignment(q, PAGE_SIZE - 1);
> > +}
> > +
> >  static struct ufs_hba_variant_ops ufs_hba_exynos_ops = {
> >  	.name				= "exynos_ufs",
> >  	.init				= exynos_ufs_init,
> > @@ -1204,6 +1212,7 @@ static struct ufs_hba_variant_ops
> > ufs_hba_exynos_ops = {
> >  	.hibern8_notify			= exynos_ufs_hibern8_notify,
> >  	.suspend			= exynos_ufs_suspend,
> >  	.resume				= exynos_ufs_resume,
> > +	.slave_configure		= exynos_ufs_slave_configure,
> >  };
> >
> >  static int exynos_ufs_probe(struct platform_device *pdev)





