3.2.100-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Huacai Chen <chenhc@xxxxxxxxxx>

commit 90addc6b3c9cda0146fbd62a08e234c2b224a80c upstream.

In non-coherent DMA mode, the kernel uses cache flushing operations to
maintain I/O coherency, so scsi's block queue should be aligned to the
value returned by dma_get_cache_alignment().  Otherwise, if a DMA buffer
and a kernel structure share the same cache line, and if the kernel
structure has dirty data, cache_invalidate (no writeback) will cause
data corruption.

Signed-off-by: Huacai Chen <chenhc@xxxxxxxxxx>
[hch: rebased and updated the comment and changelog]
Signed-off-by: Christoph Hellwig <hch@xxxxxx>
Signed-off-by: Martin K. Petersen <martin.petersen@xxxxxxxxxx>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
 drivers/scsi/scsi_lib.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1688,11 +1688,13 @@ struct request_queue *__scsi_alloc_queue
 		q->limits.cluster = 0;
 
 	/*
-	 * set a reasonable default alignment on word boundaries: the
-	 * host and device may alter it using
-	 * blk_queue_update_dma_alignment() later.
+	 * Set a reasonable default alignment: The larger of 32-byte (dword),
+	 * which is a common minimum for HBAs, and the minimum DMA alignment,
+	 * which is set by the platform.
+	 *
+	 * Devices that require a bigger alignment can increase it later.
 	 */
-	blk_queue_dma_alignment(q, 0x03);
+	blk_queue_dma_alignment(q, max(4, dma_get_cache_alignment()) - 1);
 
 	return q;
 }
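
For readers checking the arithmetic: blk_queue_dma_alignment() takes a
mask, i.e. the required alignment minus one, so 0x03 means "4-byte
aligned".  dma_get_cache_alignment() typically returns 1 when the
platform defines no minimum DMA alignment, in which case
max(4, dma_get_cache_alignment()) - 1 keeps the old 0x03 mask; on a
non-coherent platform with, say, 32-byte cache lines it widens to 0x1f.
The standalone C sketch below only illustrates that computation; the
cache-alignment values in it are assumed examples, not taken from any
particular machine.

/* Standalone illustration of the mask computation used in the patch.
 * The cache_align inputs are assumed example values; in the kernel the
 * value would come from dma_get_cache_alignment(). */
#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

static unsigned int queue_dma_mask(unsigned int cache_align)
{
	/* Same expression as the patch: at least 4 bytes (mask 0x03), or
	 * the platform's DMA cache alignment if that is larger. */
	return MAX(4u, cache_align) - 1;
}

int main(void)
{
	/* 1: coherent platform; 32, 128: typical non-coherent cache lines */
	unsigned int examples[] = { 1, 32, 128 };
	unsigned int i;

	for (i = 0; i < sizeof(examples) / sizeof(examples[0]); i++)
		printf("cache alignment %3u -> dma alignment mask 0x%02x\n",
		       examples[i], queue_dma_mask(examples[i]));
	return 0;
}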