Re: [PATCH 04/23] scsi: initialize scsi midlayer limits before allocating the queue

Michael Ellerman <mpe@xxxxxxxxxxxxxx> writes:
> Christoph Hellwig <hch@xxxxxx> writes:
>> On Fri, May 31, 2024 at 12:28:21AM +1000, Michael Ellerman wrote:
>>> No that's wrong. The actual hardware page size is 4K, but
>>> CONFIG_PAGE_SIZE and PAGE_SHIFT etc. is 64K.
>>> 
>>> So at least for this user the driver used to work with 64K pages, and
>>> now doesn't.
>>
>> Which suggests that the communicated max_segment_size is wrong, and
>> previously we were saved by the block layer increasing it to
>> PAGE_SIZE after a warning.  Should we just increment it to 64k?
>
> It looks like that user actually only has the CDROM hanging off
> pata_macio, so it's possible it has been broken previously and they
> didn't notice. I'll see if they can confirm the CDROM has been working
> up until now.
>
> I can test the CDROM on my G5 next week.

I can confirm that the driver does work with 64K pages prior to the
recent changes. I'm able to boot and read CDs with no errors.

However, AFAICS that's because the driver splits oversized segments in
pata_macio_qc_prep():

static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
{
       ...
       for_each_sg(qc->sg, sg, qc->n_elem, si) {
              u32 addr, sg_len, len;
              ...
              addr = (u32) sg_dma_address(sg);
              sg_len = sg_dma_len(sg);

              while (sg_len) {
                     ...
                     len = (sg_len < MAX_DBDMA_SEG) ? sg_len : MAX_DBDMA_SEG;
                     table->command = cpu_to_le16(write ? OUTPUT_MORE: INPUT_MORE);
                     table->req_count = cpu_to_le16(len);
                     ...
                     addr += len;
                     sg_len -= len;
                     ++table;
              }
       }
       ...
}
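
For illustration, with MAX_DBDMA_SEG = 0xff00 a single 64K DMA segment comes
out as two DBDMA descriptors of 0xff00 and 0x0100 bytes. A stand-alone sketch
of that arithmetic (not driver code):

#include <stdio.h>

int main(void)
{
	unsigned int sg_len = 0x10000;	/* one 64K DMA segment */
	unsigned int max_seg = 0xff00;	/* MAX_DBDMA_SEG */

	while (sg_len) {
		unsigned int len = (sg_len < max_seg) ? sg_len : max_seg;
		printf("descriptor: 0x%x bytes\n", len);
		sg_len -= len;
	}
	/* prints 0xff00, then 0x100 */
	return 0;
}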

 
If I increase MAX_DBDMA_SEG from 0xff00 to 64K I see I/O errors at boot:

  [   24.989755] sr 4:0:0:0: [sr0] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=6s
  [   25.007310] sr 4:0:0:0: [sr0] tag#0 Sense Key : Medium Error [current]
  [   25.020502] sr 4:0:0:0: [sr0] tag#0 ASC=0x10 <<vendor>>ASCQ=0x90
  [   25.032655] sr 4:0:0:0: [sr0] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 20 00
  [   25.047232] I/O error, dev sr0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
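
My guess (unverified) as to why: req_count in struct dbdma_cmd is a 16-bit
field, so a 64K length truncates to zero when stored with cpu_to_le16(). A
trivial demonstration of the wrap:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* req_count in struct dbdma_cmd is 16 bits wide, so a 64K
	 * (0x10000) transfer length wraps to zero when stored. */
	uint16_t req_count = (uint16_t)0x10000UL;
	printf("req_count = 0x%x\n", req_count);	/* prints 0x0 */
	return 0;
}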


On the other hand, increasing max_segment_size to 64K while leaving MAX_DBDMA_SEG
at 0xff00 seems to work fine. And that's effectively what's been happening on
existing kernels until now.
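
For reference, the old behaviour Christoph mentioned is the clamp that used to
live in blk_queue_max_segment_size() (quoted roughly from memory of pre-v6.9
block/blk-settings.c, so treat it as a sketch):

void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{
	if (max_size < PAGE_SIZE) {
		max_size = PAGE_SIZE;
		printk(KERN_INFO "%s: set to minimum %u\n",
		       __func__, max_size);
	}
	...
	q->limits.max_segment_size = max_size;
}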

The only question is whether that violates some assumption elsewhere in the
SCSI layer.

Anyway patch below that works for me on v6.10-rc2.

cheers


diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
index 817838e2f70e..3cb455a32d92 100644
--- a/drivers/ata/pata_macio.c
+++ b/drivers/ata/pata_macio.c
@@ -915,10 +915,13 @@ static const struct scsi_host_template pata_macio_sht = {
 	.sg_tablesize		= MAX_DCMDS,
 	/* We may not need that strict one */
 	.dma_boundary		= ATA_DMA_BOUNDARY,
-	/* Not sure what the real max is but we know it's less than 64K, let's
-	 * use 64K minus 256
+	/*
+	 * The SCSI core requires the segment size to cover at least a page, so
+	 * for 64K page size kernels this must be at least 64K. However the
+	 * hardware can't handle 64K, so pata_macio_qc_prep() will split large
+	 * requests.
 	 */
-	.max_segment_size	= MAX_DBDMA_SEG,
+	.max_segment_size	= SZ_64K,
 	.device_configure	= pata_macio_device_configure,
 	.sdev_groups		= ata_common_sdev_groups,
 	.can_queue		= ATA_DEF_QUEUE,



