Hi !

While doing a pmac libata PATA driver, I've been looking at some of the info we have from Apple (mostly the Darwin code) and some interesting stuff pops out. The main one is that on their latest cell, they do the following "workarounds", which I never implemented in drivers/ide/ppc/pmac.c but which I'd like to implement in the libata driver, unless you believe they are unnecessary:

 a) For any ATAPI DMA, if the transfer size is not a multiple of 16 bytes, switch to PIO for this command.

 b) Double buffer all ATAPI DMA reads. They allocate a 128K DMA buffer and limit all requests to 128K, then route all incoming DMAs to that buffer and copy back to the original buffer on completion.

The comments in the code seem to indicate this has to do with "alignment restrictions"; unfortunately, I have no more info, so I don't know what the actual underlying HW issues are.

I could use some tips as to how to implement these in the libata driver.

For a), I'm not sure whether to override qc_prep or qc_issue, and what the consequences of changing tf.protocol there would be. Among other things, qc_prep() runs after dma_map_sg() has been performed, which is a concern. I suppose I can just ignore the DMA mapping; that would only be a performance issue. Do you see anything else that could choke on a change of protocol at that stage?

For b), I should be able to completely hide that within my qc_prep and completion, by silently using a different DMA target. I suppose I can use tf.command == ATA_CMD_ID_ATAPI to differentiate from ATA commands. How do I set the max request size with the block layer, though, from a libata sub-driver?

Thanks,

Cheers,
Ben.
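
For reference, a minimal sketch of the qc_issue override described in a), assuming the driver falls back on the stock ata_qc_issue_prot() for the actual issue and that clearing ATAPI_PKT_DMA in the features field is enough to tell the device the packet command won't use DMA. The function name pmac_qc_issue is made up for illustration, and the already-performed DMA mapping is simply left unused for the demoted command:

#include <linux/libata.h>

/* Sketch only: demote to PIO any ATAPI DMA command whose transfer
 * length is not a multiple of 16 bytes, then hand the qc to the
 * generic issue path.
 */
static unsigned int pmac_qc_issue(struct ata_queued_cmd *qc)
{
	if (qc->tf.protocol == ATA_PROT_ATAPI_DMA && (qc->nbytes & 0xf)) {
		qc->tf.protocol = ATA_PROT_ATAPI;
		qc->tf.feature &= ~ATAPI_PKT_DMA;
	}
	return ata_qc_issue_prot(qc);
}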
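
And a rough sketch of the decision and copy-back side of b). pmac_priv, bounce_buf and the helper names are invented for illustration, the sg walk uses the generic scatterlist helpers, and the DBDMA descriptors built in qc_prep would simply be pointed at bounce_dma instead of the mapped sg list when the predicate is true:

#include <linux/libata.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

#define PMAC_BOUNCE_SIZE	(128 * 1024)

struct pmac_priv {
	void		*bounce_buf;	/* 128K buffer from dma_alloc_coherent() */
	dma_addr_t	bounce_dma;
};

/* Only ATAPI DMA reads get double-buffered; IDENTIFY PACKET DEVICE and
 * plain ATA commands go straight to the original sg list.
 */
static int pmac_qc_uses_bounce(struct ata_queued_cmd *qc)
{
	return qc->tf.protocol == ATA_PROT_ATAPI_DMA &&
	       qc->tf.command != ATA_CMD_ID_ATAPI &&
	       qc->dma_dir == DMA_FROM_DEVICE;
}

/* On completion, scatter the bounced data back into the request's
 * original buffers (requests are capped at 128K so the bounce buffer
 * always covers them; a real implementation would also clamp to the
 * length actually transferred).
 */
static void pmac_qc_copy_back(struct ata_queued_cmd *qc)
{
	struct pmac_priv *priv = qc->ap->private_data;
	struct scatterlist *sg;
	unsigned int i, off = 0;

	for_each_sg(qc->sg, sg, qc->n_elem, i) {
		memcpy(sg_virt(sg), priv->bounce_buf + off, sg->length);
		off += sg->length;
	}
}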