Jordan Crouse wrote:

> This patch is not specific to the AU1200 SD driver, but that's what
> we used to debug and verify this, so that's why it is applied against
> the linux-mips tree. Pierre, I'm sending this to you too, because I
> thought you may be interested.

Much appreciated. :)

> Large SD cards (>=2GB) report a physical block size greater than 512
> bytes (2GB reports 1024, and 4GB reports 2048). However, a sample of
> different brands of USB-attached SD readers has shown that the logical
> block size is still forced to 512 bytes.
>
> The original mmc_block code was setting the block size to whatever the
> card was reporting, thereby causing much pain and suffering when using
> a card initialized elsewhere (bad partition tables, invalid FAT tables,
> etc.).
>
> This patch forces the block size to be 512 bytes and adjusts the
> capacity accordingly. With this you should be able to happily use very
> large cards interchangeably between platforms. At least, it has worked
> for us.
>
> @@ -373,7 +383,7 @@ mmc_blk_set_blksize(struct mmc_blk_data
>
>  	mmc_card_claim_host(card);
>  	cmd.opcode = MMC_SET_BLOCKLEN;
> -	cmd.arg = 1 << card->csd.read_blkbits;
> +	cmd.arg = 1 << ((card->csd.read_blkbits > 9) ? 9 : card->csd.read_blkbits);
>  	cmd.flags = MMC_RSP_R1;
>  	err = mmc_wait_for_cmd(card->host, &cmd, 5);
>  	mmc_card_release_host(card);

This will not work. Some cards do not accept block sizes larger than the
one they've specified. This issue has been discussed on the ARM kernel
mailing list, and Russell has begun producing patches to resolve it.

To solve the issue you're seeing, we should lie to the block layer, not
to the card. That, in turn, will cause problems when the block layer
issues requests that cannot be mapped to an integer number of card
blocks.

The issue is more complex than your patch suggests, and I do not know
enough about the block layer to propose a way out.

Rgds

Pierre