>>> Currently the RPMB partition spawns a separate block device
>>> named /dev/mmcblkNrpmb for each device with an RPMB partition,
>>> including the creation of a block queue with its own kernel
>>> thread and all overhead associated with this. On the Ux500
>>> HREFv60 platform, for example, the two eMMCs mean that two
>>> block queues with separate threads are created for no use
>>> whatsoever.
>>
>> Yikes! What an amazingly stupid design decision.
>
> Unfortunately, there is more. :-)
>
> We are actually registering at least three more block devices per eMMC
> card (two boot partitions, and one general purpose partition). Except
> for the main partition, of course.

A little correction: there are four general purpose partitions, not one.

> The difference compared to rpmb above is that those are actually
> general read/write partitions.
>
> So all these partitions are on the same eMMC card, but they are I/O
> scheduled separately because they are separate block devices. Yeah,
> starvation, latency, etc. - all bad things come with it. :-)

Actually, the worst issue is that the eMMC is single-headed, and all the
security devices and VMs have to go via the host for their data.

> My point is, this is only the first step in re-working and fixing this
> - and we really appreciate your review!
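
For reference, the set of block devices the MMC core ends up registering
for a card can be inspected from user space by listing /sys/block. Below
is a minimal, illustrative sketch (not part of the patch under
discussion) that just prints the mmcblk* entries; on a card with boot,
RPMB and general purpose partitions enabled this typically shows the
main device plus mmcblk0boot0, mmcblk0boot1, mmcblk0rpmb and
mmcblk0gp0..mmcblk0gp3, each backed by its own request queue:

/* List the block devices exposed for MMC/eMMC cards. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	DIR *dir = opendir("/sys/block");
	struct dirent *de;

	if (!dir) {
		perror("opendir /sys/block");
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		/* Every mmcblk* entry here is a separate gendisk. */
		if (strncmp(de->d_name, "mmcblk", 6) == 0)
			printf("%s\n", de->d_name);
	}

	closedir(dir);
	return 0;
}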