Hello!

I am trying to implement a special use case. On our platform the CPU is notified with an interrupt if the power fails. After that the system has ~100 ms to do some tear-down tasks. One important step is to store some information (~512 KB) on non-volatile storage. eMMC seems to be a good choice because of its high throughput, but the drawbacks are the non-deterministic latencies caused by the eMMC itself, the complex sdhci/mmc host driver, the block layer, ...

Because time is critical, I tried to implement this in kernel space.

Most likely the eMMC hardware latencies can be reduced by executing a block discard during boot-up. Then, in case of a power fail, no sectors have to be erased, because they are already pre-erased.

I looked at the mmc_test.c kernel module to execute raw mmc block writes. For 512 KB the write takes ~60 ms => looks promising.

But the biggest concern is the influence of other tasks. The eMMC has multiple partitions, so if I start a benchmark on another partition, the write can also take much longer, because the mmc host driver has a lot of requests in its queue. I measured up to 6 s. This is caused by the blocking call to mmc_claim_host, which waits for the queues to drain. So I looked for a way to stop all queued mmc requests on the mmc host before calling claim_host.

There are some pstore implementations with a similar use case: store data on a block device before a panic reboot. pstore_blk is mainline and can be configured in two ways:

1. Call block->panic_write if the block driver supports it. (Currently only some mtd drivers and no eMMC?)
2. Use "best_effort" and write to /dev/mmcblk via kernel_write() => possible with eMMC
https://github.com/torvalds/linux/commit/f8feafeaeedbf0a324c373c5fa29a2098a69c458#diff-d3fb8bf94c21d538c62beccd243ca6266b4dec19c6d60a581aa6d71ba9874a53

This second option is also heavily influenced by other I/O, because it goes through the same block layer and I/O scheduler as userspace, right?

But there are also some more low-level patch sets:
https://patchwork.kernel.org/project/linux-mmc/patch/1425015219-19849-1-git-send-email-jh80.chung@xxxxxxxxxxx/
https://patchwork.kernel.org/project/linux-mmc/patch/20201207115753.21728-2-bbudiredla@xxxxxxxxxxx/#23849559

And even some which tap into the sdhci driver to abort all ongoing eMMC requests:
https://lkml.org/lkml/2012/10/23/335

Because I want to keep it simple, I started with some more basic approaches:

1. Call _mmc_blk_suspend(mmc_card); before mmc_claim_host() => also slow
2. Call blk_mq_freeze_queue(mmc_card->queue.queue) before mmc_claim_host() => also slow
3. Call mmc_blk_hw_queue_stop(mmc_card->queue.queue); before mmc_claim_host() => better

After additionally setting the I/O scheduler to "none" and reducing max_sectors_kb, the write takes up to ~100 ms with a parallel userspace write test running. Without the write test it only takes ~60 ms, so the scheduling of userspace I/O still has some influence.

Do you know of any way to forcefully stop all pending mmc host requests to improve this further? I think mmc_blk_hw_queue_stop still waits for the pending requests to finish. Or is there a way to queue my mmc block requests ahead of all other pending requests?

Thanks in advance!
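
For reference, here is a simplified sketch of the kind of raw write path I mean, modeled on mmc_test.c. The function name powerfail_write is only for illustration, error handling is trimmed, the card is assumed to be block-addressed (high capacity), the buffer is one physically contiguous chunk, and blk_mq_stop_hw_queues() stands in for whatever queue-stop variant is used. The file is assumed to live in drivers/mmc/core/ next to mmc_test.c, so the core-private headers are available:

#include <linux/mmc/core.h>
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/scatterlist.h>
#include <linux/blk-mq.h>

#include "core.h"
#include "card.h"
#include "host.h"

/*
 * Write 'len' bytes (multiple of 512) from 'buf' to sector 'dev_addr'.
 * 'q' is the request queue of the mmc block device on the same card.
 */
static int powerfail_write(struct mmc_card *card, struct request_queue *q,
			   void *buf, unsigned int len, unsigned int dev_addr)
{
	struct mmc_request mrq = {};
	struct mmc_command cmd = {};
	struct mmc_command stop = {};
	struct mmc_data data = {};
	struct scatterlist sg;

	/* Keep the block layer from dispatching new requests to the host. */
	blk_mq_stop_hw_queues(q);

	/* Still blocks until the host has finished what it is processing. */
	mmc_claim_host(card->host);

	sg_init_one(&sg, buf, len);

	cmd.opcode = MMC_WRITE_MULTIPLE_BLOCK;
	cmd.arg = dev_addr;	/* sector address on block-addressed cards */
	cmd.flags = MMC_RSP_R1 | MMC_CMD_ADTC;

	stop.opcode = MMC_STOP_TRANSMISSION;
	stop.arg = 0;
	stop.flags = MMC_RSP_R1B | MMC_CMD_AC;

	data.blksz = 512;
	data.blocks = len / 512;
	data.flags = MMC_DATA_WRITE;
	data.sg = &sg;
	data.sg_len = 1;
	mmc_set_data_timeout(&data, card);

	mrq.cmd = &cmd;
	mrq.data = &data;
	mrq.stop = &stop;

	mmc_wait_for_req(card->host, &mrq);

	mmc_release_host(card->host);

	return cmd.error ? cmd.error : data.error;
}

Even with the hw queues stopped like this, mmc_claim_host() still has to wait for whatever the host is already processing, which is exactly the latency I would like to get rid of.

And for the pre-erase at boot, what I have in mind is roughly the following (same includes as above; powerfail_pre_erase is again only an illustrative name, and the reserved range is assumed to be aligned to the card's trim/erase granularity):

/*
 * Pre-erase the reserved sector range so the power-fail write hits
 * already-erased sectors. 'from' and 'nr' are in sectors.
 */
static int powerfail_pre_erase(struct mmc_card *card, unsigned int from,
			       unsigned int nr)
{
	int err;

	mmc_claim_host(card->host);
	if (mmc_can_trim(card))
		err = mmc_erase(card, from, nr, MMC_TRIM_ARG);
	else
		err = mmc_erase(card, from, nr, MMC_ERASE_ARG);
	mmc_release_host(card->host);

	return err;
}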