Hello,

I found that if an mdadm RAID1 array is created with a write-mostly device,
each discard request to the array ends up being split into 1MB pieces. My SSD
is capable of processing at most 200 of these per second, so as a result
discarding a 512GB array takes 42 minutes. The kernel is 6.1.9.

In both cases:

# grep . /sys/block/md0/queue/discard_*
/sys/block/md0/queue/discard_granularity:512
/sys/block/md0/queue/discard_max_bytes:2199023255040
/sys/block/md0/queue/discard_max_hw_bytes:2199023255040
/sys/block/md0/queue/discard_zeroes_data:0

Steps to reproduce:

# wipefs -a /dev/nvme1n1p2 /dev/nvme2n1p2
/dev/nvme1n1p2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/nvme2n1p2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
# mdadm --create --level 1 -n2 --assume-clean --metadata=1.2 /dev/md0 /dev/nvme1n1p2 /dev/nvme2n1p2
mdadm: array /dev/md0 started.
# mdadm --grow --array-size 10G /dev/md0
# (so that the test doesn't take forever)
# time blkdiscard -f /dev/md0

real    0m0.335s
user    0m0.000s
sys     0m0.003s

# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
# wipefs -a /dev/nvme1n1p2 /dev/nvme2n1p2
/dev/nvme1n1p2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/nvme2n1p2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
# mdadm --create --level 1 -n2 --assume-clean --metadata=1.2 /dev/md0 /dev/nvme1n1p2 --write-mostly /dev/nvme2n1p2
mdadm: array /dev/md0 started.
# mdadm --grow --array-size 10G /dev/md0
# time blkdiscard -f /dev/md0

real    0m48.744s
user    0m0.000s
sys     0m0.019s

--
With respect,
Roman
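P.S. A rough sanity check of the numbers above, under the stated assumptions
(discards split into 1MB pieces, the SSD processing ~200 of them per second) —
these are my own back-of-envelope figures, not output from any tool:

```shell
# 512 GiB array -> number of 1 MiB discard requests -> wall time in minutes
echo "512 GiB: $(( 512 * 1024 / 200 / 60 )) min"   # ~43 min, in line with the ~42 min observed

# 10 GiB test array -> expected wall time in seconds
echo "10 GiB: $(( 10 * 1024 / 200 )) s"            # ~51 s, close to the measured 48.744 s
```

So the measured 48.7 s for the 10G array is consistent with the 1MB-split /
200-requests-per-second explanation.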