Martin,

Sorry about the delay. I don't agree with you about the T10 PI reference
tag handling in the current code. t10_pi_generate() works with virtual
block numbers and virtual reference tags. The virtual tag is mapped to
the real tag later, in:

static void t10_pi_type1_prepare(struct request *rq)
...
	if (be32_to_cpu(pi->ref_tag) == virt)
		pi->ref_tag = cpu_to_be32(ref_tag);
...

So we just need these two functions to agree in order to get a correct
mapping and a correct real reference tag. Once t10_pi_generate() shifts
the "virtual" ref tag by 4, bio_integrity_advance() is happy; and
t10_pi_type1_prepare() is happy too, but it needs to apply the same
shift of 4 as the generate function.

This patch was tested with software RAID (RAID 1 / RAID 6) over NVMe
devices with a 4k block size. In the Lustre case, bio integrity prepare
is called before bio_submit, so the integrity data is split before being
sent to the NVMe devices. Without the patch this caused T10 write errors
for every write over 4k; with the patch there are no errors.

Alex

On 20/12/2021, 19:29, "Martin K. Petersen" <martin.petersen@xxxxxxxxxx> wrote:

    Alexey,

    > t10_pi_generate / t10_pi_type1_prepare just increment by "1" per
    > integrity interval, which is 4k in my case, so any
    > bio_integrity_advance call moves the iterator outside of the
    > generated sequence and t10_pi_type1_prepare can't find a good
    > virtual sector for the mapping.
    > Changing the increment from "1" to the real integrity interval size
    > solves the problem completely.

    By definition the T10 PI reference tag is incremented by one per
    interval (typically the logical block size). If you implement it with
    a different value than one, it is no longer valid protection
    information.

    Seems like the splitting logic is broken somehow, although I haven't
    seen any failures with 4K on SCSI. What does your storage stack look
    like?

    -- 
    Martin K. Petersen	Oracle Linux Engineering