On Sat, 4 Jan 2020, Patrick Dung wrote:

> Thanks for the reply. After performing additional testing with an SSD,
> I have more questions.
>
> First, about the additional testing with the SSD:
> I tested with an SSD (in a Linux software RAID level 10 setup). The
> results show that using dm-integrity is faster than using XFS directly.
> With dm-integrity, fio shows lots of I/O merges by the scheduler.
> Please find the results in the attachment.
>
> Finally, please find my questions below:
>
> 1) So fsync would report completion only after the dm-integrity
> journal is written to the actual back-end storage (hard drive)?

Yes.

> 2) To my understanding, when using dm-integrity in journal mode, data
> has to be written to the storage device twice (once to the
> dm-integrity journal, and once to the actual data location). In the
> fio test, the data should be random and sustained for 60 seconds, yet
> dm-integrity in journal mode is still faster.
>
> Thanks,
> Patrick

With ioengine=sync, fio sends one I/O, waits for it to finish, sends
another I/O, waits for it to finish, and so on.

With dm-integrity, I/Os are first written to the journal (which is held
in memory; no disk I/O is done yet), and when fio issues the sync(),
fsync() or fdatasync() syscall, the journal is written to disk. After
the journal is flushed, the blocks are written concurrently to their
final disk locations. The SSD has better performance for concurrent
writes than for block-by-block writes, which is why you see a
performance improvement with dm-integrity.

Mikulas

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel