Re: Discussion: is it time to remove dioread_nolock?

On 12/29/19 8:33 PM, Xiaoguang Wang wrote:
hi,

With the inclusion of Ritesh's inode lock scalability patches [1], the
traditional performance reason for dioread_nolock, namely removing the
need to take an exclusive lock for Direct I/O read operations, has been
removed.

[1] https://lore.kernel.org/r/20191212055557.11151-1-riteshh@xxxxxxxxxxxxx

So... is it time to remove the code which supports dioread_nolock?
Doing so would simplify the code base, and reduce the test matrix.
This would also make it easier to restructure the write path when
allocating blocks so that the extent tree is updated after writing out
the data blocks, by clearing away the underbrush of dioread_nolock
first.

If we do this, we'd keep the dioread_nolock mount option for backwards
compatibility, but it would simply be accepted as a no-op.
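For what it's worth, ext4 already has a convention for retired options:
the option string stays in the token table but maps to Opt_removed, so
the parser accepts it and only logs a warning. A minimal sketch of what
that could look like for dioread_nolock (paraphrased in the style of
fs/ext4/super.c, not an actual patch):

	/*
	 * Sketch only, not an actual patch: keep the option string in the
	 * match table but map it to Opt_removed, which ext4 already handles
	 * by warning and otherwise ignoring the option.
	 */
	static const match_table_t tokens = {
		/* ... existing entries ... */
		{Opt_removed, "dioread_nolock"},	/* accepted, does nothing */
		{Opt_removed, "dioread_lock"},
		/* ... */
	};

	/* handle_mount_opt() already does roughly this for Opt_removed: */
	if (token == Opt_removed) {
		ext4_msg(sb, KERN_WARNING, "Ignoring removed %s option", opt);
		return 1;
	}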

Any objections before I look into ripping out dioread_nolock?

The one possible concern that I considered was for Alibaba, which was
doing something interesting with dioread_nolock plus nodelalloc.  But
looking at Liu Bo's explanation [2], I believe that their workload
would be satisfied simply by using the standard ext4 mount options
(that is, the default mode already has the performance benefits for
parallel DIO reads, so nodelalloc would no longer be needed to mitigate
the tail latency that Alibaba was seeing in their workload).  Could Liu
or someone from Alibaba confirm, perhaps with some benchmarks using
their workload?
Currently we don't use dioread_nolock & nodelalloc on our internal
servers; we use dioread_nolock & delalloc widely, and it works well.

The initial reason we use dioread_nolock is that it also allocates
unwritten extents for buffered writes, so normally the corresponding
inode won't be added to the jbd2 transaction's t_inode_list. While
committing the transaction, jbd2 then doesn't have to flush the inode's
dirty pages, so the transaction commits quickly; otherwise, in the
extreme case, the time

I do notice this in ext4_map_blocks(). We add the inode to
t_inode_list only if we allocate written blocks. I guess this was done
to avoid the stale data exposure problem. So now, due to ordered mode,
we may end up flushing all dirty data pages while committing the
transaction, before the metadata is flushed.
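For readers following along, the check being referred to sits near the
end of ext4_map_blocks(); a paraphrased, simplified version (names
follow the upstream code of that era, but treat this as an illustration
rather than the exact hunk) looks roughly like this:

	/*
	 * Paraphrased from fs/ext4/inode.c:ext4_map_blocks(), simplified.
	 * Only newly allocated blocks that are already marked written (not
	 * unwritten extents) get the inode range added to the running
	 * transaction's ordered-data list; unwritten allocations, as used
	 * by dioread_nolock, skip this, so the commit does not have to
	 * write back the corresponding pages.
	 */
	if ((map->m_flags & EXT4_MAP_NEW) &&
	    !(map->m_flags & EXT4_MAP_UNWRITTEN) &&
	    ext4_should_order_data(inode)) {
		loff_t start_byte =
			(loff_t)map->m_lblk << inode->i_blkbits;
		loff_t length = (loff_t)map->m_len << inode->i_blkbits;

		ret = ext4_jbd2_inode_add_write(handle, inode,
						start_byte, length);
	}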

Do you have any benchmarks or a workload where we could see this
problem? And could this actually be a problem with any real-world
workload?

Strictly speaking, the dioread_nolock mount option was not meant to
solve the problem you mentioned. At least as per the documentation, it
is meant to help with the inode lock scalability issue in the DIO read
case, which should mostly be fixed by the recent changes Ted pointed to
in the link above.



taken to flush dirty inodes can be very long, especially when cgroup
writeback is enabled. A previous discussion: https://marc.info/?l=linux-fsdevel&m=151799957104768&w=2
I think these semantics hidden behind dioread_nolock are also
important, so if we plan to remove this mount option, we should keep
the same semantics.
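To spell out why this matters for commit latency: during commit, jbd2
walks the transaction's t_inode_list and writes out each listed inode's
dirty pages before the metadata reaches the journal, so commit time
grows with however much ordered data has piled up. A heavily simplified
sketch of that loop (paraphrased from journal_submit_data_buffers() in
fs/jbd2/commit.c, for illustration only):

	/*
	 * Heavily simplified sketch of the ordered-data flush in the jbd2
	 * commit path (paraphrased from fs/jbd2/commit.c). Every inode that
	 * was put on t_inode_list forces a writeback of its dirty pages
	 * here, which is what stretches the commit when a lot of buffered
	 * write data is outstanding, and even more so when cgroup
	 * writeback throttles that writeback.
	 */
	list_for_each_entry(jinode, &commit_transaction->t_inode_list, i_list) {
		struct address_space *mapping = jinode->i_vfs_inode->i_mapping;

		/* start writeback of this inode's dirty pages; the commit
		 * then waits for it to complete */
		filemap_fdatawrite(mapping);
	}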


Jan/Ted, your opinion on this, please?

I do see that there was a proposal by Ted at [1] which should also
solve this problem. I plan to work on Ted's proposal, but in the
meantime, should we preserve this mount option for the above-mentioned
use case, or should we make it a no-op now?

Also, as an update, I am working on a patch which should fix the stale
data exposure issue we currently have with ext4_page_mkwrite & ext4 DIO
reads. As discussed at [2], with that fixed we don't need
dioread_nolock, so I was also planning to make this mount option a
no-op. I will post the RFC version of this patch soon.


[1] - https://marc.info/?l=linux-ext4&m=157244559501734&w=2

[2] - https://lore.kernel.org/linux-ext4/20191120143257.GE9509@xxxxxxxxxxxxxx/


-ritesh



Regards,
Xiaoguang Wang


[2] https://lore.kernel.org/linux-ext4/20181121013035.ab4xp7evjyschecy@US-160370MP2.local/

                                         - Ted




