On Tue, 29 Jun 2021, Tkaczyk, Mariusz wrote:

> Hello Neil,
> I have some questions related to max_queued_requests, implemented by you
> for raid1 and raid10. See the code below:
>
> /* When there are this many requests queue to be written by
>  * the raid thread, we become 'congested' to provide back-pressure
>  * for writeback.
>  */
> static int max_queued_requests = 1024;
>
> It was added years ago:
> https://git.kernel.org/pub/scm/linux/kernel/git/song/md.git/commit/?id=34db0cd60f8a1f4ab73d118a8be3797c20388223
>
> I've reached a scenario with a cache in write-only mode where
> this limiter degrades performance significantly (around 4 times).
> I used Open-CAS:
> https://github.com/Open-CAS/open-cas-linux
>
> So, at this point I have some basic questions:
> Is this "back-pressure" still needed? Do you know any scenario where it
> brings benefits?

As you say, it was years ago. Things have probably changed.

At the time, the mm system would write to a device until it got marked
"congested". If there wasn't some sort of limit on the device queue size,
you would end up with an enormous queue that would take a long time to
flush, so high-priority reads would get stuck behind low-priority writes,
and weird things like that.

The writeback code now has a much more sophisticated approach, measuring
the actual throughput of each device and adjusting writes accordingly.

> If yes, I'll move this parameter to sysfs, to make it configurable
> via the mdadm config file (using a SYSFS line) per array.
> What do you think?
>
> On the other hand, should we consider bumping this value up? It seems
> to be small today.

I suspect that the best thing to do would be to remove the limit
completely. Certainly that is the first thing I would try.

Try removing the limit, but monitor the count of queued requests and see
if something else stops it from consuming all memory.

NeilBrown
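
Below is a rough userspace model of the back-pressure scheme under
discussion, only as a sketch: writers queue requests for a single "raid
thread" to flush, and block once max_queued_requests are pending. The
names here (queue_write, flush_thread, pending_count) are illustrative
and are not the raid1.c identifiers; setting the limit to 0 models
"no limit", which is what removing the throttle would amount to.

/* Hypothetical model of the throttle, not kernel code. */
#include <pthread.h>
#include <stdio.h>

static int max_queued_requests = 1024;  /* 0 would mean "no limit" */
static int pending_count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t can_queue = PTHREAD_COND_INITIALIZER;
static pthread_cond_t work_ready = PTHREAD_COND_INITIALIZER;

/* Submitter side: a write arriving at the personality. */
static void queue_write(void)
{
	pthread_mutex_lock(&lock);
	/* Back-pressure: block the writer while the queue is full. */
	while (max_queued_requests && pending_count >= max_queued_requests)
		pthread_cond_wait(&can_queue, &lock);
	pending_count++;
	pthread_cond_signal(&work_ready);
	pthread_mutex_unlock(&lock);
}

/* "raid thread" side: drain the queue and wake any throttled writers. */
static void *flush_thread(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&lock);
		while (pending_count == 0)
			pthread_cond_wait(&work_ready, &lock);
		pending_count--;        /* pretend the I/O was issued */
		pthread_cond_broadcast(&can_queue);
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, flush_thread, NULL);
	for (int i = 0; i < 100000; i++)
		queue_write();
	pthread_mutex_lock(&lock);
	printf("done, pending_count=%d\n", pending_count);
	pthread_mutex_unlock(&lock);
	return 0;
}

Without the limit (max_queued_requests = 0), nothing in this model stops
pending_count from growing as fast as writers can submit, which is
exactly the behaviour worth monitoring if the real limit is removed.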