Re: False waker detection in BFQ

On Thu 20-05-21 17:05:45, Paolo Valente wrote:
> > On 5 May 2021, at 18:20, Jan Kara <jack@xxxxxxx> wrote:
> > 
> > Hi Paolo!
> > 
> > I have two processes doing direct IO writes like:
> > 
> > dd if=/dev/zero of=/mnt/file$i bs=128k oflag=direct count=4000M
> > 
> > Now, each of these processes belongs to a different cgroup and has a
> > different bfq.weight. I was looking into why these processes do not
> > split bandwidth according to BFQ weights. Or rather, the bandwidth is
> > split accordingly initially but eventually degrades into a 50/50 split.
> > After some debugging I've found out that, by chance, one of the
> > processes gets detected as a waker of the other process and at that
> > point we lose isolation between the two cgroups. This pretty reliably
> > happens sometime during the run of these two processes on my test VM.
> > So can we tweak the waker logic to reduce the chances of false
> > positives? Essentially, when there are only two processes doing heavy
> > IO against the device, the logic in bfq_check_waker() is such that they
> > are very likely to eventually become wakers of one another. AFAICT the
> > only condition that needs to be fulfilled is that one process submits
> > IO within 4 ms of the completion of the other process's IO, three
> > times.
>
> as I happened to tell you months ago, I feared that some corner case
> like this would show up eventually.  Actually, I was even more
> pessimistic than reality proved to be :)

:)

> I'm sorry for my delay, but I've had to think about this issue for a
> while.  Being too strict would easily rule out journald as a waker for
> processes belonging to a different group.
> 
> So, what do you think of this proposal: add an extra filter requiring
> that the waker belong to the same group as the woken queue, or, at
> most, to the root group?

I thought you would suggest that :) Well, I'd probably allow a
waker-wakee relationship if the two cgroups are in an 'ancestor' -
'descendant' relationship, not necessarily only the root cgroup vs some
other cgroup. That being said, in my opinion this is just a poor man's
band-aid fixing this particular setup. It will not fix, e.g., a similar
problem when those two processes are in the same cgroup but have, say,
different IO priorities.
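
For concreteness, the kind of check I have in mind would be roughly the
following. This is only a sketch against my rough reading of the data
structures: bfqq_to_cgroup() is a hypothetical helper standing in for
however we map a bfq_queue to the cgroup it is charged to (presumably
via bfqq_group() and the blkcg behind it), and a real patch would also
have to handle the !CONFIG_BFQ_GROUP_IOSCHED case.

#include <linux/cgroup.h>

/*
 * Sketch only: allow a waker-wakee link between @wakee_bfqq and
 * @waker_bfqq only if one of their cgroups is an ancestor of the other.
 * A cgroup counts as its own descendant, so the same-group case and the
 * root-group case are both covered by the two checks below.
 *
 * bfqq_to_cgroup() is a hypothetical helper, not an existing function.
 */
static bool bfq_waker_in_related_cgroup(struct bfq_queue *wakee_bfqq,
					struct bfq_queue *waker_bfqq)
{
	struct cgroup *wakee_cg = bfqq_to_cgroup(wakee_bfqq);
	struct cgroup *waker_cg = bfqq_to_cgroup(waker_bfqq);

	return cgroup_is_descendant(wakee_cg, waker_cg) ||
	       cgroup_is_descendant(waker_cg, wakee_cg);
}

bfq_check_waker() would then just bail out early when this returns
false.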

The question is how we could do better. But so far I have no great idea
either.
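
Just to spell out what any better heuristic would have to distinguish:
the pattern that trips the detection in my testcase can be modeled
roughly as below. This is a toy userspace model of the rule as described
above (4 ms window, 3 hits), not the actual bfq_check_waker() code.

/*
 * Toy model of the waker detection described above: the queue that just
 * completed IO becomes the "waker" of a queue that submits IO within
 * 4 ms of that completion, once this has happened 3 times.
 */
#include <stdbool.h>
#include <stdio.h>

#define WAKER_WINDOW_MS	4
#define WAKER_HITS	3

struct toy_queue {
	long long last_completion_ms;	/* time of this queue's last completion */
	int wake_hits;			/* times the other queue submitted right after */
	bool deemed_waker;		/* detected as waker of the other queue */
};

/* The other queue submits IO at @now_ms; @candidate completed IO last */
static void toy_check_waker(struct toy_queue *candidate, long long now_ms)
{
	if (now_ms - candidate->last_completion_ms <= WAKER_WINDOW_MS) {
		if (++candidate->wake_hits >= WAKER_HITS)
			candidate->deemed_waker = true;
	} else {
		candidate->wake_hits = 0;
	}
}

int main(void)
{
	struct toy_queue completer = { 0 };	/* the dd whose IO just completed */
	long long t = 0;
	int i;

	/*
	 * Two dd's saturating the device: one's completion is followed by
	 * the other's submission a millisecond or so later, over and over.
	 */
	for (i = 0; i < 10; i++) {
		completer.last_completion_ms = t;	/* its write completes */
		toy_check_waker(&completer, t + 1);	/* other dd submits 1 ms later */
		t += 2;
	}

	printf("detected as waker of the other dd: %s\n",
	       completer.deemed_waker ? "yes" : "no");
	return 0;
}

With both dd's keeping the device busy, the submissions of one land
within a millisecond or two of the completions of the other, so the
condition is satisfied almost immediately, and from then on we are back
to the 50/50 split.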

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


