Re: [PATCH 4/9] blkcg: implement REQ_CGROUP_PUNT

Hello, Jan.

On Thu, Jun 20, 2019 at 05:37:33PM +0200, Jan Kara wrote:
> > +bool __blkcg_punt_bio_submit(struct bio *bio)
> > +{
> > +	struct blkcg_gq *blkg = bio->bi_blkg;
> > +
> > +	/* consume the flag first */
> > +	bio->bi_opf &= ~REQ_CGROUP_PUNT;
> > +
> > +	/* never bounce for the root cgroup */
> > +	if (!blkg->parent)
> > +		return false;
> > +
> > +	spin_lock_bh(&blkg->async_bio_lock);
> > +	bio_list_add(&blkg->async_bios, bio);
> > +	spin_unlock_bh(&blkg->async_bio_lock);
> > +
> > +	queue_work(blkcg_punt_bio_wq, &blkg->async_bio_work);
> > +	return true;
> > +}
> > +
> 
> So does this mean that if there is some inode with lots of dirty data for a
> blkcg that is heavily throttled, that blkcg can occupy a ton of workers all
> being throttled in submit_bio()? Or what constrains the number of
> workers one blkcg can consume?

There's only one work item per blkcg-device pair, so the maximum
number of kthreads a blkcg can occupy on a given filesystem is one.
It's the same scheme as writeback work items.
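
To make that concrete, the draining side does something like the
following.  This is a simplified sketch; the struct fields match the
hunk quoted above, but the function name and exact body here are
illustrative rather than copied from the patch:

static void blkg_async_bio_workfn(struct work_struct *work)
{
	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
					     async_bio_work);
	struct bio_list bios = BIO_EMPTY_LIST;
	struct bio *bio;

	/* steal the whole pending list so the lock is held only briefly */
	spin_lock_bh(&blkg->async_bio_lock);
	bio_list_merge(&bios, &blkg->async_bios);
	bio_list_init(&blkg->async_bios);
	spin_unlock_bh(&blkg->async_bio_lock);

	/*
	 * submit_bio() may block while being throttled, but at most this
	 * one work item per blkg blocks, not one worker per punted bio.
	 */
	while ((bio = bio_list_pop(&bios)))
		submit_bio(bio);
}

Bios punted while the worker is running aren't lost either:
__blkcg_punt_bio_submit() calls queue_work() after adding to the list,
which re-queues the work item if it's no longer pending.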

Thanks.

-- 
tejun


