Re: can we reduce bio_set_dev overhead due to bio_associate_blkg?

Hello,

On Wed, Mar 30, 2022 at 09:39:55PM -0700, Christoph Hellwig wrote:
> On Wed, Mar 30, 2022 at 08:28:28AM -0400, Dennis Zhou wrote:
> > I think cloning is a special case that I might have gotten wrong. If
> > there is a bio_set_dev() call after each clone(), then the
> > bio_clone_blkg_association() call is excess work. We'd need to audit how
> > bio_alloc_clone() is being used to be safe. Alternatively, we could opt
> > for a bio_alloc_clone_noblkg(), but that's a little bit uglier.
> 
> As of Linux 5.18, the cloning interfaces have changed and take
> a block device that the clone is intended to be used for, and bio_set_dev
> is mostly (there are a few more spots to be cleaned up in
> dm/md/bcache/btrfs) only used for remapping to a new device.
> 

I took a quick look. It seems that with the new interface,
bio_clone_blkg_association() is unnecessary, given that the correct
association should be derived from the bio_alloc*() calls with the
passed-in bdev. The blkcg_bio_issue_init() call in the clone path also
seems wrong.
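
To make that concrete, here is a rough sketch of the clone path as I
read the 5.18 interfaces (paraphrased from memory, not verbatim kernel
code; the variable names are placeholders):

	struct bio *clone;

	/*
	 * bio_alloc_clone() takes the target bdev up front.  Internally,
	 * bio_init() sees a non-NULL bdev and calls bio_associate_blkg(),
	 * so the clone already carries the right blkg for that device.
	 */
	clone = bio_alloc_clone(bdev, bio_src, GFP_NOIO, bs);

	/*
	 * Which makes a later bio_clone_blkg_association(clone, bio_src)
	 * redundant: it re-derives the association from bio_src, whose
	 * blkg may belong to the old device anyway.
	 */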

Maybe the right thing to do here for md-linear and btrfs (what I've
looked at) is to delay cloning until the map occurs and the right device
is already in hand?

> That being said I've eyed the code in bio_associate_blkg a bit and
> I've been wondering about some of how it is implemented as well.
> 

I'm sure things have evolved since I was last involved, but here is a
brief explanation of the original design. I suspect most of it still
holds true. Apologies if this isn't helpful.

For others following along, a blkcg is a block cgroup. A blkcg_gq, blkg
for short, is the marriage of a blkcg and a request_queue. It takes a
reference on both, so IO associated with the cgroup is charged to the
appropriate cgroup and the request_queue is kept from going away. Punted
IOs go here, and writeback is managed here as well. On the hot path,
this is the tagging that the blk-rq-qos machinery may depend on.
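
Roughly, the relationship looks like this (a trimmed sketch of the real
struct; most fields omitted):

	struct blkcg_gq {
		struct request_queue	*q;	/* pinned: keeps the queue alive */
		struct blkcg		*blkcg;	/* pinned: the owning cgroup */
		struct blkcg_gq		*parent; /* pinned: parent blkcg's blkg
						  * on the same queue */
		struct percpu_ref	refcnt;	/* blkgs are percpu-refcounted */
		/* ... per-policy data, stats, etc ... */
	};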

The lookup itself is handled by blkg_lookup(), which does a radix tree
lookup in the blkcg, keyed by the request_queue's id. There is also a
hint caching the last blkg looked up, which helps the common case.
blkgs are percpu-refcounted.
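
From memory, the fast path is something like the following (paraphrased,
not the exact current code):

	/* Hint hit: the last blkg we looked up, no tree walk needed. */
	blkg = rcu_dereference(blkcg->blkg_hint);
	if (blkg && blkg->q == q)
		return blkg;

	/* Hint miss: fall back to the per-blkcg radix tree, keyed by
	 * the queue id. */
	blkg = radix_tree_lookup(&blkcg->blkg_tree, q->id);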

In terms of lifetimes and pinning: a child blkcg pins its parent blkcg,
in a tree hierarchy up to the root blkcg. A blkg pins the blkcg it is
associated with, the request_queue, and its parent blkg (the parent
blkcg's blkg on the same request_queue). They die in hierarchical order,
each alive until all of its children have passed.
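
As a picture (conceptual only; each arrow is a reference held):

	blkg(child) --> blkcg(child) --> blkcg(parent) --> ... --> blkcg(root)
	blkg(child) --> request_queue
	blkg(child) --> blkg(parent)	/* same queue, parent blkcg */

Teardown runs leaf-first, so nothing is freed out from under a live
child.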

If there's anything else I can try to help answer please let me know.

> Is recursive throttling really a thing?  i.e. we can have cgroup
> policies on the upper (e.g. dm) device and then again on the lower
> (e.g. nvme device)?  I think the code currently supports that, and
> if we want to keep that I don't really see much of a way to avoid
> the lookup, but maybe we can make it faster.

I'm not sure. I've primarily dealt with physical devices. However, I'm
sure there are more complex setups that use it. Whether it's a good idea
is probably debatable.

Backing up though, I feel like the abstraction naturally lends itself to
this multiple association, because you don't necessarily know when
you've hit a physical device until the bio is finally submitted all the
way through.
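
For example (a hypothetical dm-over-nvme stack; the names here are
illustrative), a single logical write can pick up a blkg at each layer:

	submit_bio(bio);		/* bio->bi_blkg: (blkcg, dm queue) */

	/* dm maps the IO and clones it for the backing device: */
	clone = bio_alloc_clone(nvme_bdev, bio, GFP_NOIO, bs);
	submit_bio(clone);		/* clone->bi_blkg: (blkcg, nvme queue) */

A policy attached to either queue sees the IO, which is where the
recursive lookup Christoph mentions comes from.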

Thanks,
Dennis
