Hi Lenz,
That PR will need a lot of rebasing, as there have been later changes to the rbd controller.
Nevertheless, while working on that I found a few quick wins that could be easily implemented (I'll try to come back to this in the next few weeks):
- Caching object instances and using flyweight objects for ioctx, rbd.Image, stat, etc. (first sketch below this list).
- Removing the redundant (heavyweight) call to RBDConfiguration.
- Moving the actual disk usage calculation out of the 60-second loop (second sketch below this list). IMHO that info should be provided by RBD, perhaps calculated and cached in the rbd_support mgr module (@Jason)?
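On the first point, here's a minimal sketch of what I mean by caching/flyweight, assuming a connected rados.Rados handle and the plain rbd Python bindings (IOCTX_CACHE/get_ioctx are made-up names for illustration, not the actual controller code):

    import threading

    import rbd

    # Illustrative flyweight cache: one ioctx per (pool, namespace),
    # shared across requests instead of being re-created each time.
    _LOCK = threading.Lock()
    IOCTX_CACHE = {}  # (pool, namespace) -> rados.Ioctx

    def get_ioctx(cluster, pool, namespace=''):
        """Return a shared ioctx for a pool/namespace pair.
        `cluster` is assumed to be a connected rados.Rados handle."""
        key = (pool, namespace)
        with _LOCK:
            ioctx = IOCTX_CACHE.get(key)
            if ioctx is None:
                ioctx = cluster.open_ioctx(pool)
                ioctx.set_namespace(namespace)
                IOCTX_CACHE[key] = ioctx
        return ioctx

    def stat_image(cluster, pool, name, namespace=''):
        """Open the image only briefly and reuse the cached ioctx."""
        with rbd.Image(get_ioctx(cluster, pool, namespace), name) as image:
            return image.stat()

Nothing fancy, but it avoids re-opening an ioctx (and re-reading the config) on every single dashboard request.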
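And for the disk usage point, a rough sketch of what taking the calculation out of the 60-second loop could look like on the dashboard side, i.e. refreshing a cache in a background thread and letting the endpoint answer from it (DU_CACHE/refresh_disk_usage are again made-up names; the proper fix would rather live in rbd_support as said above):

    import threading
    import time

    import rbd

    DU_CACHE = {}  # (pool, namespace, image_name) -> used bytes

    def _image_disk_usage(ioctx, name):
        """Approximate used bytes by summing allocated extents;
        with fast-diff and valid object maps this is cheap."""
        used = 0

        def cb(offset, length, exists):
            nonlocal used
            if exists:
                used += length

        with rbd.Image(ioctx, name) as image:
            image.diff_iterate(0, image.size(), None, cb, whole_object=True)
        return used

    def refresh_disk_usage(get_ioctx, cluster, pools, interval=300):
        """Background loop: refresh DU_CACHE every few minutes so the
        REST endpoint never recomputes the usage inline."""
        rbd_inst = rbd.RBD()
        while True:
            for pool in pools:
                ioctx = get_ioctx(cluster, pool)
                for name in rbd_inst.list(ioctx):
                    DU_CACHE[(pool, '', name)] = _image_disk_usage(ioctx, name)
            time.sleep(interval)

    # e.g. started once at module load (illustrative):
    # threading.Thread(target=refresh_disk_usage,
    #                  args=(get_ioctx, cluster, ['rbd']),
    #                  daemon=True).start()

The data would be slightly stale, but the RBD image page would render immediately.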
However, that endpoint, if used with multiple RBD pools, namespaces, clones and snapshots, is going to have a hard time (O(N^4)-like), as it iterates over everything fully serially.
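To illustrate the serial part: the listing conceptually nests pools -> namespaces -> images -> snapshots, so one slow image stalls everything behind it. A first step could be to fan out the per-pool work to a thread pool (the librados/librbd calls should release the GIL, so threads do help here). Purely a sketch, with illustrative names:

    from concurrent.futures import ThreadPoolExecutor

    import rbd

    def _list_pool(cluster, pool):
        """Collect image and snapshot names for one pool and its namespaces."""
        rbd_inst = rbd.RBD()
        ioctx = cluster.open_ioctx(pool)
        result = []
        # '' is the default namespace; namespace_list() returns the rest.
        for ns in [''] + rbd_inst.namespace_list(ioctx):
            ioctx.set_namespace(ns)
            for name in rbd_inst.list(ioctx):
                with rbd.Image(ioctx, name) as image:
                    result.append({
                        'pool': pool,
                        'namespace': ns,
                        'name': name,
                        'snapshots': [s['name'] for s in image.list_snaps()],
                    })
        return result

    def list_all(cluster, pools):
        """One worker per pool instead of walking everything in series."""
        with ThreadPoolExecutor(max_workers=max(len(pools), 1)) as executor:
            per_pool = list(executor.map(lambda p: _list_pool(cluster, p), pools))
        return [img for images in per_pool for img in images]

That alone doesn't fix the per-image cost (fast-diff still has to be queried per image/snapshot), but at least pools no longer serialize each other.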
Any other ideas?
@Matt: just curious, apart from the number of images, how many RBD pools/clones/snapshots/... do you have in your deployment?
Kind regards,
On Mon, Jan 6, 2020 at 6:08 PM Lenz Grimmer <lgrimmer@xxxxxxxx> wrote:
Hi Matt,
On 1/6/20 4:33 PM, Matt Dunavant wrote:
> I was hoping there was some update on this bug:
> https://tracker.ceph.com/issues/39140
>
> In all recent versions of the dashboard, the RBD image page takes
> forever to populate due to this bug. All our images have fast-diff
> enabled, so it can take 15-20 min to populate this page with about
> 20-30 images.
Thanks for bringing this up and for the reminder. I've just updated the
tracker issue to point to the current pull request that intends to
address this: https://github.com/ceph/ceph/pull/28387 - it looks like
this approach needs further testing/review before we can merge it; it
is currently still marked as "Draft".
@Ernesto - any news/thoughts about this from your POV?
Thanks,
Lenz
--
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)