Re: Unable to list rbd block images in nautilus dashboard

Hi Lenz,

Thanks for responding.  I suspected that the number of rbd images might have had something to do with it, so I cleaned up old disposable VM images I am no longer using, taking the list down from ~30 to 16: 2 in the EC pool on HDDs and the rest in the replicated SSD pool.  They vary in size from 50GB to 200GB.  I don't have the number of objects per rbd on hand right now, but maybe that's a factor as well, particularly with 'du'.  This doesn't appear to have made a difference to the time and number of attempts required to list them in the dashboard.
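
For reference, this is roughly the script I intend to use to gather per-image object counts and 'rbd du' timings (the pool names below are placeholders rather than our real pool names, and it assumes the rbd CLI is on PATH with admin credentials):

import json
import subprocess
import time

POOLS = ["rbd-ssd", "rbd-ec-hdd"]  # placeholder pool names

for pool in POOLS:
    images = json.loads(
        subprocess.check_output(["rbd", "ls", "--format", "json", pool]))
    for image in images:
        spec = f"{pool}/{image}"
        # 'rbd info' reports the object count for the image
        info = json.loads(
            subprocess.check_output(["rbd", "info", "--format", "json", spec]))
        # time a single 'rbd du' for this image
        start = time.monotonic()
        subprocess.check_output(["rbd", "du", "--format", "json", spec])
        elapsed = time.monotonic() - start
        print(f"{spec}: objects={info.get('objects')} du_seconds={elapsed:.1f}")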

I suspect it might be a case of 'du on all images is always going to take longer than the current dashboard timeout', in which case the behaviour of the dashboard might need to change to account for this - maybe fetch and list the images in parallel and asynchronously, or something along those lines.  As it stands it means the dashboard isn't really usable for managing existing images, which is a shame, because having that ability makes ceph accessible to our clients who are considering it and begins to afford them some level of self-service - one of the reasons we've been really excited for Mimic's release, actually.  I really hope I've just done something wrong :)
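
To sanity-check the parallel idea, I'll also try something like this quick script, which runs 'rbd du' for every image concurrently and reports the total wall-clock time (again, the pool name is a placeholder and it assumes the rbd CLI is available with admin credentials):

import json
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

POOL = "rbd-ssd"  # placeholder pool name

def du(image):
    # time a single 'rbd du' for one image
    start = time.monotonic()
    subprocess.check_output(["rbd", "du", "--format", "json", f"{POOL}/{image}"])
    return image, time.monotonic() - start

images = json.loads(
    subprocess.check_output(["rbd", "ls", "--format", "json", POOL]))

start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    for image, seconds in pool.map(du, images):
        print(f"{image}: {seconds:.1f}s")
print(f"total wall-clock: {time.monotonic() - start:.1f}s")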

I'll try to isolate which process the delay is coming from, as well as collect other useful metrics, when I'm back on that network tonight.
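
To answer your CPU question, my rough plan on the mgr host is something like the following (assuming 'ceph mgr stat -f json' reports the active mgr name the way I expect, and with psutil installed; it just samples ceph-mgr CPU usage for ~30 seconds while I load the dashboard page):

import json
import subprocess
import time

import psutil

# find which mgr is currently active
active = json.loads(
    subprocess.check_output(["ceph", "mgr", "stat", "-f", "json"]))["active_name"]
print(f"active mgr: {active}")

# sample CPU usage of local ceph-mgr processes; the first sample is a
# baseline and may read 0.0
mgrs = [p for p in psutil.process_iter(["name"]) if p.info["name"] == "ceph-mgr"]
for _ in range(30):
    for p in mgrs:
        print(p.pid, f"{p.cpu_percent(interval=None):.1f}% CPU")
    time.sleep(1.0)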

Thanks,
Wes


----- On 5 Apr, 2019, at 2:59 PM, Lenz Grimmer lgrimmer@xxxxxxxx wrote:

> Hi Wes,
> 
> On 4/4/19 9:23 PM, Wes Cilldhaire wrote:
> 
>> Can anyone at all please confirm whether this is expected behaviour /
>> a known issue, or give any advice on how to diagnose this? As far as
>> I can tell my mon and mgr are healthy. All rbd images have
>> object-map and fast-diff enabled.
> 
> My gut reaction, not exactly knowing the inner workings of how this
> information is gathered: if it takes quite some time on the command line
> as well, this might be due to some internal collection and calculation
> of data, likely in the Ceph Manager itself. Could you check the CPU
> utilization on the active manager node and which process is causing the
> load? I assume that this is actually expected behaviour, even though I
> would have expected that cached information would be returned noticeably
> faster. How many RBDs are we talking about here?
> 
> Lenz
> 
> --
> SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
> GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



