Re: Speeding Up "rbd ls -l <pool>" output

Thanks Wido, you are the best :)

On Thu, Feb 9, 2017 at 11:50 AM, Wido den Hollander <wido@xxxxxxxx> wrote:

> On 9 February 2017 at 9:41, Özhan Rüzgar Karaman <oruzgarkaraman@xxxxxxxxx> wrote:
>
>
> Hi Wido;
> Thanks for the fast response. "rbd ls -l" reads every image's header for its
> size, so yes, that makes sense; you are right.
>
> My main problem is that when I refresh an RBD storage pool using virsh on
> KVM (Ubuntu 14.04.5), it takes much longer than it used to, and I suspect
> that virsh runs "rbd ls -l" against the Ceph storage, which is why I asked.
>
> Does virsh use the same "rbd ls -l" for a pool refresh?
>

Yes, it does: http://libvirt.org/git/?p=libvirt.git;a=blob;f=src/storage/storage_backend_rbd.c;h=45beb107aa2a5c85b7d65b8687c2b65751871595;hb=HEAD#l425

In short, this C code does (pseudo):

images = []
for image in rbd_list():
  images.append(rbd_stat(image))

The more images you have, the longer it takes.
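Your numbers below bear that out: 11.97 s / 278 images ≈ 43 ms and 16.63 s / 330 images ≈ 50 ms, so the cost per image header is roughly constant.

For illustration, the same loop with the Python rados/rbd bindings would look roughly like this (a sketch, not the actual libvirt code; it assumes the default /etc/ceph/ceph.conf and uses the 'cst2' pool from your examples):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('cst2')

images = []
for name in rbd.RBD().list(ioctx):       # one read of the rbd_directory object
    with rbd.Image(ioctx, name) as img:  # open + stat each image's header
        images.append((name, img.stat()['size']))

ioctx.close()
cluster.shutdown()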

> So in the case below, 22 seconds is normal for a virsh RBD pool refresh?
>

Yes. One of my goals for libvirt is still to make this refresh an asynchronous operation, but that is a bit difficult inside libvirt, and I have never gotten around to actually implementing it.
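In the meantime you can fan the header stats out yourself on the client side; a rough sketch of the idea with a thread pool (again the Python bindings, and not something libvirt does today):

from concurrent.futures import ThreadPoolExecutor
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('cst2')

def stat_image(name):
    # each task opens one image and reads its header
    with rbd.Image(ioctx, name) as img:
        return name, img.stat()['size']

with ThreadPoolExecutor(max_workers=8) as pool:
    images = list(pool.map(stat_image, rbd.RBD().list(ioctx)))

Fetching the headers concurrently bounds the wall time by the slowest batch instead of by the sum of all the round trips.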

Wido

> root@kvmt1:~# time virsh pool-refresh 01b375db-d3f5-33c1-9389-8bf226c887e8
> Pool 01b375db-d3f5-33c1-9389-8bf226c887e8 refreshed
>
>
> real 0m22.504s
> user 0m0.012s
> sys 0m0.004s
>
> Thanks
> Özhan
>
>
> On Thu, Feb 9, 2017 at 11:30 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
> >
> > > On 9 February 2017 at 9:13, Özhan Rüzgar Karaman <oruzgarkaraman@xxxxxxxxx> wrote:
> > >
> > >
> > > Hi;
> > > I am using the Hammer 0.94.9 release on my Ceph storage. Today I noticed
> > > that listing an RBD pool takes much more time than it used to. The more
> > > RBD images a pool has, the longer it takes.
> > >
> >
> > It is the -l flag that you are adding. That flag opens each RBD image and
> > stats its header to get the size.
> >
> > A regular 'rbd ls' only reads the RADOS object rbd_directory; it is the -l
> > flag that causes the rbd tool to iterate over all the images and query
> > their headers.
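> >
> > In the Python bindings that cheap path is literally a single call (a
> > sketch; 'cst2' is just an example pool name):
> >
> > import rados, rbd
> >
> > cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> > cluster.connect()
> > ioctx = cluster.open_ioctx('cst2')
> > print(rbd.RBD().list(ioctx))  # one round trip, no image headers touched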
> >
> > > My cluster's health is OK, and there is currently no load on the cluster.
> > > Only RBD images are used, to serve VMs.
> > >
> > > I am sending some information below. My monitor's LevelDB store was 280 MB;
> > > I compacted it down to 40 MB, but the "rbd ls -l" output is still too
> > > slow.
> > >
> > > This timing matters for my VM deploy time to complete, because when I
> > > refresh a pool/datastore it takes nearly 20 seconds or more for 350 RBD
> > > images + snapshots.
> > >
> > > Thanks for all the help
> > >
> > > Regards
> > > Ozhan Ruzgar
> > >
> > > root@mont3:/var/lib/ceph/mon/ceph-mont3/store.db# ceph -s
> > >     cluster 6b1cb3f4-85e6-4b70-b057-ba7716f823cc
> > >      health HEALTH_OK
> > >      monmap e1: 3 mons at
> > > {mont1=172.16.x.x:6789/0,mont2=172.16.x.x:6789/0,mont3=
> > 172.16.x.x:6789/0}
> > >             election epoch 126, quorum 0,1,2 mont1,mont2,mont3
> > >      osdmap e20509: 40 osds: 40 up, 40 in
> > >       pgmap v20333442: 1536 pgs, 3 pools, 235 GB data, 63442 objects
> > >             700 GB used, 3297 GB / 3998 GB avail
> > >                 1536 active+clean
> > >   client io 0 B/s rd, 3785 kB/s wr, 314 op/s
> > >
> > > root@mont1:~# time rbd ls -l cst2|wc -l
> > > 278
> > >
> > > real 0m11.970s
> > > user 0m0.572s
> > > sys 0m0.316s
> > > root@mont1:~# time rbd ls -l cst3|wc -l
> > > 15
> > >
> > > real 0m0.396s
> > > user 0m0.020s
> > > sys 0m0.032s
> > > root@mont1:~# time rbd ls -l cst4|wc -l
> > > 330
> > >
> > > real 0m16.630s
> > > user 0m0.668s
> > > sys 0m0.336s
> >

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
