Re: ceph rbd volumes/images IO details

On Sun, Mar 8, 2020 at 5:13 PM M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
>
> I am using Luminous 12.2.11 with Prometheus.
>
> On Sun, Mar 8, 2020 at 12:28 PM XuYun <yunxu@xxxxxx> wrote:
>
> > You can enable the prometheus module of the mgr if you are running Nautilus.
> >
> > > On Mar 8, 2020 at 2:15 AM, M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
> > >
> > > On Fri, Mar 6, 2020 at 1:06 AM M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
> > >
> > >> Hello,
> > >> Can we get the IOPS of any rbd image/volume?
> > >>
> > >> For example: I have created volumes via OpenStack Cinder and want to know
> > >> the IOPS of these volumes.
> > >>
> > >> In general, we can get pool stats, but I have not seen per-volume stats.
> > >>
> > >> Any hint here? Appreciated.

Per-image/volume stats are available since Nautilus [1].  There is
no automated way to do it in Luminous.  Since you are using OpenStack,
you can probably set up some generic I/O monitoring at the instance level
(i.e. external to Ceph) and possibly apply disk I/O limits (again at the
instance level, enforced in QEMU).
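
Just for reference, on Nautilus or later the per-image counters can be
pulled roughly like this (a sketch -- "volumes" is only a placeholder
pool name here):

    # expose per-image stats for selected pools via the prometheus module
    ceph mgr module enable prometheus
    ceph config set mgr mgr/prometheus/rbd_stats_pools "volumes"

    # or watch per-image IOPS/throughput interactively
    rbd perf image iostat volumes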

[1] https://ceph.io/rbd/new-in-nautilus-rbd-performance-monitoring/
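
If you do want enforced limits rather than just monitoring, Cinder QoS
specs with a front-end consumer are applied by QEMU on the compute host.
A rough sketch (the spec name, the number and the volume type are made
up, adjust to taste):

    openstack volume qos create rbd-iops-limit --consumer front-end \
        --property total_iops_sec=500
    openstack volume qos associate rbd-iops-limit <volume-type>

The limit should take effect the next time a volume of that type is
attached to an instance.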

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



