Re: typical snapmapper size

17838 lines

ID CLASS WEIGHT   REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
24   hdd  1.00000  1.00000  419GiB  185GiB  234GiB 44.06 1.46  85
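
(For reference, a line like that can be pulled with something like the following; this is only a sketch assuming the stock "ceph osd df" output format and osd.24 as shown above, so adjust the ID for your own OSD.)

ceph osd df | head -1          # column headers
ceph osd df | awk '$1 == 24'   # row for osd.24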

Relatively light snapshot use on this cluster.


On Thu, Jun 6, 2019 at 2:00 PM Sage Weil <sage@xxxxxxxxxx> wrote:
Hello RBD users,

Would you mind running this command on a random OSD on your RBD-oriented
cluster?

ceph-objectstore-tool \
 --data-path /var/lib/ceph/osd/ceph-NNN \
 '["meta",{"oid":"snapmapper","key":"","snapid":0,"hash":2758339587,"max":0,"pool":-1,"namespace":"","max":0}]' \
 list-omap | wc -l

...and share the number of lines along with the overall size and
utilization % of the OSD?  The OSD needs to be stopped first; run the
command while it is down, then start it up again.
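
For example, on a systemd-managed deployment the full sequence would look
roughly like this (a sketch only; NNN is the placeholder OSD id from above,
and the ceph-osd@ unit name assumes a standard systemd setup):

systemctl stop ceph-osd@NNN     # stop the OSD so the object store can be opened offline

ceph-objectstore-tool \
 --data-path /var/lib/ceph/osd/ceph-NNN \
 '["meta",{"oid":"snapmapper","key":"","snapid":0,"hash":2758339587,"max":0,"pool":-1,"namespace":""}]' \
 list-omap | wc -l              # number of snapmapper omap entries

systemctl start ceph-osd@NNN    # bring the OSD back into the cluster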

I'm trying to gauge how much snapmapper metadata there is in a "typical"
RBD environment.  If you have some sense of whether your users make
relatively heavy or light use of snapshots, that would be helpful too!

Thanks!
sage


--
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 option 7

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
