Re: typical snapmapper size


 



On Thu, Jun 6, 2019 at 8:00 PM Sage Weil <sage@xxxxxxxxxx> wrote:
>
> Hello RBD users,
>
> Would you mind running this command on a random OSD on your RBD-oriented
> cluster?
>
> ceph-objectstore-tool \
>  --data-path /var/lib/ceph/osd/ceph-NNN \
>  '["meta",{"oid":"snapmapper","key":"","snapid":0,"hash":2758339587,"max":0,"pool":-1,"namespace":""}]' \
>  list-omap | wc -l
>
> ...and share the number of lines, along with the overall size and
> utilization % of the OSD?  The OSD needs to be stopped first; run the
> command, then start it up again.
>
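For anyone following along, the full stop/count/restart sequence looks
roughly like this on a systemd-managed cluster. The OSD id (769, matching
the OSD below) and the systemd unit name are assumptions; adjust for your
deployment. Note the duplicate "max":0 in the original locator is harmless
and dropped here.

```shell
# Requires a live cluster; the OSD must be stopped while
# ceph-objectstore-tool has its data path open.
OSD_ID=769
sudo systemctl stop ceph-osd@${OSD_ID}
# Count the omap entries on the snapmapper object.
sudo ceph-objectstore-tool \
  --data-path /var/lib/ceph/osd/ceph-${OSD_ID} \
  '["meta",{"oid":"snapmapper","key":"","snapid":0,"hash":2758339587,"max":0,"pool":-1,"namespace":""}]' \
  list-omap | wc -l
sudo systemctl start ceph-osd@${OSD_ID}
```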

6872

ID   CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
769   hdd 5.45798  1.00000 5.46TiB 2.89TiB 2.57TiB 52.98 1.00  34

Not sure how to classify heavy or light use of snapshots; "ceph osd
pool ls detail" output is here: https://pastebin.com/CpPwUQgR

-- Dan

> I'm trying to gauge how much snapmapper metadata there is in a "typical"
> RBD environment.  If you have some sense of whether your users make
> relatively heavy or light use of snapshots, that would be helpful too!
>
> Thanks!
> sage
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


