# ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) (official Ceph packages for Jessie)

Yes, I recently adjusted pg_num, but all objects were correctly
rebalanced. Then I manually deleted some objects from this pool.

On 02/04/2017 06:31 PM, Shinobu Kinjo wrote:
> On Sun, Feb 5, 2017 at 1:15 AM, John Spray <jspray@xxxxxxxxxx> wrote:
>> On Fri, Feb 3, 2017 at 5:28 PM, Florent B <florent@xxxxxxxxxxx> wrote:
>>> Hi everyone,
>>>
>>> On a Jewel test cluster I have:
> Please, `ceph -v`.
>
>>> # ceph df
>>> GLOBAL:
>>>     SIZE      AVAIL     RAW USED     %RAW USED
>>>     6038G     6011G       27379M          0.44
>>> POOLS:
>>>     NAME            ID     USED       %USED     MAX AVAIL     OBJECTS
>>>     data            0           0         0         2986G           0
>>>     metadata        1      58955k         0         2986G         115
>>>     pve01-rbd01     5       2616M      0.09         2986G         862
>>>     cephfs01        6         15E         0         2986G        -315
>>>
>>>
>>> # rados -p cephfs01 ls
>>> 10000034339.00000000
>>>
>>>
>>> Maybe I hit a bug?
>> I wonder if you had recently adjusted pg_num? Those were the
>> situations where we've seen this sort of issue before.
>>
>> John
>>
>>> Flo
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
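For what it's worth, the pair of symptoms here (a USED value of 15E next to a negative OBJECTS count of -315) is consistent with the pool's internal stats counters having underflowed below zero after the pg_num change and manual deletes. A small negative 64-bit byte counter, reinterpreted as unsigned and printed with truncated base-1024 units, comes out as "15E". This is only a sketch of that arithmetic, not Ceph's actual formatting code; the `si_1024` helper below is a hypothetical stand-in for whatever `ceph df` really uses:

```python
# Sketch: why an underflowed (slightly negative) 64-bit pool-stat
# can render as "15E". Assumption: the value is treated as unsigned
# and formatted with truncating integer division in base-1024 units.

def si_1024(n: int) -> str:
    """Format a byte count with single-letter base-1024 units, truncated."""
    units = ["", "k", "M", "G", "T", "P", "E"]
    i = 0
    while n >= 1024 and i < len(units) - 1:
        n //= 1024  # truncating division, like integer unit scaling
        i += 1
    return f"{n}{units[i]}"

# A small negative signed counter, reinterpreted as unsigned 64-bit:
underflowed = -4096 & 0xFFFFFFFFFFFFFFFF  # == 2**64 - 4096
print(si_1024(underflowed))  # -> "15E"
```

The negative OBJECTS figure is the same underflow shown directly as a signed integer, which would fit John's observation that pg_num changes have produced this kind of stats inconsistency before.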