Re: 100.00 Usage for ssd-pool (maybe after: ceph osd crush move .. root=default)

Hi,

I also overlooked this:

==================================
root@fc-r02-ceph-osd-01:[~]: ceph -s
  cluster:
    id:     cfca8c93-f3be-4b86-b9cb-8da095ca2c26
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum fc-r02-ceph-osd-01,fc-r02-ceph-osd-02,fc-r02-ceph-osd-03,fc-r02-ceph-osd-05,fc-r02-ceph-osd-06 (age 2w)
    mgr: fc-r02-ceph-osd-06(active, since 2w), standbys: fc-r02-ceph-osd-02, fc-r02-ceph-osd-03, fc-r02-ceph-osd-01, fc-r02-ceph-osd-05, fc-r02-ceph-osd-04
    osd: 54 osds: 54 up (since 2w), 54 in (since 2w); 2176 remapped pgs

  data:
    pools:   3 pools, 2177 pgs
    objects: 1.14M objects, 4.3 TiB
    usage:   13 TiB used, 11 TiB / 23 TiB avail
    pgs:     5684410/3410682 objects misplaced (166.665%)
             2176 active+clean+remapped
             1    active+clean

  io:
    client:   1.8 MiB/s rd, 13 MiB/s wr, 40 op/s rd, 702 op/s wr
==================================

pretty bad:

pgs:     5684410/3410682 objects misplaced (166.665%)
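
(If I read the numbers right, that percentage is just the ratio of the two counts shown: 5684410 / 3410682 ≈ 1.6667, i.e. 166.665%. Assuming the denominator is the total number of object copies (~1.14M objects x 3 replicas), the misplaced count even exceeds the number of copies in the cluster, which fits all 2176 PGs being active+clean+remapped.)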

I did not remove any bucket; I just executed the "ceph osd crush move" command ...
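
(For reference, the general form of the command; the bucket name below is just a placeholder, not the actual bucket that was moved:)

==================================
# move an existing CRUSH bucket (e.g. a host) under the default root
ceph osd crush move <bucket-name> root=default
==================================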

cu denny
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


