Re: Multiple kernel RBD clients failures

Thank you for the reply.

Yan, Zheng wrote:

> -28 == -ENOSPC (No space left on device). I think it is due to the
> fact that some OSDs are near full.
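
If it helps anyone confirm that mapping, the errno value can be checked against the stock kernel headers on the client (this assumes the headers are installed at the usual path):

# grep -w ENOSPC /usr/include/asm-generic/errno-base.h
#define ENOSPC          28      /* No space left on device */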

I thought that might be the case, but I would expect ceph health to tell me I had a full OSD; it only says they are near full:

# ceph health detail
HEALTH_WARN 9 near full osd(s)
osd.9 is near full at 85%
osd.29 is near full at 85%
osd.43 is near full at 91%
osd.45 is near full at 88%
osd.47 is near full at 88%
osd.55 is near full at 94%
osd.59 is near full at 94%
osd.67 is near full at 94%
osd.83 is near full at 94%
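
For reference, these warnings are driven by mon_osd_nearfull_ratio (default 0.85) and mon_osd_full_ratio (default 0.95), so several of these OSDs are close to the point where the cluster gets flagged full and writes can start failing with ENOSPC. A rough way to confirm the ratios this cluster is actually using, assuming access to a monitor's admin socket at the default path (substitute the real mon id for <id>):

# ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config show | grep full_ratio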

Since the cluster as a whole still has lots of free space:

# ceph df
GLOBAL:
   SIZE     AVAIL     RAW USED     %RAW USED
   249T     118T      131T         52.60

POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      0          0         0
    rbd          2      8          0         1
    rbd-pool     3      67187G     26.30     17713336
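
Since the cluster-wide totals hide how unevenly individual OSDs fill up, per-OSD usage is more telling here. One way to see it on this release, assuming the osds sub-dump of pg dump is available (later releases also add ceph osd df for the same purpose):

# ceph pg dump osds

The kbused/kbavail columns show how much each individual OSD has actually consumed, regardless of what the pool-level totals look like.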

And I set up lots of placement groups:

# ceph osd dump | grep 'rep size' | grep rbd-pool
pool 3 'rbd-pool' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 4500 pgp_num 4500 last_change 360 owner 0
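
As a rough sanity check on distribution, the average number of PGs each OSD carries for this pool is pg_num * replicas / number of OSDs. A sketch of that arithmetic, assuming ceph osd ls returns one line per OSD in the cluster:

# NUM_OSDS=$(ceph osd ls | wc -l)
# echo $(( 4500 * 2 / NUM_OSDS ))

This only gives the average, of course; the per-OSD numbers from ceph pg dump osds above show how far individual OSDs deviate from it.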

Why did some OSDs fill up long before the cluster as a whole ran out of space?

Thanks,

Eric




