Re: 0B OSDs

On Fri, 25 Oct 2019, DHilsbos@xxxxxxxxxxxxxx wrote:
> All;
> 
> We're setting up our second cluster, using version 14.2.4, and we've run into a weird issue: all of our OSDs are created with a size of 0 B.  Weights are appropriate for the size of the underlying drives, but ceph -s shows this:
> 
>   cluster:
>     id:     <id>
>     health: HEALTH_WARN
>             Reduced data availability: 256 pgs inactive
>             too few PGs per OSD (28 < min 30)
> 
>   services:
>     mon: 3 daemons, quorum s700041,s700042,s700043 (age 4d)
>     mgr: s700041(active, since 3d), standbys: s700042, s700043
>     osd: 9 osds: 9 up (since 21m), 9 in (since 44m)
> 
>   data:
>     pools:   1 pools, 256 pgs
>     objects: 0 objects, 0 B
> -->usage:   0 B used, 0 B / 0 B avail<-- (emphasis added)
>     pgs:     100.000% pgs unknown
>              256 unknown

The 'unknown' state suggests the mgr daemon isn't reporting things to the 
mon, which is surprising since it's been up for 3 days (this state usually 
lasts for <5 seconds).  Try restarting the mgr?
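A minimal sketch, assuming a systemd-based deployment (the hostname below is 
taken from your 'ceph -s' output; adjust to wherever the active mgr runs):

  # restart the active mgr daemon on its host
  systemctl restart ceph-mgr@s700041

Alternatively, 'ceph mgr fail s700041' forces a failover to one of the 
standbys, which has the same effect of getting a fresh mgr reporting to 
the mons.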

sage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


