0B OSDs

All;

We're setting up our second cluster, using version 14.2.4, and we've run into a weird issue: all of our OSDs are created with a size of 0 B.  Weights are appropriate for the size of the underlying drives, but ceph -s shows this:

  cluster:
    id:     <id>
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive
            too few PGs per OSD (28 < min 30)

  services:
    mon: 3 daemons, quorum s700041,s700042,s700043 (age 4d)
    mgr: s700041(active, since 3d), standbys: s700042, s700043
    osd: 9 osds: 9 up (since 21m), 9 in (since 44m)

  data:
    pools:   1 pools, 256 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail   <-- (emphasis added)
    pgs:     100.000% pgs unknown
             256 unknown
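
In case it helps, here's roughly what I can pull for more detail; osd.0 below is just one example OSD ID, and ceph-volume lvm list would be run on one of the OSD hosts rather than through the cluster CLI:

  # Per-OSD size/usage/weight as the cluster sees it
  ceph osd df tree

  # What one OSD daemon reports about its own backing device (size, objectstore, devices)
  ceph osd metadata 0

  # The LVs ceph-volume created for the OSDs (run on an OSD host)
  ceph-volume lvm list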

Thoughts?

I have ceph-volume.log and the log from one of the OSD daemons, though it looks like the auth keys get printed to ceph-volume.log, so I'd want to scrub those before posting (sketch below).
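
Something along these lines should do it, assuming the keys show up as "key = AQ..." entries the way they do in keyring output, and that the log is in the default /var/log/ceph location:

  # Replace cephx secrets with a placeholder before sharing the log
  sed -E 's|(key[ =]+)AQ[A-Za-z0-9+/=]*|\1<redacted>|g' /var/log/ceph/ceph-volume.log > ceph-volume.redacted.log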

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
300 S. Hamilton Pl. 
Gilbert, AZ 85233 
Phone: (480) 610-3500 
Fax: (480) 610-3501 
www.PerformAir.com


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


