ceph osd metadata fails if any osd is down

In 10.2.2, when running "ceph osd metadata" (which defaults to fetching
metadata for all OSDs), if even one OSD is currently marked "down", the
entire command fails and returns an error:

$ ceph osd metadata
Error ENOENT:

- One OSD in the cluster was "down"; after I removed that OSD, the
command ran successfully.
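
(For anyone trying to reproduce: the down OSD is easy to spot in the
tree output, e.g.:

$ ceph osd tree | grep -i down
)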

It seems that the "metadata" command should be able to dump the data for
the OSDs that are up and ignore the ones that are down.  Is this a known
bug?
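
In the meantime, a per-OSD loop seems to avoid the all-or-nothing
failure (just a rough, untested sketch; the "down?" message is only my
own annotation):

$ for id in $(ceph osd ls); do
      ceph osd metadata "$id" || echo "osd.$id: metadata unavailable (down?)"
  done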

-Wyllys Ingersoll
 Keeper Technology, LLC