Re: ceph osd metadata fails if any osd is down

On Wed, Sep 21, 2016 at 6:29 PM, Wyllys Ingersoll
<wyllys.ingersoll@xxxxxxxxxxxxxx> wrote:
> In 10.2.2, running "ceph osd metadata" (which defaults to getting metadata
> for all OSDs) fails and returns an error if even one OSD is currently
> marked "down":
>
> $ ceph osd metadata
> Error ENOENT:
>
> - One OSD in the cluster was "down"; after I removed that OSD, the
> command ran successfully.
>
> It seems that the "metadata" command should be able to dump the data for
> the OSDs that are up and ignore the ones that are down.  Is this a known
> bug?

Probably fixed by
https://github.com/ceph/ceph/commit/f5db5a4b0bb52fed544f277c28ab5088d1c3fc79,
which is in 10.2.3.
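For clusters still on 10.2.2, one possible workaround (a sketch, not an official procedure) is to query metadata one OSD at a time and skip any that error out:

```shell
# Workaround sketch: fetch metadata per OSD id instead of all at once,
# so a down OSD only fails its own query rather than the whole command.
for id in $(ceph osd ls); do
    ceph osd metadata "$id" 2>/dev/null
done
```

This loses the single JSON array that "ceph osd metadata" normally emits, but it still dumps the metadata for every OSD that responds.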

John

>
> -Wyllys Ingersoll
>  Keeper Technology, LLC
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html