Re: PGs stuck active+remapped and osds lose data?!

Looking at ``ceph -s`` you originally provided, all OSDs are up.

> osdmap e3114: 9 osds: 9 up, 9 in; 4 remapped pgs

But looking at the ``pg query`` output below, OSD.0, 1 and 2 appear in the
acting sets but not in the up sets. Could that be related to this?:

> Ceph1, ceph2 and ceph3 are vms on one physical host

Are the OSDs in question running on those VM instances?

# 9.7
 <snip>
>    "state": "active+remapped",
>    "snap_trimq": "[]",
>    "epoch": 3114,
>    "up": [
>        7,
>        3
>    ],
>    "acting": [
>        7,
>        3,
>        0
>    ],
 <snip>

# 7.84
 <snip>
>    "state": "active+remapped",
>    "snap_trimq": "[]",
>    "epoch": 3114,
>    "up": [
>        4,
>        8
>    ],
>    "acting": [
>        4,
>        8,
>        1
>    ],
 <snip>

# 8.1b
 <snip>
>    "state": "active+remapped",
>    "snap_trimq": "[]",
>    "epoch": 3114,
>    "up": [
>        4,
>        7
>    ],
>    "acting": [
>        4,
>        7,
>        2
>    ],
 <snip>

# 7.7a
 <snip>
>    "state": "active+remapped",
>    "snap_trimq": "[]",
>    "epoch": 3114,
>    "up": [
>        7,
>        4
>    ],
>    "acting": [
>        7,
>        4,
>        2
>    ],
 <snip>
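The pattern across the four excerpts above can be checked mechanically: in each case the extra OSD sits in ``acting`` but not in ``up``. A minimal Python sketch, using only the values quoted in this thread, to list the stray OSDs per PG:

```python
# up/acting sets copied from the four ``pg query`` excerpts above
pgs = {
    "9.7":  {"up": [7, 3], "acting": [7, 3, 0]},
    "7.84": {"up": [4, 8], "acting": [4, 8, 1]},
    "8.1b": {"up": [4, 7], "acting": [4, 7, 2]},
    "7.7a": {"up": [7, 4], "acting": [7, 4, 2]},
}

for pgid, sets in pgs.items():
    # OSDs still serving the PG (acting) that CRUSH no longer maps it to (up)
    stray = sorted(set(sets["acting"]) - set(sets["up"]))
    print(pgid, "acting-but-not-up:", stray)
```

Running this over the quoted data shows exactly OSD.0, 1 and 2 as the stray members, i.e. the three OSDs that correspond to the VMs on the single physical host.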
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
