Re: pulled a disk out, Ceph still thinks it's in

When you pull a drive out, what is the status of the daemon?

systemctl status ceph-osd@ID
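
For example, assuming the pulled drive backs osd.3 (substitute the real OSD ID), the recent journal for that unit usually shows the disk errors as well:

journalctl -u ceph-osd@3 --since "1 hour ago"   # osd.3 is just an example ID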

/Maged

On 2018-06-27 21:51, pixelfairy wrote:

Even pulling a few more out didn't show up in the osd tree; I had to actually try to use them. ceph tell osd.N bench works.
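
A minimal sketch of that (assuming osd.3 is one of the pulled disks; adjust the ID):

ceph tell osd.3 bench            # forces the OSD to do real I/O, which should fail once the disk is gone
ceph osd tree | grep osd.3       # check whether it now shows as down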

On Sun, Jun 24, 2018 at 2:23 PM pixelfairy <pixelfairy@xxxxxxxxx> wrote:
15 total, 5 in each node; 14 currently in.
 
Is there another way to know if there's a problem with one? Or to make the threshold higher?

On Sun, Jun 24, 2018 at 2:14 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
How many OSDs do you have? How many of them are currently in?

By default, OSDs are only marked out automatically if more than 75% of the OSDs are in.
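
If I remember the option names right, the knobs behind this are mon_osd_down_out_interval (default 600 seconds, how long a down OSD waits before being marked out) and mon_osd_min_in_ratio (default 0.75, the in ratio below which automatic out is suspended). A rough, untested sketch for adjusting them on Mimic:

ceph config set mon mon_osd_down_out_interval 1800   # wait 30 minutes instead of 10 before auto-out
ceph config set mon mon_osd_min_in_ratio 0.9         # stop auto-out once fewer than 90% of OSDs would remain in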

Paul

> On 24.06.2018 at 23:04, pixelfairy <pixelfairy@xxxxxxxxx> wrote:
>
> Installed Mimic on an empty cluster. I yanked out an OSD about half an hour ago and it's still showing as in with ceph -s, ceph osd stat, and ceph osd tree.
>
> Is the timeout long?
>
> Hosts run Ubuntu 16.04. Ceph was installed using ceph-ansible branch stable-3.1; the playbook didn't make the default rbd pool.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
