Re: combined ceph roles


Hi David,

 

I have had a few weird issues when shutting down a node, and I can replicate them by doing a “stop ceph-all” as well. OSD failure detection seems to take a lot longer when a monitor goes down at the same time; sometimes I have seen the whole cluster grind to a halt for several minutes before it works out what's happened.

 

If I stop either role, wait for it to be detected as failed, and then stop the other, I don't see the problem. So it might be something to keep in mind when doing maintenance.
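A maintenance sequence along those lines might look like the following sketch, assuming an Upstart-based deployment (as implied by “stop ceph-all” above); the commands are standard ceph/Upstart ones, but run them from a surviving node and adapt to your init system:

```shell
# Optionally stop CRUSH from rebalancing while the node is down
ceph osd set noout

# Stop the monitor first, on the node being serviced
sudo stop ceph-mon-all

# From another node, wait until the remaining mons re-form quorum
ceph quorum_status

# Only then stop the OSDs on that node
sudo stop ceph-osd-all

# Wait until its OSDs are reported down before doing anything else
ceph osd tree
ceph -s

# After maintenance, clear the flag
ceph osd unset noout
```

Stopping the roles in two steps, with a check in between, avoids the situation described above where the cluster loses a monitor and a set of OSDs simultaneously.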

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of David Graham
Sent: 10 February 2015 17:07
To: ceph-users@xxxxxxxx
Subject: combined ceph roles

 

Hello, I'm giving thought to a minimal-footprint scenario with full redundancy. I realize it isn't ideal, and may impact overall performance, but I'm wondering whether the example below would work, be supported, or be known to cause issues.

Example, 3x hosts each running:
-- OSDs
-- Mon
-- Client
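Three monitors is the usual minimum for a quorum that survives one host failure. A minimal ceph.conf sketch for such a layout might look like the following; the fsid, hostnames, and addresses are placeholders, not values from this thread:

```ini
[global]
fsid = <your-cluster-uuid>              ; placeholder, from your own deployment
mon initial members = host1, host2, host3
mon host = 192.168.0.1, 192.168.0.2, 192.168.0.3

; with 3 hosts, keep 3 replicas spread across hosts,
; and stay writable with one host down
osd pool default size = 3
osd pool default min size = 2
```

This only covers the mon/OSD side; the client question below is separate.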

I thought I read a post a while back about running a client and an OSD on the same host possibly being an issue, but I am having difficulty finding that reference.

I would appreciate it if anyone has insight into such a setup.

thanks!






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
