Re: ceph journal failed?


 



Hello,

On Wed, 23 Dec 2015 11:46:58 +0800 yuyang wrote:

> ok, You give me the answer, thanks a lot.
>

Assume that a journal SSD failure means the loss of all associated OSDs.

So in your case a single SSD failure will cause the loss of all data on a
whole node.

If you have 15 or more of those nodes, your cluster should be able to
handle the resulting I/O storm from recovering 9 OSDs, but with just a few
nodes you will have a severe performance impact and also risk data loss if
other failures occur during recovery.
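
To put rough numbers on that recovery burden (purely illustrative; the 9
OSDs per node is from your mail, the 4TB drive size and 60% fill level are
just assumptions of mine):

osds_lost = 9          # all OSDs behind the one failed journal SSD
osd_size_tb = 4.0      # assumed drive size
fill_ratio = 0.6       # assumed utilisation

data_to_rereplicate_tb = osds_lost * osd_size_tb * fill_ratio

for total_nodes in (5, 15):
    surviving_nodes = total_nodes - 1
    per_node_tb = data_to_rereplicate_tb / surviving_nodes
    print("%2d nodes: ~%.1f TB of backfill per surviving node"
          % (total_nodes, per_node_tb))

With only a handful of nodes the same spinners that serve your clients
also have to swallow several TB of backfill each, which is where the
performance impact comes from.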

Lastly, a 1:9 SSD journal to SATA ratio also sounds wrong when it comes to
performance: your SSD would need to be able to handle about 900MB/s of sync
writes, and that's very expensive territory.
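
That figure is simply the combined write rate of the spinners behind the
journal; the ~100MB/s per SATA disk below is an assumption of mine, not a
measurement:

sata_disks_per_journal = 9
per_disk_write_mb_s = 100   # assumed sustained write rate per spinner

# every client write hits the journal (sync/O_DSYNC) before the data disk,
# so the SSD has to absorb the sum of all nine disks
required_mb_s = sata_disks_per_journal * per_disk_write_mb_s
print("journal SSD needs roughly %d MB/s of sync writes" % required_mb_s)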

Christian
 
> But, I don't know the answer to your questions.
> 
> Maybe someone else can answer.
> 
> ------------------ Original ------------------
> From: "Loris Cuoghi" <lc@xxxxxxxxxxxxxxxxx>
> Date: Tue, Dec 22, 2015 07:31 PM
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re: ceph journal failed?
> 
> On 22/12/2015 09:42, yuyang wrote:
> > Hello, everyone,
> [snip snap]
> 
> Hi
> 
> > If the SSD fails or goes down, can the OSD still work?
> > Is the OSD down, or can it only be read?
> 
> If you don't have a journal anymore, the OSD has already quit, as it
> can't continue writing, nor can it assure data consistency, since writes
> have probably been interrupted.
> 
> The Ceph community's general assumption for a dead journal is a dead
> OSD.
> 
> But.
> 
> http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
> 
> How does this apply in reality?
> Is the solution that Sébastien is proposing viable?
> In most/all cases?
> Will the OSD continue chugging along after this kind of surgery?
> Is it necessary/suggested to deep scrub the OSD's placement groups ASAP?
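
For completeness, the procedure in Sébastien's post essentially boils down
to pointing the OSD at a fresh journal, recreating it and scrubbing
afterwards. Here is a rough, untested sketch of those steps; the OSD ids,
the systemd unit names and the assumption that a replacement journal
partition/symlink is already in place are all mine, not from the post:

import subprocess

osd_ids = [0, 1, 2]   # hypothetical: the OSDs that shared the dead SSD

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["ceph", "osd", "set", "noout"])            # don't rebalance while down

for osd in osd_ids:
    run(["systemctl", "stop", "ceph-osd@%d" % osd])   # adjust for your init
    # recreate an empty journal on the replacement device; anything that
    # only lived in the dead journal is gone, hence the deep scrub below
    run(["ceph-osd", "-i", str(osd), "--mkjournal"])
    run(["systemctl", "start", "ceph-osd@%d" % osd])
    run(["ceph", "osd", "deep-scrub", "osd.%d" % osd])

run(["ceph", "osd", "unset", "noout"])

Whether the OSD keeps chugging along afterwards is exactly your question;
a prompt deep scrub of its PGs is the cheapest way to find out what was
lost with the journal.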


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



