Re: cephfs Kernel panic

On Tue, 12 Apr 2016 12:21:51 +0200 Simon Ferber wrote:

> Am 12.04.2016 um 12:09 schrieb Florian Haas:
> > On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
> > <ferber@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> >> Thank you! That's it. I have installed the kernel from the Jessie
> >> backports. Now the crashes are gone.
> >> How often do these things happen? It would be a worst-case scenario
> >> if a system update broke a production system.
> > 
> > For what it's worth, what you saw is kernel (i.e. client) side
> > breakage. You didn't mess up your Ceph cluster, nor your CephFS
> > metadata, nor any data. Also, anything you do in CephFS using a
> > release before Jewel must be considered experimental, and while things
> > will generally not break even on the client, you shouldn't be
> > surprised if they do. Thirdly, my recommendation for any Ceph
> > client-side kernel functionality (both rbd.ko and CephFS) would be to
> > use nothing older than a 4.x kernel.
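To illustrate the client side of that advice: checking the running
kernel and doing a kernel CephFS mount looks roughly like the lines
below. The monitor address, mount point and secret file are just
placeholders, not values from this thread.

    # per Florian's recommendation: nothing older than a 4.x kernel
    # for the in-kernel clients (rbd.ko, CephFS)
    uname -r

    # kernel CephFS mount (placeholders: adjust monitor address,
    # mount point and credentials to your own setup)
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret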
> 
> Thank you for the clarification, Florian.
> 
> > 
> > A good update on the current state of CephFS is this tech talk, which
> > John Spray did in February:
> > 
> > https://www.youtube.com/watch?v=GbdHxL0vc9I
> > slideshare.net/JohnSpray1/cephfs-update-february-2016
> > 
> > Also, please don't ever do this:
> > 
> >     cluster 2a028d5e-5708-4fc4-9c0d-3495c1a3ef3d
> >      health HEALTH_OK
> >      monmap e2: 2 mons at
> > {ollie2=129.217.207.207:6789/0,stan2=129.217.207.206:6789/0}
> >             election epoch 12, quorum 0,1 stan2,ollie2
> >      mdsmap e10: 1/1/1 up {0=ollie2=up:active}, 1 up:standby
> >      osdmap e72: 8 osds: 8 up, 8 in
> >             flags sortbitwise
> >       pgmap v137: 428 pgs, 4 pools, 2396 bytes data, 20 objects
> >             281 MB used, 14856 GB / 14856 GB avail
> >                  428 active+clean
> > 
> > 2 mons. Never, and I repeat never, run your Ceph cluster with 2 mons.
> > You want to run 3.
> 
> So if there are only two servers (which used to run DRBD), what would
> be the best solution? Just grab another Linux server and set it up as
> a Ceph cluster node running only a monitor, without any OSDs?
> 
Yes, even an independent VM will do. 
The busiest MON will be the leader, which is always the one with the
lowest IP address, so keep that in mind.
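
If that third box is deployed with ceph-deploy, adding the mon-only
node is roughly the following. "mon3" and its IP are placeholders for
the hypothetical third host, which is assumed to already have the Ceph
packages and the cluster's ceph.conf installed.

    # ceph.conf on all nodes: list all three monitors so daemons
    # and clients can find them
    [global]
        mon_initial_members = stan2, ollie2, mon3
        mon_host = 129.217.207.206, 129.217.207.207, <ip-of-mon3>

    # from the admin/deploy node (assuming ceph-deploy is in use)
    ceph-deploy mon add mon3

    # verify that all three mons have formed a quorum
    ceph quorum_status --format json-pretty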

Christian
> Best
> Simon
> 
> > 
> > Cheers,
> > Florian
> > 
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


