Re: cephfs Kernel panic

On 12.04.2016 at 12:09, Florian Haas wrote:
> On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
> <ferber@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> Thank you! That's it. I have installed the kernel from the Jessie
>> backports. Now the crashes are gone.
>> How often do these things happen? It would be a worst-case scenario if
>> a system update broke a production system.
> 
> For what it's worth, what you saw is kernel-side (i.e. client-side)
> breakage. First, you didn't mess up your Ceph cluster, your CephFS
> metadata, or any data. Second, anything you do in CephFS using a
> release before Jewel must be considered experimental, and while things
> will generally not break even on the client, you shouldn't be
> surprised if they do. Third, for any Ceph client-side kernel
> functionality (both rbd.ko and CephFS), my recommendation is to use
> nothing older than a 4.x kernel.

Thank you for the clarification, Florian.
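
For the archives: the fix boils down to installing the backported
kernel. Here is a rough sketch of the steps, assuming the standard
jessie-backports repository and an amd64 install (adjust to taste):

    # check the running kernel version first
    uname -r

    # enable jessie-backports, if not configured already
    echo "deb http://ftp.debian.org/debian jessie-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt-get update

    # install the backported kernel metapackage, then reboot
    apt-get -t jessie-backports install linux-image-amd64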

> 
> A good update on the current state of CephFS is this tech talk, which
> John Spray did in February:
> 
> https://www.youtube.com/watch?v=GbdHxL0vc9I
> slideshare.net/JohnSpray1/cephfs-update-february-2016
> 
> Also, please don't ever do this:
> 
>     cluster 2a028d5e-5708-4fc4-9c0d-3495c1a3ef3d
>      health HEALTH_OK
>      monmap e2: 2 mons at {ollie2=129.217.207.207:6789/0,stan2=129.217.207.206:6789/0}
>             election epoch 12, quorum 0,1 stan2,ollie2
>      mdsmap e10: 1/1/1 up {0=ollie2=up:active}, 1 up:standby
>      osdmap e72: 8 osds: 8 up, 8 in
>             flags sortbitwise
>       pgmap v137: 428 pgs, 4 pools, 2396 bytes data, 20 objects
>             281 MB used, 14856 GB / 14856 GB avail
>                  428 active+clean
> 
> 2 mons. Never, and I repeat never, run your Ceph cluster with 2 mons.
> You want to run 3: with two mons a majority is still two, so losing
> either monitor costs you quorum and stalls the whole cluster, while
> with three you can lose any single one.

So if there are only two servers (which used to run DRBD), what would
be the best solution? Just grab another Linux server and set it up as
a Ceph cluster node that runs a monitor only, without any OSDs?
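
(Roughly, I imagine something like the following with ceph-deploy,
where the host name mon3 is made up for the sake of the example:

    # from the admin node: install Ceph on the new host, then add a mon
    ceph-deploy install mon3
    ceph-deploy mon add mon3

    # check that all three mons have formed a quorum
    ceph quorum_status --format json-pretty

As far as I understand, the mon daemon is lightweight, so a small
machine should do.)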

Best
Simon

> 
> Cheers,
> Florian
> 


-- 
Simon Ferber
Technician

Technische Universität Dortmund
Fakultät Statistik
Vogelpothsweg 87
44227 Dortmund

Tel.: +49 231-755 3188
Fax: +49 231-755 5305
simon.ferber@xxxxxxxxxxxxxx
www.tu-dortmund.de



Important note: The information included in this e-mail is confidential.
It is solely intended for the recipient. If you are not the intended
recipient of this e-mail please contact the sender and delete this
message. Thank you.
Without prejudice of e-mail correspondence, our statements are only
legally binding when they are made in the conventional written form
(with personal signature) or when such documents are sent by fax.