On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
<ferber@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Thank you! That's it. I have installed the kernel from the Jessie
> backports. Now the crashes are gone.
> How often do these things happen? It would be a worst-case scenario
> if a system update broke a production system.

For what it's worth, what you saw was kernel-side (i.e. client-side)
breakage. You didn't mess up your Ceph cluster, your CephFS metadata,
or any of your data.

Second, anything you do in CephFS on a release older than Jewel must
be considered experimental. Things will generally not break, even on
the client, but you shouldn't be surprised if they do.

Third, for any Ceph client-side kernel functionality (both rbd.ko and
CephFS), my recommendation is to run nothing older than a 4.x kernel.

A good update on the current state of CephFS is the tech talk John
Spray gave in February:

https://www.youtube.com/watch?v=GbdHxL0vc9I
slideshare.net/JohnSpray1/cephfs-update-february-2016

Also, please don't ever do this:

    cluster 2a028d5e-5708-4fc4-9c0d-3495c1a3ef3d
     health HEALTH_OK
     monmap e2: 2 mons at {ollie2=129.217.207.207:6789/0,stan2=129.217.207.206:6789/0}
            election epoch 12, quorum 0,1 stan2,ollie2
     mdsmap e10: 1/1/1 up {0=ollie2=up:active}, 1 up:standby
     osdmap e72: 8 osds: 8 up, 8 in
            flags sortbitwise
      pgmap v137: 428 pgs, 4 pools, 2396 bytes data, 20 objects
            281 MB used, 14856 GB / 14856 GB avail
                 428 active+clean

2 mons. Never, and I repeat never, run your Ceph cluster with 2 mons.
You want to run 3. With two mons the quorum majority is two, so
losing either mon (or the link between them) stalls the whole
cluster; two mons actually give you *less* availability than one,
while three keep a majority through any single failure.

Cheers,
Florian
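
P.S. Since you're on Jessie, here is roughly what the kernel check
and backports upgrade look like. This is just a sketch: it assumes
the jessie-backports line is already in your APT sources, and the
repo line in the comment is the stock one, so substitute your local
mirror as needed.

    # Check the running kernel; for rbd.ko and the CephFS kernel
    # client you want 4.x or newer.
    uname -r

    # Pull a 4.x kernel from jessie-backports. If backports isn't
    # enabled yet, add this line to /etc/apt/sources.list first:
    #   deb http://ftp.debian.org/debian jessie-backports main
    apt-get update
    apt-get install -t jessie-backports linux-image-amd64

    # Reboot into the new kernel, then verify with uname -r again.
    reboot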
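As for the mons: if you deployed with ceph-deploy, adding a third one
is only a couple of commands. "kyle2" below is just a placeholder
hostname; any host you manage with ceph-deploy and that can reach the
cluster network will do.

    # Inspect the current monitor map and quorum.
    ceph mon stat
    ceph quorum_status --format json-pretty

    # Add a monitor on a third host (placeholder hostname).
    ceph-deploy mon add kyle2

    # Afterwards you should see 3 mons and quorum 0,1,2.
    ceph -s

And keep the mon count odd as you grow (3, then 5): an even count
only adds failure modes without adding failure tolerance.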