Hi,

To clarify, I didn't notice this issue in 0.94.6 specifically... I just
don't trust the udev magic to work every time after every kernel
upgrade, etc.

-- Dan

On Mon, Mar 7, 2016 at 10:20 AM, Martin Palma <martin@xxxxxxxx> wrote:
> Hi Dan,
>
> thanks for the quick reply and fix suggestion. So we are not the only
> ones facing this issue :-)
>
> Best,
> Martin
>
> On Mon, Mar 7, 2016 at 10:04 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> As a workaround you can add "ceph-disk activate-all" to rc.local.
>> (We use this all the time anyway, just in case...)
>>
>> -- Dan
>>
>> On Mon, Mar 7, 2016 at 9:38 AM, Martin Palma <martin@xxxxxxxx> wrote:
>>> Hi all,
>>>
>>> we are in the middle of patching our OSD servers and noticed that
>>> after rebooting no OSD disk is mounted and therefore no OSD service
>>> starts.
>>>
>>> We then have to manually call "ceph-disk-activate /dev/sdX1" for all
>>> our disks in order to mount them and start the OSD services again.
>>>
>>> Here are the versions we were running before and after the update:
>>>
>>> OS: CentOS 7.1 Core --> CentOS 7.2 Core
>>> Ceph: Hammer 0.94.3 --> Hammer 0.94.6
>>>
>>> Any suggestions?
>>>
>>> We found this issue from two years ago: http://tracker.ceph.com/issues/5194
>>> We tested several udev rules linked in that issue, but nothing changes
>>> after a reboot: no OSD disk gets mounted.
>>>
>>> Best,
>>> Martin
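
A minimal sketch of the rc.local workaround Dan describes above. The
"ceph-disk activate-all" command is from his message; the file path and
the chmod step are assumptions for a stock CentOS 7 install (rc.local is
only run by rc-local.service if it is executable), so adjust as needed:

    # /etc/rc.d/rc.local -- runs once at the end of boot
    # Re-activate any prepared Ceph OSD partitions that udev did not mount.
    ceph-disk activate-all

    # make sure rc.local is actually executed at boot:
    chmod +x /etc/rc.d/rc.local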
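
And a rough sketch of scripting the manual per-partition activation from
Martin's message, rather than typing it once per device. The device list
is purely a placeholder; substitute your actual OSD data partitions:

    # Hypothetical example: activate OSD data partitions /dev/sdb1 .. /dev/sdh1
    for part in /dev/sd{b..h}1; do
        ceph-disk activate "$part"
    done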