Re: osds udev rules not triggered on reboot (jewel, jessie)

Hi,
This is fixed in master and in the coming 10.2.3 release.

You can test this new udev rule:

https://github.com/ceph/ceph/blob/master/udev/60-ceph-by-parttypeuuid.rules
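For reference, the rule in that file is roughly the following (quoted from memory, so check the link for the authoritative version):

# populate /dev/disk/by-parttypeuuid, which ceph-disk activation relies on
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_PART_ENTRY_TYPE}=="?*", ENV{ID_PART_ENTRY_UUID}=="?*", SYMLINK+="disk/by-parttypeuuid/$env{ID_PART_ENTRY_TYPE}.$env{ID_PART_ENTRY_UUID}"

Drop it into /etc/udev/rules.d/ and reload the udev rules if you want to test it before 10.2.3 lands.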

----- Original Message -----
From: "Antoine Mahul" <antoine.mahul@xxxxxxxxx>
To: "SCHAER Frederic" <frederic.schaer@xxxxxx>
Cc: "Loic Dachary" <loic@xxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Friday, August 26, 2016 17:22:51
Subject: Re: osds udev rules not triggered on reboot (jewel, jessie)

Hi, 

We have the same issue on CentOS 7.2.1511 and Ceph 10.2.2: sometimes the ceph-disk@ services are not started and the OSD daemons fail to come up. 
With udev in debug mode, we observed that the udev triggers are fired but fail because /var (on LVM) is not ready yet. 

In ceph-disk, the setup_statedir function is called unconditionally, but it is not needed for the first call of main_trigger, which only starts the systemd service ceph-disk@{dev}. 
I'm not sure, but a solution here could be to call setup_statedir and setup_sysconfigdir only when necessary, and to start the ceph-disk@{dev} services after local-fs.target. The udev debug log shows the failure: 
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) 'Traceback (most recent call last):'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '  File "/usr/sbin/ceph-disk", line 9, in <module>'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4994, in run'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '    main(sys.argv[1:])'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4936, in main'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '    setup_statedir(args.statedir)'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4396, in setup_statedir'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) '    os.mkdir(STATEDIR)'
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1'(err) 'OSError: [Errno 2] No such file or directory: '/var/lib/ceph''
Aug 25 15:58:42 cephnode03 systemd-udevd[984]: '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdf1' [1103] exit with return code 1 
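
A minimal sketch of that ordering idea as a systemd drop-in (path, file name and directives are my guess, untested):

# /etc/systemd/system/ceph-disk@.service.d/after-local-fs.conf
[Unit]
# wait for the mounts that provide /var/lib/ceph before triggering
RequiresMountsFor=/var/lib/ceph
After=local-fs.target

After creating it, "systemctl daemon-reload" makes systemd pick the drop-in up.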
Regards, 

Antoine. 





2016-06-24 17:24 GMT+02:00 SCHAER Frederic <frederic.schaer@xxxxxx>: 


Hi, 

I'm facing the same thing after I reinstalled a node directly in jewel... 

Reading http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/31917 
I can confirm that running "udevadm trigger -c add -s block" fires the udev rules and brings ceph-osd up. 
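
A quick way to check whether the rules fired at boot (a minimal sketch; /dev/disk/by-parttypeuuid is the directory the jewel rules are supposed to populate):

# the ceph udev rules should have created symlinks here
ls -l /dev/disk/by-parttypeuuid/
# if the directory is missing or empty, re-fire the add events
udevadm trigger -c add -s block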

The thing is: I now have freshly reinstalled boxes (CentOS 7.2.1511) which do not fire the udev rules at boot and get no /dev/disk/by-parttypeuuid (and I fear there are none right after installing the ceph RPMs either, since the udev rules did not pre-exist), while other boxes with the exact same setup, hardware and partitions, which were upgraded from previous Ceph versions, do seem to work correctly, or so I think. 
All with the root filesystem on LVM... 

I'll try to compare the two kinds of hosts to see if I can find something useful... 

Regards 


-----Original Message----- 
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of stephane.davy@xxxxxxxxxx 
Sent: Friday, June 24, 2016 12:10 
To: Loic Dachary <loic@xxxxxxxxxxx> 
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx> 
Subject: Re: osds udev rules not triggered on reboot (jewel, jessie) 

Hi Loïc, 

Sorry for the delay. Well, it's a vanilla CentOS ISO image downloaded from a centos.org mirror: 
[root@hulk-stg ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 

This issue happens after a Ceph upgrade from hammer; I haven't tested this distro starting from a fresh Ceph install. 

Thanks, 

Stéphane 

-----Original Message----- 
From: Loic Dachary [mailto:loic@xxxxxxxxxxx] 
Sent: Tuesday, June 21, 2016 14:48 
To: DAVY Stephane OBS/OCB 
Cc: ceph-users 
Subject: Re:  osds udev rules not triggered on reboot (jewel, jessie) 



On 16/06/2016 18:01, stephane.davy@xxxxxxxxxx wrote: 
> Hi, 
> 
> Same issue with CentOS 7; I also put this file back in /etc/udev/rules.d. 

Hi Stephane, 

Could you please detail which version of CentOS 7 you are using? I tried to reproduce the problem with CentOS 7.2 as found on the CentOS cloud images repository ( http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1511.qcow2 ) but it "works for me". 

Thanks! 

> 
> -----Original Message----- 
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alexandre DERUMIER 
> Sent: Thursday, June 16, 2016 17:53 
> To: Karsten Heymann; Loris Cuoghi 
> Cc: Loic Dachary; ceph-users 
> Subject: Re: osds udev rules not triggered on reboot (jewel, jessie) 
> 
> Hi, 
> 
> I have the same problem with OSD disks not mounted at boot on jessie 
> with Ceph Jewel. 
> 
> A workaround is to re-add the 60-ceph-partuuid-workaround.rules file to udev: 
> 
> http://tracker.ceph.com/issues/16351 
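> 
> A minimal sketch of that workaround (rules directory as mentioned earlier in the thread; where you fetch the old rule file from is up to you): 
> 
> # put the removed rule file back and re-run the device events 
> cp 60-ceph-partuuid-workaround.rules /etc/udev/rules.d/ 
> udevadm control --reload-rules 
> udevadm trigger -c add -s block 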
> 
> 
> ----- Original Message ----- 
> From: "aderumier" <aderumier@xxxxxxxxx> 
> To: "Karsten Heymann" <karsten.heymann@xxxxxxxxx>, "Loris Cuoghi" <lc@xxxxxxxxxxxxxxxxx> 
> Cc: "Loic Dachary" <loic@xxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx> 
> Sent: Thursday, April 28, 2016 07:42:04 
> Subject: Re: osds udev rules not triggered on reboot (jewel, jessie) 
> 
> Hi, 
> there are missing target files in the Debian packages: 
> 
> http://tracker.ceph.com/issues/15573 
> https://github.com/ceph/ceph/pull/8700 
> 
> I have also opened some other tracker issues about packaging bugs: 
> 
> jewel: debian package: wrong /etc/default/ceph/ceph location 
> http://tracker.ceph.com/issues/15587 
> 
> debian/ubuntu : TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES not specified in 
> /etc/default/cep 
> http://tracker.ceph.com/issues/15588 
> 
> jewel: debian package: init.d script bug 
> http://tracker.ceph.com/issues/15585 
> 
> 
> @CC Loic Dachary, maybe he could help speed up the packaging fixes. 
> 
> ----- Original Message ----- 
> From: "Karsten Heymann" <karsten.heymann@xxxxxxxxx> 
> To: "Loris Cuoghi" <lc@xxxxxxxxxxxxxxxxx> 
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx> 
> Sent: Wednesday, April 27, 2016 15:20:29 
> Subject: Re: osds udev rules not triggered on reboot (jewel, jessie) 
> 
> 2016-04-27 15:18 GMT+02:00 Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx>: 
>> On 27/04/2016 14:45, Karsten Heymann wrote: 
>>> one workaround I found was to add 
>>> 
>>> [Install] 
>>> WantedBy=ceph-osd.target 
>>> 
>>> to /lib/systemd/system/ceph-disk@.service and then manually enable 
>>> my disks with 
>>> 
>>> # systemctl enable ceph-disk\@dev-sdi1 
>>> # systemctl start ceph-disk\@dev-sdi1 
>>> 
>>> That way they are at least started at boot time. 
> 
>> Great! But only if the disks keep their device names, right? 
> 
> Exactly. It's just a little workaround until the real issue is fixed. 
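> 
> A side note on those instance names, as a sketch (the by-partuuid variant is only an idea and untested): 
> 
> # systemd escapes "/" when building instance names from device paths 
> systemd-escape --path /dev/sdi1 
> # prints "dev-sdi1", i.e. the instance used in ceph-disk@dev-sdi1 above 
> # a stable name could in principle be escaped the same way, 
> # but whether ceph-disk@ accepts it is untested: 
> systemd-escape --path /dev/disk/by-partuuid/<uuid> 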
> 
> +Karsten 
> 

-- 
Loïc Dachary, Artisan Logiciel Libre 







_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



