Re: ceph-disk list crashes in infernalis

Hi Loic,

Thanks for the quick reply and for filing the issue.

Regards

Forschungszentrum Juelich GmbH
52425 Juelich
Registered office: Juelich
Registered in the Commercial Register of the Local Court of Dueren, No. HR B 3498
Chairman of the Supervisory Board: MinDir Dr. Karl Eugen Huthmacher
Board of Directors: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
Karsten Beneke (Deputy Chairman), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt

-----Original Message-----
From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
Sent: Thursday, 3 December 2015 11:01
To: Stolte, Felix; ceph-users@xxxxxxxx
Subject: Re:  ceph-disk list crashes in infernalis

Hi Felix,

This is a bug; I filed an issue for you at
http://tracker.ceph.com/issues/13970

Cheers

On 03/12/2015 10:56, Stolte, Felix wrote:
> Hi all,
> 
>  
> 
> I upgraded from hammer to infernalis today, and even though I had a hard
> time doing so, I finally got my cluster running in a healthy state (mainly
> my fault, because I did not read the release notes carefully).
> 
> But when I try to list my disks with "ceph-disk list", I get the following
> traceback:
> 
>  
> 
>  ceph-disk list
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 3576, in <module>
>     main(sys.argv[1:])
>   File "/usr/sbin/ceph-disk", line 3532, in main
>     main_catch(args.func, args)
>   File "/usr/sbin/ceph-disk", line 3554, in main_catch
>     func(args)
>   File "/usr/sbin/ceph-disk", line 2915, in main_list
>     devices = list_devices(args)
>   File "/usr/sbin/ceph-disk", line 2855, in list_devices
>     partmap = list_all_partitions(args.path)
>   File "/usr/sbin/ceph-disk", line 545, in list_all_partitions
>     dev_part_list[name] = list_partitions(os.path.join('/dev', name))
>   File "/usr/sbin/ceph-disk", line 550, in list_partitions
>     if is_mpath(dev):
>   File "/usr/sbin/ceph-disk", line 433, in is_mpath
>     uuid = get_dm_uuid(dev)
>   File "/usr/sbin/ceph-disk", line 421, in get_dm_uuid
>     uuid_path = os.path.join(block_path(dev), 'dm', 'uuid')
>   File "/usr/sbin/ceph-disk", line 416, in block_path
>     rdev = os.stat(path).st_rdev
> OSError: [Errno 2] No such file or directory: '/dev/cciss!c0d0'
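[For readers hitting the same crash: the kernel encodes the '/' of a device name as '!' under /sys/block, so the HP Smart Array device cciss/c0d0 appears there as cciss!c0d0, while the actual node is /dev/cciss/c0d0. The traceback shows ceph-disk joining the raw sysfs name onto /dev without undoing that encoding, hence the os.stat failure. A minimal sketch of the missing translation, illustrative only and not necessarily the patch that landed for issue 13970:

```python
import os

def dev_path(sysfs_name):
    """Map a /sys/block entry name to its /dev node.

    The kernel substitutes '!' for '/' in sysfs block device names,
    so 'cciss/c0d0' is listed as 'cciss!c0d0' under /sys/block while
    the device node keeps the slash: /dev/cciss/c0d0.
    """
    return os.path.join('/dev', sysfs_name.replace('!', '/'))
```

With this mapping, os.stat(dev_path('cciss!c0d0')) would hit the real node instead of the nonexistent '/dev/cciss!c0d0'.]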
> 
>  
> 
>  
> 
> I'm running ceph 9.2 on Ubuntu 14.04.3 LTS on HP hardware with an HP P400
> RAID controller: a 4-node cluster (3 of them are mons), 5-6 OSDs per node,
> with journals on a separate drive.
> 
>  
> 
> Does anyone know how to solve this or did I hit a bug?
> 
>  
> 
> Regards Felix
> 
>  
> 
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


