Re: ceph-volume fails in all recent releases with IndexError

Hi,

I don't use Rook, and I haven't seen this issue yet in any of my test clusters (from Octopus to Reef). Although I don't redeploy OSDs all the time, I do set up fresh (single-node) clusters once or twice a week with different releases, without any ceph-volume issues. Just to confirm, I recreated one cluster with 3 OSDs and a separate RocksDB. Maybe it's a Rook issue?

Regards,
Eugen


Quoting Nico Schottelius <nico.schottelius@xxxxxxxxxxx>:

Hello dear fellow ceph users,

it seems that for some months now, all current Ceph releases (16.x, 17.x,
18.x) have had a bug in ceph-volume that causes disk activation to fail
with the error "IndexError: list index out of range" (details below, [0]).

It also seems a fix is already available [1], but that it has not yet been
merged into any official release [2,3,4].

This has started to affect more and more nodes in our clusters, so I was
wondering whether others are seeing this issue as well, and whether anyone
knows if a new release containing this fix is planned soon?

Best regards,

Nico

--------------------------------------------------------------------------------
[0]
kubectl -n rook-ceph logs -c activate rook-ceph-osd-30-6558b7cf69-5cbbl
+ OSD_ID=30
+ CEPH_FSID=bd3061a0-ecf3-4af6-9017-51b63c90b526
+ OSD_UUID=319e5756-318c-46a0-b7e9-429e39069302
+ OSD_STORE_FLAG=--bluestore
+ OSD_DATA_DIR=/var/lib/ceph/osd/ceph-30
+ CV_MODE=raw
+ DEVICE=/dev/sdf
+ cp --no-preserve=mode /etc/temp-ceph/ceph.conf /etc/ceph/ceph.conf
+ python3 -c '
import configparser

config = configparser.ConfigParser()
config.read('\''/etc/ceph/ceph.conf'\'')

if not config.has_section('\''global'\''):
    config['\''global'\''] = {}

if not config.has_option('\''global'\'','\''fsid'\''):
    config['\''global'\'']['\''fsid'\''] = '\''....\''

with open('\''/etc/ceph/ceph.conf'\'', '\''w'\'') as configfile:
    config.write(configfile)
'
+ ceph -n client.admin auth get-or-create osd.30 mon 'allow profile osd' mgr 'allow profile osd' osd 'allow *' -k /etc/ceph/admin-keyring-store/keyring
[osd.30]
    key = ...
+ [[ raw == \l\v\m ]]
++ mktemp
+ OSD_LIST=/tmp/tmp.OpZRJJOcrX
+ ceph-volume raw list /dev/sdf
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line 166, in main
    self.list(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line 122, in list
    report = self.generate(args.device)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line 91, in generate
    info_device = [info for info in info_devices if info['NAME'] == dev][0]
IndexError: list index out of range
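
For reference, the crash comes from the last line of the traceback: raw/list.py
collects lsblk entries and assumes the resulting list always contains one whose
NAME matches the device, so indexing [0] on an empty result raises IndexError.
Below is a minimal standalone sketch of that failure mode and of a defensive
guard; the helper names are made up for illustration, and the actual upstream
fix in [1] may differ in detail.

# Sketch of the failing pattern from ceph_volume/devices/raw/list.py
# (generate()), plus a guarded variant. Hypothetical helper names; the
# real fix in PR #49954 may look different.

def pick_device_info(info_devices, dev):
    # Original behaviour: assumes lsblk always reports an entry whose
    # NAME equals `dev`; raises IndexError when no entry matches.
    return [info for info in info_devices if info['NAME'] == dev][0]

def pick_device_info_guarded(info_devices, dev):
    # Guarded variant: return None so the caller can skip the device
    # instead of crashing on an empty match list.
    matches = [info for info in info_devices if info['NAME'] == dev]
    return matches[0] if matches else None

if __name__ == '__main__':
    # /dev/sdf is missing from this made-up lsblk output, so the
    # original pattern raises the same IndexError as in the log above.
    info_devices = [{'NAME': '/dev/sda'}, {'NAME': '/dev/sdb'}]
    print(pick_device_info_guarded(info_devices, '/dev/sdf'))  # -> None
    try:
        pick_device_info(info_devices, '/dev/sdf')
    except IndexError as exc:
        print('IndexError:', exc)  # -> list index out of range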

[1] https://github.com/ceph/ceph/pull/49954
[2] https://github.com/ceph/ceph/pull/54705
[3] https://github.com/ceph/ceph/pull/54706
[4] https://github.com/ceph/ceph/pull/54707

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

