Hi all,
I want to flag a command we've learned to avoid because of its
inconsistent results.
On Giant 0.87.1 and Hammer 0.93.0 (with ceph-deploy-1.5.22-0.noarch in
both cases), the "ceph-deploy disk list" command has a problem.
It should return an exhaustive list of device entries, like this one:
../..
/dev/sdk :
/dev/sdk1 ceph data, active, cluster ceph, osd.34, journal /dev/sda9
../..
But from the admin node,
when we count how many disks we have on our nodes, the results are
wrong and differ on each run:
$ ceph-deploy disk list osdnode1 2>&1|grep "active," |wc -l
8
$ ceph-deploy disk list osdnode1 2>&1|grep "active," |wc -l
12
$ ceph-deploy disk list osdnode1 2>&1|grep "active," |wc -l
10
$ ceph-deploy disk list osdnode1 2>&1|grep "active," |wc -l
15
$ ceph-deploy disk list osdnode1 2>&1|grep "active," |wc -l
12
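To reproduce, you can simply loop the same pipeline (osdnode1 is one
of our OSD hosts, adjust to your own node name); each iteration prints
a different count:
$ for i in $(seq 5); do ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l; done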
From the nodes themselves,
the results are correct (15) and always the same:
$ ceph-disk list 2>&1|grep "active," |wc -l
15
$ ceph-disk list 2>&1|grep "active," |wc -l
15
$ ceph-disk list 2>&1|grep "active," |wc -l
15
$ ceph-disk list 2>&1|grep "active," |wc -l
15
$ ceph-disk list 2>&1|grep "active," |wc -l
15
$ ceph-disk list 2>&1|grep "active," |wc -l
15
Note that the fairly similar 'ceph-deploy osd list' command works fine.
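As a workaround sketch, assuming passwordless SSH from the admin node
to the OSD nodes (which ceph-deploy already requires), you can get the
reliable node-side count without leaving the admin node:
$ ssh osdnode1 "ceph-disk list" 2>&1 | grep "active," | wc -l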
Frederic