Hi Frederic,

Thanks for the report! Would you mind filing these details as a bug report at http://tracker.ceph.com/ ? I have seen the same thing once before, but at the time I didn't have the chance to check whether the inconsistency was coming from ceph-deploy or from ceph-disk. This certainly seems to point at ceph-deploy!

- Travis

On Wed, Apr 8, 2015 at 4:15 AM, fred@xxxxxxxxxx <fred@xxxxxxxxxx> wrote:
> Hi all,
>
> I want to flag a command we've learned to avoid because of its
> inconsistent results.
>
> On Giant 0.87.1 and Hammer 0.93.0 (ceph-deploy-1.5.22-0.noarch was used
> in both cases), the "ceph-deploy disk list" command has a problem.
>
> We should get an exhaustive list of device entries, like this one:
> ../..
> /dev/sdk :
>  /dev/sdk1 ceph data, active, cluster ceph, osd.34, journal /dev/sda9
> ../..
>
> But from the admin node, when we count how many disks we have on our
> nodes, the results are incorrect and differ each time:
> $ ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l
> 8
> $ ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l
> 12
> $ ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l
> 10
> $ ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l
> 15
> $ ceph-deploy disk list osdnode1 2>&1 | grep "active," | wc -l
> 12
>
> From the nodes themselves, the results are correct (15) and always the
> same:
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
> $ ceph-disk list 2>&1 | grep "active," | wc -l
> 15
>
> However, the very similar 'ceph-deploy osd list' command works fine.
>
> Frederic

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
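The grep/wc counting method from the report can be wrapped in a small loop to check whether the reported OSD count is stable across runs. A minimal sketch, assuming a stub `list_disks` function stands in for the real `ceph-deploy disk list osdnode1` call (node name and the three fake device lines below are illustrative, not from a real cluster):

```shell
#!/bin/sh
# Stub standing in for: ceph-deploy disk list osdnode1 2>&1
# (hypothetical output; replace the body with the real command to test a node)
list_disks() {
    printf '/dev/sdk1 ceph data, active, cluster ceph, osd.34, journal /dev/sda9\n'
    printf '/dev/sdl1 ceph data, active, cluster ceph, osd.35, journal /dev/sda10\n'
    printf '/dev/sdm1 ceph data, active, cluster ceph, osd.36, journal /dev/sda11\n'
}

# grep -c counts matching lines, equivalent to grep ... | wc -l in the report
first=$(list_disks | grep -c 'active,')

# Re-run five more times; a healthy tool should return the same count each time
for i in 1 2 3 4 5; do
    n=$(list_disks | grep -c 'active,')
    if [ "$n" -ne "$first" ]; then
        echo "inconsistent: run $i returned $n, expected $first"
        exit 1
    fi
done
echo "stable: $first active OSDs across 6 runs"
```

With the stub, the count is stable; pointed at `ceph-deploy disk list`, this loop would exit early and print which run diverged, matching the varying counts (8, 12, 10, 15, 12) Frederic observed.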