Hopefully I am not late to the party :) But ceph-deploy recently gained
an `osd list` subcommand that does this and reports a bunch of other
interesting metadata as well:

$ ceph-deploy osd list node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /Users/alfredo/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.2): /Users/alfredo/.virtualenvs/ceph-deploy/bin/ceph-deploy osd list node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][INFO  ] Running command: sudo ceph --cluster=ceph osd tree --format=json
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][INFO  ] Running command: sudo ceph-disk list
[node1][INFO  ] ----------------------------------------
[node1][INFO  ] ceph-0
[node1][INFO  ] ----------------------------------------
[node1][INFO  ] Path           /var/lib/ceph/osd/ceph-0
[node1][INFO  ] ID             0
[node1][INFO  ] Name           osd.0
[node1][INFO  ] Status         up
[node1][INFO  ] Reweight       1.000000
[node1][INFO  ] Magic          ceph osd volume v026
[node1][INFO  ] Journal_uuid   214a6865-416b-4c09-b031-a354d4f8bdff
[node1][INFO  ] Active         ok
[node1][INFO  ] Device         /dev/sdb1
[node1][INFO  ] Whoami         0
[node1][INFO  ] Journal path   /dev/sdb2
[node1][INFO  ] ----------------------------------------

On Thu, May 22, 2014 at 8:30 AM, John Spray <john.spray at inktank.com> wrote:
> On Thu, May 22, 2014 at 10:57 AM, Sharmila Govind
> <sharmilagovind at gmail.com> wrote:
>> root at cephnode4:/mnt/ceph/osd2# mount |grep ceph
>> /dev/sdc on /mnt/ceph/osd3 type ext4 (rw)
>> /dev/sdb on /mnt/ceph/osd2 type ext4 (rw)
>>
>> All the above commands just point out the mount points (/mnt/ceph/osd3);
>> the folders were named "ceph/osd" by me. But if a new user has to get
>> the OSD-to-device mapping, it would be difficult if the OSD disk folders
>> were named differently. Any other command that could give the mapping
>> would be useful.
>
> It really depends on how you have set up the OSDs. If you're using
> ceph-deploy or ceph-disk to partition and format the drives, they get
> a special partition type set which marks them as a Ceph OSD. On a
> system set up that way, you get nice uniform output like this:
>
> # ceph-disk list
> /dev/sda :
>  /dev/sda1 other, ext4, mounted on /boot
>  /dev/sda2 other, LVM2_member
> /dev/sdb :
>  /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
>  /dev/sdb2 ceph journal, for /dev/sdb1
> /dev/sdc :
>  /dev/sdc1 ceph data, active, cluster ceph, osd.3, journal /dev/sdc2
>  /dev/sdc2 ceph journal, for /dev/sdc1
>
> John
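
A note on the "special partition type" John mentions: ceph-disk stamps a
GPT partition type GUID on the partitions it creates (4fbd7e29-9d25-41b8-afd0-062c0ceff05d
for data, 45b0969e-9b03-4f30-b4c6-b4b80ceff106 for journals), and that is
what lets `ceph-disk list` recognize OSDs with no naming convention at all.
If you want to check a partition by hand, something like this should work,
assuming GPT-partitioned disks and sgdisk from the gdisk package (the exact
output format varies with the gdisk version):

# sgdisk --info=1 /dev/sdb | grep 'GUID code'
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)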
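
For OSDs that were set up by hand on arbitrary mount points (as in
Sharmila's case), the partition type trick doesn't apply, but each OSD data
directory contains a `whoami` file recording that OSD's ID, so you can
still recover the mapping. A rough sketch, assuming the data directories
are the mounted ones shown by `mount |grep ceph`:

for dir in /mnt/ceph/osd*; do
    id=$(cat "$dir/whoami")                       # each OSD data dir records its ID here
    dev=$(df -P "$dir" | awk 'NR==2 {print $1}')  # block device backing the mount
    echo "osd.$id is on $dev (mounted at $dir)"
done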
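
Also worth noting from the log above: `ceph-deploy osd list` builds its
report out of `ceph osd tree --format=json` plus `ceph-disk list`, so if
you only want the ID/name/status part you can query the JSON yourself. A
minimal sketch, assuming you have jq installed and the tree output keeps
its usual `nodes` layout:

$ ceph --cluster=ceph osd tree --format=json | \
    jq '.nodes[] | select(.type == "osd") | {id, name, status, reweight}'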