Looks like you may not have any OSDs properly set up and mounted. It should
look more like:

user at host:~# mount | grep ceph
/dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,inode64)
/dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime,inode64)
/dev/sdd1 on /var/lib/ceph/osd/ceph-2 type xfs (rw,noatime,inode64)

Confirm the OSDs in your Ceph cluster with:

user at host:~# ceph osd tree

- Mike

On 5/21/2014 11:15 AM, Sharmila Govind wrote:
> Hi Mike,
>  Thanks for your quick response. When I try mount on the storage node,
> this is what I get:
>
> root at cephnode4:~# mount
> /dev/sda1 on / type ext4 (rw,errors=remount-ro)
> proc on /proc type proc (rw,noexec,nosuid,nodev)
> sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> none on /sys/fs/fuse/connections type fusectl (rw)
> none on /sys/kernel/debug type debugfs (rw)
> none on /sys/kernel/security type securityfs (rw)
> udev on /dev type devtmpfs (rw,mode=0755)
> devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
> none on /run/shm type tmpfs (rw,nosuid,nodev)
> /dev/sdb on /mnt/CephStorage1 type ext4 (rw)
> /dev/sdc on /mnt/CephStorage2 type ext4 (rw)
> /dev/sda7 on /mnt/Storage type ext4 (rw)
> /dev/sda2 on /boot type ext4 (rw)
> /dev/sda5 on /home type ext4 (rw)
> /dev/sda6 on /mnt/CephStorage type ext4 (rw)
>
> Is there anything wrong in the setup I have? I don't have any 'ceph'
> related mounts.
>
> Thanks,
> Sharmila
>
>
> On Wed, May 21, 2014 at 8:34 PM, Mike Dawson <mike.dawson at cloudapt.com> wrote:
>
>     Perhaps:
>
>     # mount | grep ceph
>
>     - Mike Dawson
>
>
>     On 5/21/2014 11:00 AM, Sharmila Govind wrote:
>
>         Hi,
>          I am new to Ceph. I have a storage node with 2 OSDs. I am trying to
>         figure out which physical device/partition each of the OSDs is
>         attached to. Is there a command that can be executed on the storage
>         node to find this out?
>
>         Thanks in advance,
>         Sharmila
>
>
>         _______________________________________________
>         ceph-users mailing list
>         ceph-users at lists.ceph.com
>         http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
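
[Editorial note] For the original question (mapping each OSD to its backing
device), a minimal sketch in shell, assuming the OSDs use the default data
path /var/lib/ceph/osd/ceph-<id> and are mounted there:

# Print the block device backing each OSD data directory.
# df -P keeps each filesystem on a single line; awk picks the device column.
user at host:~# for osd in /var/lib/ceph/osd/ceph-*; do df -P "$osd" | awk -v d="$osd" 'NR==2 {print d" -> "$1}'; done
/var/lib/ceph/osd/ceph-0 -> /dev/sdb1
/var/lib/ceph/osd/ceph-1 -> /dev/sdc1

(The example output above is illustrative, not taken from this cluster.) If
the Ceph release in use ships the ceph-disk utility, running "ceph-disk list"
on the storage node also reports each data partition together with the OSD it
belongs to. In the mount output quoted above, neither approach would return
anything yet, since no ceph-osd data directories are mounted; the disks still
have to be prepared and activated as OSDs first.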