Re: Unable to mount cephfs - can't read superblock

On 02/09/2013 12:06 PM, Adam Nielsen wrote:
> Thanks for your quick reply!
>
>> Could you show the output of "ceph -s"
>
> $ ceph -s
>     health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
>     monmap e1: 1 mons at {0=192.168.0.6:6789/0}, election epoch 0, quorum 0 0
>     osdmap e3: 1 osds: 1 up, 1 in
>      pgmap v119: 192 pgs: 192 active+degraded; 0 bytes data, 10204 MB used, 2740 GB / 2750 GB avail
>     mdsmap e1: 0/0/1 up


Ah, I see you only have one OSD, while the default replication level is 2; that is why all 192 PGs are stuck in active+degraded. Also, pools don't work by default if only one replica is left.
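If you just want to experiment on the single OSD, a rough sketch of how you could lower the replication level instead (the pool names assume the default data/metadata/rbd pools that mkcephfs creates):

  $ ceph osd dump | grep 'rep size'     # check the current replication level per pool
  $ ceph osd pool set data size 1
  $ ceph osd pool set metadata size 1
  $ ceph osd pool set rbd size 1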

You'd better add a second OSD, or just run mkcephfs again with a second OSD in the configuration.
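Roughly, the OSD sections in ceph.conf would then look something like this; the hostname and data paths are only placeholders for your own setup:

  [osd.0]
      host = yourhost
      osd data = /var/lib/ceph/osd/ceph-0

  [osd.1]
      host = yourhost
      osd data = /var/lib/ceph/osd/ceph-1

and the cluster gets rebuilt with something like:

  $ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring

Keep in mind that mkcephfs recreates the cluster from scratch, but since yours holds 0 bytes of data that doesn't hurt.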

Just a reminder (it's also in the docs): CephFS is still in beta, so expect weird things to happen. It can't hurt to play with it, though!

P.S.: There's also a new and shiny ceph-users list, launched two days ago; you might want to subscribe there.

Wido

>> Also, which version of Ceph are you using under which OS?
>
> The latest stable Debian release from ceph.com (bobtail AFAIK).
>
> Thanks,
> Adam.




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

