Re: Unable to mount cephfs - can't read superblock

> Ah, I see you only have one OSD, where the default replication level is 2.
> Also, pools don't work by default if only one replica is left.
>
> You'd better add a second OSD, or just run mkcephfs again with a second OSD
> in the configuration.
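A minimal two-OSD configuration along those lines might look like the fragment below. This is only an illustrative sketch: the hostname and data paths are placeholders, not taken from this thread.

```ini
; /etc/ceph/ceph.conf -- illustrative fragment only; the hostname and
; data path are placeholder assumptions, not from this thread
[osd]
    ; per-OSD data directories, expanded per OSD id
    osd data = /var/lib/ceph/osd/ceph-$id

[osd.0]
    host = node1

[osd.1]
    host = node1
```

With both OSDs in the config, `mkcephfs -a -c /etc/ceph/ceph.conf` would rebuild the cluster with two OSDs; note that mkcephfs recreates the cluster from scratch, so any existing data is lost.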

Ah, OK. From my earlier post, I think I can add the second OSD on the same disk, since it is mounted on /var/lib/ceph/. Are there likely to be any problems with this (i.e. having two OSDs on the same node) going into production? I will eventually have a second node, but I know having only two isn't ideal.
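If rebuilding the cluster isn't desirable, another option is to lower the replication level of the pools so a single OSD suffices for testing. A sketch, assuming the default pool names of that era (the pool names are an assumption, not from the thread):

```shell
# Illustrative only: drop the default pools to a single replica so
# writes can complete with one OSD. Not something to carry into
# production, where at least two replicas are expected.
ceph osd pool set data size 1
ceph osd pool set metadata size 1
ceph osd pool set rbd size 1
```

These commands act on a live cluster, so they need working monitors; they don't require re-running mkcephfs.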

> Just a reminder, it's also in the docs, but CephFS is still in beta, so
> expect weird things to happen. It can't hurt to play with it, though!

Thanks for the reminder - but I only plan to use CephFS to store backup copies of things until it becomes stable, so hopefully I can give it some testing and survive if it breaks :-)

> P.S.: There's also a new and shiny ceph-users list since two days ago;
> you might want to subscribe there.

I will use that list as soon as it appears on GMane, since I find their NNTP interface a lot easier than managing a bunch of mailing list subscriptions! Maybe someone with more authority than me can add it?

  http://gmane.org/subscribe.php

Cheers,
Adam.


--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
