Re: Backup of cephfs metadata

Well, if you've made changes to your data that impacted the metadata, and
you then restore a backup of the metadata pool but not of the data pool,
then what's actually there isn't what CephFS thinks is there. For example,
files created since the backup would have data objects the restored
metadata doesn't know about, and files deleted since would reappear with
stale or missing data. That would be confusing for all the same reasons it
is in a local filesystem. You could construct a use case and a backup
scheme that would produce a working system, but I don't think it would be
very useful for most applications.
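
For reference, the export/import under discussion is just the plain rados
pool dump. A minimal sketch (the pool name "metadata" and the file path
are examples only, and the exact syntax may vary between versions):

    # dump the metadata pool to a local file
    rados -p metadata export /backups/cephfs-metadata.dump
    # ...and, if disaster strikes, load it back
    rados -p metadata import /backups/cephfs-metadata.dump

But per the above, importing an old dump only gives you something coherent
if the data pool is rolled back to the same point in time.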
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wed, Apr 10, 2013 at 12:13 PM, Maik Kulbe
<info@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> I think going backwards in time is what a backup is for, isn't it? ;)
>
> My question really is just whether it's possible to back it up. It really
> stinks to completely rebuild the cluster and re-import all the data every
> time I make some small mistake in the environment, and the metadata
> backup would really help to save some time here.
>
>> If you were to do that you'd be going backwards in time with your
>> metadata, so — not really. CephFS is not generally production-ready
>> at
>> this time, but we welcome bug reports!
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Mon, Apr 8, 2013 at 12:52 PM, Maik Kulbe wrote:
>> > Hi,
>> >
>> > I'm currently testing Ceph with the POSIX fs at work as a fs cluster. So
>> > far I've managed to cripple the test environment three or four times by
>> > unintentionally crashing the MDS (1 active, 2 hot-standby). It seems to
>> > me the MDSs are pretty sensitive to all kinds of environmental changes,
>> > so to use this I really need a backup solution for the metadata.
>> >
>> > I've seen that RADOS supports an export command and that this also works
>> > with the MD pool. Now the question is: can I just export and, in case I
>> > need it, re-import the exported metadata, or are there problems with
>> > that? If so, what would be a better way to handle this?
>> >
>> > Thanks in advance,
>> >
>> > Maik Kulbe
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




