Hi Sage,
hopefully two last questions:
How do I delete an existing MDS? The Ceph manual says: "Coming soon".
I tried:
root@bd-a:/# ceph-deploy mds destroy bd-0 bd-1 bd-2
[ceph_deploy.cli][INFO ] Invoked (1.3.4): /usr/bin/ceph-deploy mds destroy bd-0 bd-1 bd-2
[ceph_deploy.mds][ERROR ] subcommand destroy not implemented
root@bd-a:/#
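Would removing the daemon by hand be the right way instead? This is only a sketch of what I would try (the service name, rank and paths are my assumptions for a sysvinit-style setup on bd-0, so please correct me):

root@bd-0:/# service ceph stop mds.bd-0          # stop the mds daemon on its host
root@bd-0:/# ceph mds fail 0                     # mark rank 0 as failed so a standby can take over
root@bd-0:/# rm -rf /var/lib/ceph/mds/ceph-bd-0  # remove the daemon's local state (keyring etc.)
# ...and then remove the [mds.bd-0] section from ceph.conf by hand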
I assume the existing data is lost when the MDS is destroyed.
Is it possible to create a second MDS alongside the existing one and copy the
data from one to the other internally, or do I have to save the data to
an external repository?
Thank you,
Markus
On 09.01.2014 13:21, Sage Weil wrote:
I take it back: it is encoded in the MDSMap, so it can be changed for an
existing file system, except for the fact that the monitor doesn't yet
have a command to do it. I can put a patch together to do that. In the
meantime, you *can* set the limit on cluster creation by adding
mds max file size = 100000000000000
(or whatever) to your ceph.conf before creating the monitors.
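For example, as a minimal sketch (the value is arbitrary and the [global] section is my assumption about where to put it):

[global]
        # set before the monitors are created; 100000000000000 bytes is roughly 100 TB
        mds max file size = 100000000000000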
sage
On Thu, 9 Jan 2014, Markus Goldberg wrote:
Hi Sage,
that sounds good.
Thank you very much,
Markus
On 09.01.2014 13:10, Sage Weil wrote:
Hi Markus,
There is a compile-time limit of 1 TB per file in CephFS. We
can increase that pretty easily. I need to check whether it can
be safely switched to a configurable...
sage
Markus Goldberg <goldberg@xxxxxxxxxxxxxxxxx> wrote:
Hi,
I want to use my Ceph cluster as a backup repository holding virtual tapes.
When I copy files from my existing backup system to Ceph, all files are
cut off at 1 TB. The biggest files are around 5 TB for now.
So I'm afraid the current file-size limit is set to 1 TB.
How can I increase this limit?
Can this be done without losing existing data?
Thank you very much,
Markus
--
Best regards,
Markus Goldberg
--------------------------------------------------------------------------
Markus Goldberg Universität Hildesheim
Rechenzentrum
Tel +49 5121 88392822 Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldberg@xxxxxxxxxxxxxxxxx
--------------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com