Re: How to remove deactivated cephFS

Hi Tom,

thanks for the detailed steps.

It seems our problem simply vanished: a couple of days after my email I noticed that the web interface suddenly listed only one cephFS. The command "ceph fs status" also no longer returns an error but shows the correct output.
I guess Ceph is indeed a self-healing storage solution! :-)

Regards,
Eugen


Zitat von Thomas Bennett <thomas@xxxxxxxxx>:

Hi Eugen,

From my experience, to truly delete and recreate the Ceph FS *cephfs*
file system, I've done the following:

1. Remove the file system:
ceph fs rm cephfs --yes-i-really-mean-it
ceph fs rm_data_pool cephfs cephfs_data

2. Remove the associated pools:
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

3. Create a new default ceph file system:
ceph osd pool create cephfs_data <pg_num> [<pgp_num>] [<crush_rule>]
ceph osd pool create cephfs_metadata <pg_num> [<pgp_num>] [<crush_rule>]
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs set-default cephfs
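
Two caveats that may apply before steps 1 and 2, depending on the state of your cluster (I'm assuming here that MDS daemons are still running and that pool deletion is not yet allowed): "ceph fs rm" refuses to run while MDS daemons are active, and the monitors refuse to delete pools unless it is explicitly permitted. Something like the following should cover both; rank 0 below is just an example, check "ceph mds stat" first:

ceph fs set cephfs cluster_down true
ceph mds fail 0
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'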
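
And as a concrete example of step 3 (the pg counts 64 and 16 are only placeholders, size them for your own cluster), followed by a quick check that the new file system is the only one and is healthy:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs set-default cephfs
ceph fs ls
ceph fs status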

Not sure if this helps, as you may need to repeat the whole process from
the start.

Regards,
Tom

On Mon, Jan 8, 2018 at 2:19 PM, Eugen Block <eblock@xxxxxx> wrote:

Hi list,

all this is on Ceph 12.2.2.

An existing cephFS (named "cephfs") was backed up as a tar ball, then
"removed" ("ceph fs rm cephfs --yes-i-really-mean-it"), a new one created
("ceph fs new cephfs cephfs-metadata cephfs-data") and the content restored
from the tar ball. According to the output of "ceph fs rm", the old cephFS
has only been deactivated, not deleted. Looking at the Ceph manager's web
interface, it now lists two entries "cephfs": one with id 0 (the "old" FS)
and one with id 1 (the currently active FS).

When we try to run "ceph fs status", we get an error with a traceback:

---cut here---
ceph3:~ # ceph fs status
Error EINVAL: Traceback (most recent call last):
  File "/usr/lib64/ceph/mgr/status/module.py", line 301, in handle_command
    return self.handle_fs_status(cmd)
  File "/usr/lib64/ceph/mgr/status/module.py", line 219, in
handle_fs_status
    stats = pool_stats[pool_id]
KeyError: (29L,)
---cut here---
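
If I read the traceback correctly, the stale file system entry still references a pool id (29) for which the manager has no statistics, presumably because that pool no longer exists. The currently existing pool ids can be listed with

ceph osd lspools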

while this works:

---cut here---
ceph3:~ # ceph fs ls
name: cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
---cut here---

We see the new id 1 when we run

---cut here---
ceph3:~ #  ceph fs get cephfs
Filesystem 'cephfs' (1)
fs_name cephfs
[...]
data_pools      [35]
metadata_pool   36
inline_data     disabled
balancer
standby_count_wanted    1
[...]
---cut here---
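
The pool ids 35 and 36 from the output above can be matched to their names with something like

ceph osd pool ls detail | grep -E '^pool (35|36) '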

The new FS seems to work properly and can be mounted from the clients,
just like before removing and rebuilding it. I'm not sure which other
commands would fail with this traceback, for now "ceph fs status" is the
only one.

So it seems that having one deactivated cephFS has an impact on some of
the functions/commands. Is there any way to remove it properly? Most of the
commands work with the name, not the id of the FS, so it's difficult to
access the data from the old FS. Does anyone have any insights on how to
clean this up?

Regards,
Eugen

--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock@xxxxxx

        Vorsitzende des Aufsichtsrates: Angelika Mozdzen
          Sitz und Registergericht: Hamburg, HRB 90934
                  Vorstand: Jens-U. Mozdzen
                   USt-IdNr. DE 814 013 983





--
Thomas Bennett

SKA South Africa
Science Processing Team

Office: +27 21 5067341
Mobile: +27 79 5237105



--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : eblock@xxxxxx

        Vorsitzende des Aufsichtsrates: Angelika Mozdzen
          Sitz und Registergericht: Hamburg, HRB 90934
                  Vorstand: Jens-U. Mozdzen
                   USt-IdNr. DE 814 013 983

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


