CephFS MDS Setup

On Thu, May 29, 2014 at 1:48 AM, Scottix <scottix at gmail.com> wrote:
> // Then tried but got error
> ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
> Error EBUSY: pool 'metadata' is in use by CephFS

This is issue #8010.  Previously, there was no check for pools being
in use before they were deleted, which was kind of dangerous; in
Firefly we added the check, and this error is the consequence.
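
To see what the check is protecting, you can look at which pools the
mdsmap still references; something along these lines should show it
(the grep is just a convenience, and the field names are from memory):

    # list all pools
    ceph osd lspools
    # see which pools the mdsmap references (data_pools / metadata_pool)
    ceph mds dump | grep pool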

There will soon (https://github.com/ceph/ceph/pull/1852) be an "rm"
command that will let a user fully disable the filesystem, so that
they can remove the pools.  We will also no longer be creating the
filesystem pools by default.
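
Once that lands, the removal flow should look roughly like the
following (a sketch only: the final command name and flags may change
before the PR merges, so take the "fs rm" syntax as an assumption):

    # mark the MDS cluster down first
    ceph mds cluster_down
    # disable the filesystem (hypothetical post-merge syntax)
    ceph fs rm <fs-name> --yes-i-really-mean-it
    # the pools are now unreferenced and can be deleted
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete data data --yes-i-really-really-mean-it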

> // Tried and it looks like a bug of sort
> ceph mds cluster_down
> // Still get
> mdsmap e78: 0/0/0 up
> // Shouldn't it be down?

That's telling you that there are "zero MDSs up", rather than saying
that something is up.
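
For comparison, a healthy single-MDS filesystem reports something like:

    mdsmap e5: 1/1/1 up {0=a=up:active}

where the three numbers are the up/in/max MDS counts.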

> Do I need to start over and not add the mds to be clean?

You're already clean, apart from having a couple of unwanted pools
hanging around.  If you're worried about resource consumption from
those pools, you could create some very-small-pg_num pools and use
newfs to switch the FS to those.
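
For example (a sketch: the pool names are arbitrary, a pg_num of 8 is
just a deliberately small value, and newfs takes the numeric pool IDs
reported by "ceph osd lspools"):

    # create two tiny replacement pools
    ceph osd pool create tinyfs_metadata 8
    ceph osd pool create tinyfs_data 8
    # find their numeric IDs
    ceph osd lspools
    # point the filesystem at them: newfs <metadata-id> <data-id>
    ceph mds newfs <metadata-id> <data-id> --yes-i-really-mean-it
    # the original pools are no longer in use and can now be deleted
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete data data --yes-i-really-really-mean-it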

Cheers,
John

