Re: CephFS removal.

Oh, hah, your initial email had a very delayed delivery... it probably
got stuck in the moderation queue. :)

On Thu, Feb 12, 2015 at 8:26 AM,  <warren.jeffs@xxxxxxxxxx> wrote:
> I am running 0.87. In the end I just wiped the cluster and started again - it was quicker.
>
> Warren
>
> -----Original Message-----
> From: Gregory Farnum [mailto:greg@xxxxxxxxxxx]
> Sent: 12 February 2015 16:25
> To: Jeffs, Warren (STFC,RAL,ISIS)
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  CephFS removal.
>
> What version of Ceph are you running? The procedure has varied a bit
> between versions.
>
> But I think you want to just turn off the MDS and run the "fail"
> command. "deactivate" is actually the command for removing a logical
> MDS from the cluster, and you can't do that for a lone MDS because
> there's nobody to pass its data off to. I'll make a ticket to clarify
> this. Once the MDS is failed, you should be able to delete the
> filesystem.
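>
> Roughly, assuming your single MDS holds rank 0 and the filesystem is
> named "data" as below (just a sketch, untested against 0.87):
>
>     # stop the ceph-mds daemon on its host, then mark rank 0 as
>     # failed so the cluster stops expecting it:
>     ceph mds fail 0
>     # with no active MDS, the filesystem can be removed:
>     ceph fs delete data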
> -Greg
>
> On Mon, Feb 2, 2015 at 1:40 AM,  <warren.jeffs@xxxxxxxxxx> wrote:
>> Hi All,
>>
>>
>>
>> Having a few problems removing cephfs file systems.
>>
>>
>>
>> I want to remove my current pools (they were used for test data),
>> wiping all current data, and start a fresh file system on my current
>> cluster.
>>
>>
>>
>> I have looked over the documentation but I can’t find anything on
>> this. I have an object store pool, which I don’t want to remove, but
>> I’d like to remove the cephfs file system pools and remake them.
>>
>>
>>
>>
>>
>> My cephfs is called ‘data’.
>>
>>
>>
>> Running ceph fs delete data returns: Error EINVAL: all MDS daemons
>> must be inactive before removing filesystem
>>
>>
>>
>> To make an MDS inactive I believe the command is: ceph mds deactivate
>> 0
>>
>>
>>
>> Which returns: telling mds.0 135.248.53.134:6809/16692 to deactivate
>>
>>
>>
>> Checking the status of the MDS with ceph mds stat returns: e105:
>> 1/1/0 up {0=node2=up:stopping}
>>
>>
>>
>> It has been sitting in this state for the whole weekend with no
>> change. I don’t have any clients connected currently.
>>
>>
>>
>> When I try to remove the pools manually, it’s not allowed, as there
>> is a cephfs file system on them.
>>
>>
>>
>> I’m happy that all of the failsafes to stop someone removing a pool
>> are working correctly.
>>
>>
>>
>> If this is currently undoable, is there a way to quickly wipe a
>> cephfs filesystem? Using rm from a kernel client is really slow.
>>
>>
>>
>> Many thanks
>>
>>
>>
>> Warren Jeffs
>>
>>
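
For the record, once the filesystem is gone the pools can be dropped
and recreated. A rough sequence, assuming the cephfs pools are named
"data" and "metadata" and using a placeholder pg count of 128:

    # remove the old cephfs pools (this destroys their contents):
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
    ceph osd pool delete data data --yes-i-really-really-mean-it
    # recreate them:
    ceph osd pool create metadata 128
    ceph osd pool create data 128
    # create a fresh filesystem ("data") on the new pools:
    ceph fs new data metadata data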
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




