Re: Wipe an Octopus install


I honestly do not get what the problem is. Just yum remove the RPMs, dd 
your OSD drives, and if there is anything left in /var/lib/ceph or 
/etc/ceph, rm -rf those. Then do a find / -iname "*ceph*" to check 
whether anything is still lying around.
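
Roughly like this (a sketch only -- the package glob and /dev/sdX are 
placeholders for your setup, and the dd destroys data, so double-check 
the device names first):

    # stop the daemons and remove the ceph packages
    systemctl stop ceph.target
    yum remove -y 'ceph*'

    # zap the beginning of each OSD data device (DESTROYS DATA)
    dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct

    # clear leftover state and config
    rm -rf /var/lib/ceph /etc/ceph

    # check that nothing ceph-related is left behind
    find / -iname "*ceph*" 2>/dev/null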



-----Original Message-----
To: Samuel Taylor Liston
Cc: ceph-users@xxxxxxx
Subject:  Re: Wipe an Octopus install

Hm, not really, that command only removed the ceph.conf on my admin node. 
So it's the same as already reported in [2].
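
For reference, the invocation from the man page [1] (run per host; 
<fsid> is a placeholder for the cluster's actual fsid):

    # remove all cluster daemons and state for this cluster from the host
    cephadm rm-cluster --fsid <fsid> --force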


Quoting Samuel Taylor Liston <sam.liston@xxxxxxxx>:

> Eugen,
> 	That sounds promising.  I missed that in the man page.  Thanks 
> for pointing it out.
>
> Sam Liston (sam.liston@xxxxxxxx)
> ==========================================
> Center for High Performance Computing - Univ. of Utah
> 155 S. 1452 E. Rm 405
> Salt Lake City, Utah 84112 (801)232-6932 
> ==========================================
>
>
>
>> On Oct 7, 2020, at 2:32 AM, Eugen Block <eblock@xxxxxx> wrote:
>>
>> Hi,
>>
>> I haven't had the opportunity to test it yet but have you tried:
>>
>> cephadm rm-cluster
>>
>> from the cephadm man page [1]. But it doesn't seem to work properly 
>> yet [2].
>>
>> Regards,
>> Eugen
>>
>>
>> [1] https://docs.ceph.com/en/latest/man/8/cephadm/
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192
>>
>>
>> Quoting Samuel Taylor Liston <sam.liston@xxxxxxxx>:
>>
>>> Wondering if anyone knows or has put together a way to wipe an 
>>> Octopus install?  I’ve looked for documentation on the process, but 
>>> if it exists, I haven’t found it yet.  I’m going through some test 
>>> installs - working through the ins and outs of cephadm and 
>>> containers and would love an easy way to tear things down and start 
>>> over.
>>> 	In previous releases managed through ceph-deploy there were three 
>>> very convenient commands that nuked the world.  I am looking for 
>>> something as complete for Octopus.
>>> Thanks,
>>>
>>> Sam Liston (sam.liston@xxxxxxxx)
>>> ==========================================
>>> Center for High Performance Computing - Univ. of Utah
>>> 155 S. 1452 E. Rm 405
>>> Salt Lake City, Utah 84112 (801)232-6932 
>>> ==========================================
>>>
>>>
>>>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



