I would second Wido's recommendation to use the dd command. The block.db device holds the metadata/allocation information for the objects stored on the data block device, so not cleaning it is asking for problems, and it takes hardly any time. In our testing, when building a new cluster on top of an older installation, we saw many cases where OSDs would not start and reported errors such as the fsid of the cluster and/or OSD not matching the metadata in the BlueFS superblock. These errors do not appear if we use the dd command.
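A minimal sketch of the dd wipe described above (the partition path is only an example; point it at the db/wal partition of the OSD you are removing, and nothing else):

```shell
# Zero the first 100 MB of a db/wal partition so a new OSD cannot find the
# old BlueFS superblock (and its stale cluster/OSD fsid) on it.
wipe_bluefs_header() {
    part="$1"   # e.g. /dev/nvme0n1p2 (example path; use your OSD's db/wal partition)
    dd if=/dev/zero of="$part" bs=1M count=100 conv=fsync
}

# wipe_bluefs_header /dev/nvme0n1p2
```

Double-check the partition path before running this; dd will happily zero whatever you point it at, including a journal partition another OSD is still using.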
On 2018-02-01 06:06, David Turner wrote:
I know that for filestore journals that is fine, and I think it is also safe for bluestore. Following Wido's recommendation of writing 100MB of zeros over it would be a good idea, but is not strictly necessary.
Hi David,
Thanks for your reply.
I am wondering: what if I don't remove the journal (wal/db for bluestore) partition on the SSD and only zap the data disk, then assign that journal (wal/db for bluestore) partition to a new OSD? What would happen?
Sent: 2018-01-31 17:24
Subject: Re: How to clean data of osd with ssd journal (wal, db if it is bluestore)?
I use gdisk to remove the partition and partprobe for the OS to see the new partition table. You can script it with sgdisk.
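A sketch of scripting this with sgdisk, assuming the journal SSD is /dev/sdb and the OSD being removed used partition 2 (both example values):

```shell
# Remove one journal partition from a shared SSD without touching the
# partitions other OSDs are still using.
remove_journal_partition() {
    disk="$1"      # e.g. /dev/sdb  (example: SSD holding several journal partitions)
    partnum="$2"   # e.g. 2         (example: partition of the OSD being removed)
    sgdisk --delete="$partnum" "$disk"
    # Ask the kernel to re-read the partition table; harmless if it is
    # already up to date (or if "$disk" is a plain image file).
    partprobe "$disk" || true
}

# remove_journal_partition /dev/sdb 2
```

sgdisk only deletes the partition entry; combine it with the dd wipe if you want the old BlueFS/journal data on that region gone as well.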
Hi list,
If I create an OSD with its journal (wal/db if it is bluestore) on the same HDD, I use ceph-disk zap to clean the disk when I want to remove the OSD and wipe its data.
But if I use an SSD partition as the journal (wal/db if it is bluestore), how should I clean the journal of the OSD I want to remove? Especially when other OSDs are using other partitions of the same SSD as their journals (wal/db if it is bluestore).
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com