Re: How to clean data of osd with ssd journal(wal, db if it is bluestore) ?


 



Hi Lin,

We do the extra dd after zapping the disk. ceph-disk has a zap function that uses wipefs to wipe filesystem traces, dd to zero 10MB at the start of each partition, and then sgdisk to remove the partition table; I believe ceph-volume does the same. After this zap, for each data or db block that will be created on the device we use dd to zero 500MB. This may be a bit overboard, but other users have had similar issues:

http://tracker.ceph.com/issues/22354
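
To make that concrete, here is a rough sketch of the commands involved; the device and partition names are just placeholders, not anything tied to our tooling:

    # approximately what ceph-disk/ceph-volume zap does:
    wipefs --all /dev/sdX                                      # remove filesystem signatures
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=10 oflag=direct   # zero the start of each partition
    sgdisk --zap-all /dev/sdX                                  # drop the GPT/MBR partition tables

    # our extra step: zero 500MB at the start of each data/db/wal
    # partition that will be (re)created on the device
    dd if=/dev/zero of=/dev/sdX1 bs=1M count=500 oflag=direct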

Also, the initial zap wipes the disk and zeros the start of the partitions as they used to be laid out; it is possible the new disk will have a db block with a different size, so the start of the new partitions will have moved.

I am not sure if your question is because you hit this issue, because you want to skip the extra dd step, or because you are facing issues cleaning disks. If it is the latter, we can send you a patch that does this.

Maged

On 2018-02-01 15:04, shadow_lin wrote:

Hi Maged,
The problem you met was because of leftovers from an older cluster. Did you remove the db partition, or did you just reuse the old partition?
I thought Wido suggested removing the partition and then using dd to be safe. Is it safe if I don't remove the partition and just use dd to destroy the data on that partition?
What will ceph-disk or ceph-volume do with an existing journal/db/wal partition? Will it clean it, or will it just use it without any action?
 
2018-02-01
lin.yunfan

From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
Sent: 2018-02-01 14:22
Subject: Re: How to clean data of osd with ssd journal(wal, db if it is bluestore) ?
To: "David Turner"<drakonstein@xxxxxxxxx>
Cc: "shadow_lin"<shadow_lin@xxxxxxx>,"ceph-users"<ceph-users@xxxxxxxxxxxxxx>
 

I would recommend, as Wido did, using the dd command. The block db device holds the metadata/allocations for the objects stored in the data block; not cleaning it is asking for problems, and besides, it does not take any time. In our testing, building a new cluster on top of an older installation, we did see many cases where OSDs would not start and reported errors such as the fsid of the cluster and/or OSD not matching the metadata in the BlueFS superblock. These errors do not appear if we use the dd command.
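
If you want to check whether a db partition still carries an old BlueFS superblock before reusing it, something like the following should print the stored fsid (the device path here is only an example):

    ceph-bluestore-tool show-label --dev /dev/nvme0n1p2

Zeroing the start of that partition with dd, as above, clears the stale label.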

On 2018-02-01 06:06, David Turner wrote:

I know that for filestore journals that is fine. I think it is also safe for bluestore. Following Wido's recommendation of writing 100MB of zeros would be a good idea, but it is not strictly necessary.


On Wed, Jan 31, 2018, 10:10 PM shadow_lin <shadow_lin@xxxxxxx> wrote:
Hi David,
Thanks for your reply.
I am wondering: what if I don't remove the journal (wal/db for bluestore) partition on the ssd and only zap the data disk, then assign that journal (wal/db for bluestore) partition to a new osd? What would happen?
 
2018-02-01
lin.yunfan

From: David Turner <drakonstein@xxxxxxxxx>
Sent: 2018-01-31 17:24
Subject: Re: How to clean data of osd with ssd journal(wal, db if it is bluestore) ?
To: "shadow_lin"<shadow_lin@xxxxxxx>
Cc: "ceph-users"<ceph-users@xxxxxxxxxxxxxx>
 

I use gdisk to remove the partition and partprobe so the OS sees the new partition table. You can script it with sgdisk.
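
A minimal sketch of scripting it with sgdisk; the device and partition number are only placeholders:

    # delete partition 2 (e.g. the old journal/db partition) on /dev/sdX
    sgdisk --delete=2 /dev/sdX
    # ask the kernel to re-read the partition table
    partprobe /dev/sdX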


On Wed, Jan 31, 2018, 4:10 AM shadow_lin <shadow_lin@xxxxxxx> wrote:
Hi list,
If I create an osd with the journal (wal/db if it is bluestore) on the same hdd, I use ceph-disk zap to clean the disk when I want to remove the osd and wipe its data.
But if I use an ssd partition as the journal (wal/db if it is bluestore), how should I clean the journal (wal/db if it is bluestore) of the osd I want to remove, especially when other osds are using other partitions of the same ssd as their journals (wal/db if it is bluestore)?
 
 
2018-01-31

shadow_lin



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
