Hi Maged,
I haven't hit this problem myself, but I did read the bug report you provided.
I just want to know the best practice for removing a journal/db/wal partition when the same SSD also holds partitions for other OSDs, without affecting those OSDs.
I have used ceph-disk zap a lot before, but only to zap the whole disk (journal/db/wal collocated with data on the same disk), so I think I need to do it manually if I only want to "zap" a single partition.
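Something like this is what I have in mind (just a sketch; /dev/sdb2 stands for the journal/db/wal partition I want to clean, and the other partitions on the SSD are left alone):

    wipefs --all /dev/sdb2                                     # clear fs signatures on that one partition
    dd if=/dev/zero of=/dev/sdb2 bs=1M count=100 oflag=direct  # zero the start, per Wido's suggestion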
Thanks.
From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
Sent: 2018-02-01 22:15
Subject: Re: [ceph-users] How to clean data of osd with ssd journal(wal, db if it is bluestore) ?
To: "shadow_lin"<shadow_lin@xxxxxxx>
Cc: "ceph-users"<ceph-users@xxxxxxxxxxxxxx>
Hi Lin,
We do the extra dd after zapping the disk. ceph-disk has a zap function that uses wipefs to wipe filesystem traces, dd to zero the first 10MB of each partition, then sgdisk to remove the partition table; I believe ceph-volume does the same. After this zap, for each data or db block that will be created on the device, we use dd to zero 500MB. This may be a bit overboard, but other users have had similar issues:
http://tracker.ceph.com/issues/22354
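For reference, the rough sequence is the following (a sketch only; /dev/sdb and the partition numbers are examples, adjust to your layout):

    # roughly what ceph-disk zap does:
    wipefs --all /dev/sdb                                      # wipe filesystem signatures
    dd if=/dev/zero of=/dev/sdb1 bs=1M count=10 oflag=direct   # zero the first 10MB of each partition
    dd if=/dev/zero of=/dev/sdb2 bs=1M count=10 oflag=direct
    sgdisk --zap-all /dev/sdb                                  # remove the partition table
    # our extra step, run on each data/db partition after it is (re)created:
    dd if=/dev/zero of=/dev/sdb1 bs=1M count=500 oflag=direct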
Also, the initial zap wipes the disk and zeros the starts of the partitions as they used to be laid out; the new setup may have a db block of a different size, so the partition starts will have moved and stale data can remain at the new offsets.
I am not sure whether you are asking because you hit this issue, because you want to skip the extra dd step, or because you are facing issues cleaning disks; if it is the latter, we can send you a patch that does this.
Maged
On 2018-02-01 15:04, shadow_lin wrote:
Hi Maged,
The problem you met was because of the leftovers of an older cluster. Did you remove the db partition, or did you just reuse the old partition?
I thought Wido suggested removing the partition and then using dd to be safe. Is it safe if I don't remove the partition and just use dd to destroy the data on that partition?
What would ceph-disk or ceph-volume do with an existing journal/db/wal partition? Will it clean it, or will it just use it without any action?
From: Maged Mokhtar <mmokhtar@xxxxxxxxxxx>
Sent: 2018-02-01 14:22
Subject: Re: [ceph-users] How to clean data of osd with ssd journal(wal, db if it is bluestore) ?
To: "David Turner"<drakonstein@xxxxxxxxx>
Cc: "shadow_lin"<shadow_lin@xxxxxxx>, "ceph-users"<ceph-users@xxxxxxxxxxxxxx>
I would recommend, as Wido did, using the dd command. The block db device holds the metadata/allocations of the objects stored in the data block; not cleaning it is asking for problems, and besides, it takes hardly any time. In our testing, building a new cluster on top of an older installation, we saw many cases where OSDs would not start and reported errors such as the fsid of the cluster and/or OSD not matching the metadata in the BlueFS superblock. These errors do not appear if we use the dd command.
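You can actually see the leftover metadata before wiping (a sketch; /dev/sdb2 is just an example db partition, and this assumes ceph-bluestore-tool is installed on the node):

    # prints the stale BlueFS label, including the old fsid/osd uuid:
    ceph-bluestore-tool show-label --dev /dev/sdb2
    # zero the start of the partition so the stale superblock cannot be picked up:
    dd if=/dev/zero of=/dev/sdb2 bs=1M count=500 oflag=direct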
On 2018-02-01 06:06, David Turner wrote:
I know that for filestore journals that is fine. I think it is also safe for bluestore. Following Wido's recommendation and writing 100MB of zeros would be a good idea, but it is not strictly necessary.
Hi David,
Thanks for your reply.
I am wondering: what if I don't remove the journal (wal/db for bluestore) partition on the SSD, only zap the data disk, and then assign the journal (wal/db for bluestore) partition to a new OSD? What would happen?
Sent: 2018-01-31 17:24
Subject: Re: [ceph-users] How to clean data of osd with ssd journal(wal, db if it is bluestore) ?
I use gdisk to remove the partition and partprobe for the
OS to see the new partition table. You can script it with
sgdisk.
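For example, something like this (a sketch; the device and partition number are placeholders):

    sgdisk --delete=2 /dev/sdb   # remove partition 2 from the SSD's GPT
    partprobe /dev/sdb           # have the kernel re-read the partition table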
Hi list,
If I create an OSD with the journal (wal/db if it is bluestore) on the same HDD, I use ceph-disk zap to clean the disk when I want to remove the OSD and wipe the data on it.
But if I use an SSD partition as the journal (wal/db if it is bluestore), how should I clean the journal (wal/db if it is bluestore) of the OSD I want to remove, especially when other OSDs are using other partitions of the same SSD as their journals (wal/db if it is bluestore)?