Re: replace OSD failed

Here is the tracker issue:
https://tracker.ceph.com/issues/47758
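
To answer my own Q2: the failing pair of numbers makes sense if the requested LV size is derived from the raw device size (the "relative data size: 1.0" line) rather than from the VG's free extent count. LVM reserves about 1 MiB of the PV for metadata and rounds the remainder down to whole extents, so dividing the raw device size by the extent size can land exactly one extent above what the VG holds. Below is a minimal sketch of that arithmetic; the 2.4 TB device size is an assumption for illustration, not the exact size of /dev/sdd.

```
#!/usr/bin/env python3
# Illustrative sketch only -- not ceph-volume's actual code. It shows how
# sizing an LV from the raw device size can request one more extent than
# the volume group actually provides.

EXTENT_SIZE = 4 * 1024 * 1024      # 4 MiB, LVM's default extent size
LVM_METADATA = 1024 * 1024         # ~1 MiB reserved by LVM at the PV start

device_bytes = 2_400_476_553_216   # assumed raw size of a 2.4 TB drive

# What the VG really offers: metadata is carved off first, and the
# remainder is rounded down to whole extents.
vg_free_extents = (device_bytes - LVM_METADATA) // EXTENT_SIZE
print("VG free extents:  ", vg_free_extents)    # 572317

# Sizing from the raw device (relative data size 1.0) skips that step,
# so the division lands one extent higher.
requested_extents = device_bytes // EXTENT_SIZE
print("Requested extents:", requested_extents)  # 572318
```

With those assumed numbers the sketch reproduces the exact pair from the log: 572317 extents free, 572318 requested, so lvcreate fails on an extent that does not exist. (On Q1: ceph-volume's own naming is a VG called ceph-<uuid> holding an LV called osd-block-<osd fsid>, so those two prefixes coming from the same tool is expected.)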


Thanks!
Tony
> -----Original Message-----
> From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> Sent: Thursday, February 4, 2021 8:46 PM
> To: ceph-users@xxxxxxx
> Subject:  Re: replace OSD failed
> 
> Here is the log from ceph-volume.
> ```
> [2021-02-05 04:03:17,000][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgcreate --force --yes ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64 /dev/sdd
> [2021-02-05 04:03:17,134][ceph_volume.process][INFO  ] stdout Physical volume "/dev/sdd" successfully created.
> [2021-02-05 04:03:17,166][ceph_volume.process][INFO  ] stdout Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" successfully created
> [2021-02-05 04:03:17,189][ceph_volume.process][INFO  ] Running command: /usr/sbin/vgs --noheadings --readonly --units=b --nosuffix --separator=";" -S vg_name=ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64 -o vg_name,pv_count,lv_count,vg_attr,vg_extent_count,vg_free_count,vg_extent_size
> [2021-02-05 04:03:17,229][ceph_volume.process][INFO  ] stdout ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64";"1";"0";"wz--n-";"572317";"572317";"4194304
> [2021-02-05 04:03:17,229][ceph_volume.api.lvm][DEBUG ] size was passed: 2.18 TB -> 572318
> [2021-02-05 04:03:17,235][ceph_volume.process][INFO  ] Running command: /usr/sbin/lvcreate --yes -l 572318 -n osd-block-b05c3c90-b7d5-4f13-8a58-f72761c1971b ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64
> [2021-02-05 04:03:17,244][ceph_volume.process][INFO  ] stderr Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" has insufficient free space (572317 extents): 572318 required.
> ```
> size was passed: 2.18 TB -> 572318
> How is this calculated?
> 
> 
> Thanks!
> Tony
> > -----Original Message-----
> > From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
> > Sent: Thursday, February 4, 2021 8:34 PM
> > To: ceph-users@xxxxxxx
> > Subject:  replace OSD failed
> >
> > Hi,
> >
> > With 15.2.8, I ran "ceph orch rm osd 12 --replace --force". PGs on
> > osd.12 were remapped, osd.12 was removed from "ceph osd tree", the
> > daemon was removed from "ceph orch ps", and the device shows as
> > "available" in "ceph orch device ls". Everything seems good at this point.
> >
> > Then I did a dry run of the service spec.
> > ```
> > # cat osd-spec.yaml
> > service_type: osd
> > service_id: osd-spec
> > placement:
> >   hosts:
> >   - ceph-osd-1
> > data_devices:
> >   rotational: 1
> > db_devices:
> >   rotational: 0
> >
> > # ceph orch apply osd -i osd-spec.yaml --dry-run
> > +---------+----------+------------+----------+----------+-----+
> > |SERVICE  |NAME      |HOST        |DATA      |DB        |WAL  |
> > +---------+----------+------------+----------+----------+-----+
> > |osd      |osd-spec  |ceph-osd-3  |/dev/sdd  |/dev/sdb  |-    |
> > +---------+----------+------------+----------+----------+-----+
> > ```
> > That looks as expected.
> >
> > Then "ceph orch apply osd -i osd-spec.yaml".
> > Here is the log of cephadm.
> > ```
> > /bin/docker:stderr --> relative data size: 1.0 /bin/docker:stderr -->
> > passed block_db devices: 1 physical, 0 LVM /bin/docker:stderr Running
> > command: /usr/bin/ceph-authtool --gen-print-key /bin/docker:stderr
> > Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-
> > osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f
> > json /bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph
> > --name client.bootstrap-osd --keyring
> > /var/lib/ceph/bootstrap-osd/ceph.keyring
> > -i - osd new b05c3c90-b7d5-4f13-8a58-f72761c1971b 12
> > /bin/docker:stderr Running command: /usr/sbin/vgcreate --force --yes
> > ceph-a3886f74-3de9-
> > 4e6e-a983-8330eda0bd64 /dev/sdd /bin/docker:stderr  stdout: Physical
> > volume "/dev/sdd" successfully created.
> > /bin/docker:stderr  stdout: Volume group
> > "ceph-a3886f74-3de9-4e6e-a983- 8330eda0bd64" successfully created
> /bin/docker:stderr Running command:
> > /usr/sbin/lvcreate --yes -l 572318 -n
> > osd-block-b05c3c90-b7d5-4f13-8a58-
> > f72761c1971b ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64
> > /bin/docker:stderr  stderr: Volume group
> > "ceph-a3886f74-3de9-4e6e-a983- 8330eda0bd64" has insufficient free
> > space (572317 extents): 572318 required.
> > /bin/docker:stderr --> Was unable to complete a new OSD, will rollback
> > changes ``` Q1, why VG name (ceph-<id>) is different from others
> > (ceph- block-<id>)?
> > Q2, where is that 572318 from? Since all HDDs are the same model, VG
> > "Total PE" of all HDDs is 572317.
> > Has anyone seen similar issues? Anything I am missing?
> >
> >
> > Thanks!
> > Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


