Re: [cephadm] not detecting new disk

Use wipefs -a /dev/<device>

That will take care of it.
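
For example, assuming the new disk really is /dev/sda on os-ctrl-1 (double-check the device name first, these commands are destructive), something like this clears both the GPT structures and any leftover signatures:

$ sgdisk --zap-all /dev/sda   # remove GPT/MBR data structures
$ wipefs -a /dev/sda          # erase remaining filesystem/LVM/RAID signatures
$ partprobe /dev/sda          # have the kernel re-read the partition table

After that the device should show up as available to the orchestrator.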

On 03/09/2022 16.18, Satish Patel wrote:
I did use sgdisk to zap the disk and wipe everything, but it is still not being detected.

Is there any other good way to wipe it out?

Sent from my iPhone

On Sep 3, 2022, at 2:53 AM, Eugen Block <eblock@xxxxxx> wrote:

It is detecting the disk, but it contains a partition table so it can’t use it. Wipe the disk properly first.
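
If the host is already managed by the orchestrator, cephadm can also do the wipe for you; a minimal example using the hostname and device from this thread (adjust to your setup):

$ ceph orch device zap os-ctrl-1 /dev/sda --force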

Quoting Satish Patel <satish.txt@xxxxxxxxx>:

Folks,

I have created a new lab using cephadm and installed a new 1TB spinning disk, which I am trying to add to the cluster, but somehow Ceph is not detecting it.

$ parted /dev/sda print
Model: ATA WDC WD10EZEX-00B (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

I tried the following, but no luck:

$ cephadm shell -- ceph orch daemon add osd os-ctrl-1:/dev/sda
Inferring fsid 351f8a26-2b31-11ed-b555-494149d85a01
Using recent ceph image
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1446, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 171, in
handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 414, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 107, in
<lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
# noqa: E731
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 96, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 803, in
_daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 228, in
raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config
/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/mon.os-ctrl-1/config
Non-zero exit code 2 from /usr/bin/docker run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume
--privileged --group-add=disk --init -e CONTAINER_IMAGE=
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
-e NODE_NAME=os-ctrl-1 -e CEPH_USE_RANDOM_NONCE=1 -e
CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e
CEPH_VOLUME_DEBUG=1 -v
/var/run/ceph/351f8a26-2b31-11ed-b555-494149d85a01:/var/run/ceph:z -v
/var/log/ceph/351f8a26-2b31-11ed-b555-494149d85a01:/var/log/ceph:z -v
/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/crash:/var/lib/ceph/crash:z
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
/run/lock/lvm:/run/lock/lvm -v /:/rootfs -v
/tmp/ceph-tmpznn3t_7i:/etc/ceph/ceph.conf:z -v
/tmp/ceph-tmpun8t5_ej:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
lvm batch --no-auto /dev/sda --yes --no-systemd
/usr/bin/docker: stderr usage: ceph-volume lvm batch [-h] [--db-devices
[DB_DEVICES [DB_DEVICES ...]]]
/usr/bin/docker: stderr                              [--wal-devices
[WAL_DEVICES [WAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--journal-devices
[JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
/usr/bin/docker: stderr                              [--auto] [--no-auto]
[--bluestore] [--filestore]
/usr/bin/docker: stderr                              [--report] [--yes]
/usr/bin/docker: stderr                              [--format
{json,json-pretty,pretty}] [--dmcrypt]
/usr/bin/docker: stderr                              [--crush-device-class
CRUSH_DEVICE_CLASS]
/usr/bin/docker: stderr                              [--no-systemd]
/usr/bin/docker: stderr                              [--osds-per-device
OSDS_PER_DEVICE]
/usr/bin/docker: stderr                              [--data-slots
DATA_SLOTS]
/usr/bin/docker: stderr                              [--block-db-size
BLOCK_DB_SIZE]
/usr/bin/docker: stderr                              [--block-db-slots
BLOCK_DB_SLOTS]
/usr/bin/docker: stderr                              [--block-wal-size
BLOCK_WAL_SIZE]
/usr/bin/docker: stderr                              [--block-wal-slots
BLOCK_WAL_SLOTS]
/usr/bin/docker: stderr                              [--journal-size
JOURNAL_SIZE]
/usr/bin/docker: stderr                              [--journal-slots
JOURNAL_SLOTS] [--prepare]
/usr/bin/docker: stderr                              [--osd-ids [OSD_IDS
[OSD_IDS ...]]]
/usr/bin/docker: stderr                              [DEVICES [DEVICES ...]]
/usr/bin/docker: stderr ceph-volume lvm batch: error: GPT headers found,
they must be removed on: /dev/sda
Traceback (most recent call last):
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 8971, in <module>
    main()
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 8959, in main
    r = ctx.func(ctx)
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 1902, in _infer_config
    return func(ctx)
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 1833, in _infer_fsid
    return func(ctx)
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 1930, in _infer_image
    return func(ctx)
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 1820, in _validate_fsid
    return func(ctx)
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 5172, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd())
  File
"/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
line 1622, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host
--stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume
--privileged --group-add=disk --init -e CONTAINER_IMAGE=
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
-e NODE_NAME=os-ctrl-1 -e CEPH_USE_RANDOM_NONCE=1 -e
CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e
CEPH_VOLUME_DEBUG=1 -v
/var/run/ceph/351f8a26-2b31-11ed-b555-494149d85a01:/var/run/ceph:z -v
/var/log/ceph/351f8a26-2b31-11ed-b555-494149d85a01:/var/log/ceph:z -v
/var/lib/ceph/351f8a26-2b31-11ed-b555-494149d85a01/crash:/var/lib/ceph/crash:z
-v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
/run/lock/lvm:/run/lock/lvm -v /:/rootfs -v
/tmp/ceph-tmpznn3t_7i:/etc/ceph/ceph.conf:z -v
/tmp/ceph-tmpun8t5_ej:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
lvm batch --no-auto /dev/sda --yes --no-systemd

The disk is also not detected using the following command:

$ cephadm shell -- ceph orch apply osd --all-available-devices --dry-run
Inferring fsid 351f8a26-2b31-11ed-b555-494149d85a01
Using recent ceph image
quay.io/ceph/ceph@sha256:c5fd9d806c54e5cc9db8efd50363e1edf7af62f101b264dccacb9d6091dcf7aa
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any of these conditions change, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
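
For completeness, it can also help to check what the orchestrator itself reports for the host; a device with leftover GPT headers will typically still be listed, but flagged as not available (same cephadm shell as above):

$ cephadm shell -- ceph orch device ls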



