Rook-Ceph OSD Deployment Error

Hi Mailing Listers,

I am reaching out for assistance with a deployment issue we are facing with
Ceph on a 4-node RKE2 cluster. We are attempting to deploy Ceph via the
Rook Helm chart, but we are running into what appears to be a known bug
(https://tracker.ceph.com/issues/61597).

During the OSD preparation phase, the deployment consistently fails with an
IndexError: list index out of range. The logs indicate the problem occurs
while configuring new disks, specifically when /dev/dm-3 is used as a
metadata device. It is worth noting that /dev/dm-3 is an LVM logical volume
on top of an mdadm RAID, which may or may not be contributing to the issue.
(I swear, this setup worked before.)
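
For context, here is roughly how we inspect the stack beneath /dev/dm-3
(standard lsblk/dmsetup invocations; the exact output shape will of course
vary with the setup):

```bash
# show the stack below /dev/dm-3: LV -> VG -> mdadm RAID -> member disks
lsblk --inverse /dev/dm-3

# confirm which VG/LV the dm node actually maps to
dmsetup info -c /dev/dm-3
```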

Here is a snippet of the error from the deployment logs:
> 2023-11-23 23:11:30.196913 D | exec: IndexError: list index out of range
> 2023-11-23 23:11:30.236962 C | rookcmd: failed to configure devices:
> failed to initialize osd: failed ceph-volume report: exit status 1

Full log: https://paste.openstack.org/show/bileqRFKbolrBlTqszmC/
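
For what it's worth, my understanding is that the prepare job drives this
through ceph-volume's batch report step, so something like the following,
run inside the OSD prepare pod with our device names, should reproduce the
traceback directly (treat the exact invocation as an approximation of what
Rook runs):

```bash
# approximate the report step the Rook OSD prepare job performs
ceph-volume lvm batch --report /dev/sdb /dev/sdc --db-devices /dev/dm-3
```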

We have attempted different configurations, including specifying devices
explicitly and using the useAllDevices: true option with a specified
metadata device (/dev/dm-3 or the /dev/pv_md0/lv_md0 path), but the issue
persists across all of them.

The configurations we tested are as follows:

Explicit device specification:

```yaml
nodes:
  - name: "ceph01.maas"
    devices:
      - name: "/dev/dm-1"
      - name: "/dev/dm-2"
      - name: "sdb"
        config:
          metadataDevice: "/dev/dm-3"
      - name: "sdc"
        config:
          metadataDevice: "/dev/dm-3"
```
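
Since /dev/dm-N names are assigned by the kernel at activation time and are
not guaranteed to be stable across reboots, we are also considering pointing
metadataDevice at the /dev/mapper symlink instead; the mapper name below
assumes the VG/LV really are called pv_md0/lv_md0:

```yaml
nodes:
  - name: "ceph01.maas"
    devices:
      - name: "sdb"
        config:
          # /dev/mapper symlink for the LV, instead of the kernel dm-N name
          # (assumes VG "pv_md0" and LV "lv_md0")
          metadataDevice: "/dev/mapper/pv_md0-lv_md0"
```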

General device specification with metadata device:
```yaml
storage:
  useAllNodes: true
  useAllDevices: true
  config:
    metadataDevice: /dev/dm-3
```
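
One notation we have not yet tried is the bare vg/lv form, which ceph-volume
itself accepts for logical volumes; assuming Rook passes the value through
unchanged, that would look like:

```yaml
storage:
  useAllNodes: true
  useAllDevices: true
  config:
    # bare VG/LV notation understood by ceph-volume (VG "pv_md0", LV "lv_md0")
    metadataDevice: "pv_md0/lv_md0"
```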

I would greatly appreciate any insights or recommendations on how to
proceed or work around this issue.
Is there a halfway decent way to apply the fix, or a workaround we can use
to deploy Ceph successfully in our environment?

Kind regards,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


