Trouble getting cephadm to deploy iSCSI gateway

I am attempting to set up a 3-node Ceph cluster using Ubuntu Server 22.04 LTS and the cephadm deployment tool.

Three times now I have succeeded in setting up Ceph itself, getting the cluster healthy, and getting the OSDs all set up. The nodes (all monitors) are at 192.168.122.3, 192.168.122.4, and 192.168.122.5. All nodes have a second "backend" network on a separate interface, in the 10.0.0.3-10.0.0.5 range.

I then create an RBD pool called "rbd".
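
For completeness, creating such a pool from inside the cephadm shell is roughly the following (the application-enable step is my understanding of what an RBD pool needs):

    ceph osd pool create rbd
    ceph osd pool application enable rbd rbd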

Up to this point the cluster is healthy according to the dashboard.

I then try to set up iSCSI gateways on 192.168.122.3 and 192.168.122.5, following these directions: https://docs.ceph.com/en/pacific/cephadm/services/iscsi/

That means doing `cephadm shell`, getting the `iscsi.yaml` file into the Docker container (with `echo`, since there seems to be no text editor available), and then running the recommended deployment command, `ceph orch apply -i iscsi.yaml`. The YAML file contains:

    service_type: iscsi
    service_id: iscsi
    placement:
      hosts:
        - ceph1
        - ceph3
    spec:
      pool: rbd  # RADOS pool where ceph-iscsi config data is stored.
      trusted_ip_list: "192.168.122.3,192.168.122.5,10.0.0.3,10.0.0.5,192.168.122.4,10.0.0.4"
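
As an aside, echoing the file in may be avoidable: cephadm appears to support mounting a host file into the shell container. A sketch, assuming this cephadm version has the `--mount` option (the host path is illustrative):

    # on the host; the file appears at the given path inside the container
    cephadm shell --mount /root/iscsi.yaml:/mnt/iscsi.yaml
    # then, inside the shell:
    ceph orch apply -i /mnt/iscsi.yaml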

The dashboard status page then shows that there are 2 iSCSI gateways configured, but down:
https://i.stack.imgur.com/wr619.png

On the Services page, it shows the services as running:

https://i.stack.imgur.com/PwSik.png
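
From inside the cephadm shell, I believe the equivalent CLI cross-check is:

    # summary of all orchestrator-managed services
    ceph orch ls
    # per-daemon status for just the iSCSI daemons
    ceph orch ps --daemon_type iscsi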


On the iSCSI gateways page it shows both gateways down:

https://i.stack.imgur.com/Se2Mv.png


Looking at the containers on one of the nodes, it does look like cephadm started/deployed containers for this (apologies in advance for the horrible email formatting of a console table; the first two rows are the relevant ones):

    root@ceph1:~# docker ps
    CONTAINER ID   IMAGE                                     COMMAND                  CREATED             STATUS             PORTS     NAMES
    cefaf78b98ee   quay.ceph.io/ceph-ci/ceph                 "/usr/bin/rbd-target…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-iscsi-iscsi-ceph1-alnale
    b405b321bd6a   quay.ceph.io/ceph-ci/ceph                 "/usr/bin/tcmu-runner"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-iscsi-iscsi-ceph1-alnale-tcmu
    a05af7ac9609   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-prometheus-ceph1
    4699606a7878   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-alertmanager-ceph1
    103abafd0c19   quay.ceph.io/ceph-ci/ceph                 "/usr/bin/ceph-osd -…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-osd-2
    adcad13a1dcb   quay.ceph.io/ceph-ci/ceph                 "/usr/bin/ceph-osd -…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-osd-0
    9626b0794794   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-grafana-ceph1
    9a717edbf83f   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-node-exporter-ceph1
    c1c52d37baf1   quay.ceph.io/ceph-ci/ceph                 "/usr/bin/ceph-crash…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-crash-ceph1
    f6b2c9fef7e9   quay.ceph.io/ceph-ci/ceph:master          "/usr/bin/ceph-mgr -…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-mgr-ceph1-mpqrst
    8889082c55b1   quay.ceph.io/ceph-ci/ceph:master          "/usr/bin/ceph-mon -…"   About an hour ago   Up About an hour             ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-mon-ceph1
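
To dig further, I assume the next step is the daemon logs, with something like this on the host (daemon/container names taken from the `docker ps` output above; the `cephadm logs` usage is my best guess from the docs):

    # via cephadm, using the orchestrator's daemon name
    cephadm logs --name iscsi.iscsi.ceph1.alnale
    # or straight from the container
    docker logs ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-iscsi-iscsi-ceph1-alnale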


I have tried a fresh redeployment with the API username and password in the iscsi.yaml file set to the same as the main Ceph dashboard login, but that just gave 500 errors when trying to go to the iSCSI gateway page.
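
For reference, my understanding from the cephadm iSCSI docs is that those credentials go into the spec section like this (admin/admin here is a placeholder, not my real login):

    spec:
      pool: rbd
      trusted_ip_list: "192.168.122.3,192.168.122.5,10.0.0.3,10.0.0.5,192.168.122.4,10.0.0.4"
      api_user: admin
      api_password: admin
      api_secure: false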

I have also tried setting the dashboard not to verify SSL certs as I don't have any signed ones:

    root@ceph1:/# ceph dashboard set-iscsi-api-ssl-verification false
    Option ISCSI_API_SSL_VERIFICATION updated

I have also tried looking at the URLs it is using and opening them in a browser. I get 404 errors, but that may be normal if it's just an API base URL:

    root@ceph1:/# ceph dashboard iscsi-gateway-list
    {"gateways": {"ceph1": {"service_url":"http://admin:admin@192.168.122.3:5000"}, "ceph3": {"service_url":"http://admin:admin@192.168.122.5:5000"}}}

(The above had the admin username and password for the main dashboard encoded into the URL on a previous deployment, where I set the api_user and api_password in the iscsi.yaml file.)
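
In case the registered URL or credentials are simply stale, my reading of the dashboard docs is that a gateway registration can be dropped and re-added like this (admin/admin again being the placeholder credentials from above):

    # inside cephadm shell: remove the stale registration for ceph1
    ceph dashboard iscsi-gateway-rm ceph1
    # re-add it, supplying the service URL via a file
    echo "http://admin:admin@192.168.122.3:5000" > /tmp/gw1
    ceph dashboard iscsi-gateway-add -i /tmp/gw1 ceph1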

Doing `cephadm shell` and then trying `gwcli` gives:

    REST API failure, code : 500
    Unable to access the configuration object
    Unable to contact the local API endpoint (https://localhost:5000/api)
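
Notably, gwcli is trying https://localhost:5000 while the dashboard has the gateways registered as http://, so one thing I plan to check is what the port is actually serving; something along these lines from a gateway host (standard tools, nothing Ceph-specific):

    # is rbd-target-api actually bound to port 5000?
    ss -tlnp | grep 5000
    # does it answer plain HTTP or TLS? (-k skips certificate verification)
    curl -v  -u admin:admin http://192.168.122.3:5000/api/targets
    curl -vk -u admin:admin https://192.168.122.3:5000/api/targets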

Trying the API with wget from one of the nodes (so we are coming from a trusted IP) gives:

    wget http://192.168.122.3:5000/api/targets --user "admin" --password "admin"
    --2022-05-13 19:32:54--  http://192.168.122.3:5000/api/targets
    Connecting to 192.168.122.3:5000... connected.
    HTTP request sent, awaiting response... 401 UNAUTHORIZED
    Unknown authentication scheme.
    Username/Password Authentication Failed.
The same occurs whether I use an empty username and password, the main Ceph username and password, or the admin/admin pair above.
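
One thing I have not been able to confirm is which credentials the running rbd-target-api was actually started with. I assume cephadm renders them into an iscsi-gateway.cfg inside the gateway container, so something like this might reveal them (the config path is a guess on my part):

    docker exec ceph-9f724dc4-d2de-11ec-b7be-8f11f39bf88a-iscsi-iscsi-ceph1-alnale \
        cat /etc/ceph/iscsi-gateway.cfg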

How can I get these iSCSI gateways to come up? How do I troubleshoot this from here? What can I check to get more information on what is going off the rails? Or is it just the dashboard that is having trouble, and this would all work from the CLI? If so, what commands can I use to see status and set up iSCSI targets, and where do I run them given that everything is in Docker containers?



