You could check the owner of /var/lib/ceph on the host where the Grafana container runs. If it is owned by root, change the ownership to 167:167 (the UID/GID cephadm uses for the ceph user inside its containers) recursively, then run systemctl daemon-reload and restart the service. A sketch of the commands follows; see also the daemon-name check sketched after the quoted message. Good luck.

Ben
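A minimal sketch of those steps, using the fsid and daemon name from the logs below; double-check both against your cluster before running anything:

FSID=d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e

# Check who owns the cluster directory on the Grafana host
ls -ld /var/lib/ceph/$FSID

# If it is root-owned, hand it back to the ceph container user and group (167:167)
chown -R 167:167 /var/lib/ceph/$FSID

# Reload systemd and restart the Grafana daemon unit
systemctl daemon-reload
systemctl restart ceph-$FSID@grafana-fl31ca104ja0201.service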
Adiga, Anantha <anantha.adiga@xxxxxxxxx> wrote on Wed, 17 May 2023 at 03:57:

> Hi,
>
> Upgraded from Pacific 16.2.5 to 17.2.6 (Quincy) on May 8th.
>
> However, Grafana fails to start due to a bad folder path:
>
> root@fl31ca104ja0201:/tmp# journalctl -u ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201 -n 25
> -- Logs begin at Sun 2023-05-14 20:05:52 UTC, end at Tue 2023-05-16 19:07:51 UTC. --
> May 16 19:05:00 fl31ca104ja0201 systemd[1]: Stopped Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:00 fl31ca104ja0201 systemd[1]: Started Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:00 fl31ca104ja0201 bash[2575021]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run: No such file or directory
> May 16 19:05:00 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Main process exited, code=exited, status=127/n/a
> May 16 19:05:00 fl31ca104ja0201 bash[2575030]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.poststop: No such file or directory
> May 16 19:05:00 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Failed with result 'exit-code'.
> May 16 19:05:10 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Scheduled restart job, restart counter is at 3.
> May 16 19:05:10 fl31ca104ja0201 systemd[1]: Stopped Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:10 fl31ca104ja0201 systemd[1]: Started Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:10 fl31ca104ja0201 bash[2575273]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run: No such file or directory
> May 16 19:05:10 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Main process exited, code=exited, status=127/n/a
> May 16 19:05:10 fl31ca104ja0201 bash[2575282]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.poststop: No such file or directory
> May 16 19:05:10 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Failed with result 'exit-code'.
> May 16 19:05:20 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Scheduled restart job, restart counter is at 4.
> May 16 19:05:20 fl31ca104ja0201 systemd[1]: Stopped Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:20 fl31ca104ja0201 systemd[1]: Started Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:20 fl31ca104ja0201 bash[2575369]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run: No such file or directory
> May 16 19:05:20 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Main process exited, code=exited, status=127/n/a
> May 16 19:05:20 fl31ca104ja0201 bash[2575370]: /bin/bash: /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.poststop: No such file or directory
> May 16 19:05:20 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Failed with result 'exit-code'.
> May 16 19:05:30 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Scheduled restart job, restart counter is at 5.
> May 16 19:05:30 fl31ca104ja0201 systemd[1]: Stopped Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
> May 16 19:05:30 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Start request repeated too quickly.
> May 16 19:05:30 fl31ca104ja0201 systemd[1]: ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201.service: Failed with result 'exit-code'.
> May 16 19:05:30 fl31ca104ja0201 systemd[1]: Failed to start Ceph grafana-fl31ca104ja0201 for d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e.
>
> Check if the path exists:
>
> root@fl31ca104ja0201:/var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201# ls /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run
> ls: cannot access '/var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana-fl31ca104ja0201/unit.run': No such file or directory
>
> Check if the grafana.fl31ca104ja0201 directory exists:
>
> root@fl31ca104ja0201:/var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201# ls -l
> total 36
> drwxr-xr-x 4 167 167 4096 Apr 20 08:05 data
> drwxr-xr-x 3 167 167 4096 Apr 20 08:13 etc
> -rw------- 1 167 167   48 Apr 20 08:13 unit.created
> -rw------- 1 167 167  390 May  8 16:12 unit.stop
> -rw------- 1 167 167  390 May  8 16:12 unit.poststop
> -rw------- 1 167 167  365 May  8 16:12 unit.meta
> -rw------- 1 167 167   32 May  8 16:12 unit.image
> -rw------- 1 167 167   38 May  8 16:12 unit.configured
> -rw------- 1 167 167 1063 May  8 16:12 unit.run
>
> root@fl31ca104ja0201:/var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201# cat unit.run
> set -e
> # grafana.fl31ca104ja0201
> ! /usr/bin/docker rm -f ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e-grafana.fl31ca104ja0201 2> /dev/null
> ! /usr/bin/docker rm -f ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e-grafana-fl31ca104ja0201 2> /dev/null
> /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --init --name ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e-grafana-fl31ca104ja0201 --user 472 -e CONTAINER_IMAGE=docker.io/grafana/grafana:6.7.4 -e NODE_NAME=fl31ca104ja0201 -e CEPH_USE_RANDOM_NONCE=1 -v /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201/etc/grafana/grafana.ini:/etc/grafana/grafana.ini:Z -v /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201/etc/grafana/provisioning/datasources:/etc/grafana/provisioning/datasources:Z -v /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201/etc/grafana/certs:/etc/grafana/certs:Z -v /var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201/data/grafana.db:/var/lib/grafana/grafana.db:Z docker.io/grafana/grafana:6.7.4
>
> Thank you,
> Anantha
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
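P.S. One thing stands out in the listing above: the data directory is named grafana.fl31ca104ja0201 (dot) while the systemd unit looks for grafana-fl31ca104ja0201 (dash). A quick sketch to confirm the mismatch, reusing the fsid from the logs:

FSID=d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e

# Path the systemd unit tries to execute (dash-separated daemon name)
ls -ld /var/lib/ceph/$FSID/grafana-fl31ca104ja0201

# Path that actually holds the unit files (dot-separated daemon name)
ls -ld /var/lib/ceph/$FSID/grafana.fl31ca104ja0201

# Daemon names as cephadm records them on this host
cephadm ls | grep -i grafana

If only the dot-named directory exists, redeploying the daemon with "ceph orch daemon redeploy grafana.fl31ca104ja0201" may regenerate the unit files in the expected location; I have not verified this on 17.2.6, so treat it as a suggestion.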