On 31/01/2024 09:52, Eugen Block wrote:
I deployed the NFS service with Ceph version 17.2.7 and then upgraded to 18.2.1
successfully; the ingress service is still present. Can you tell if it
was there while you were on Quincy? To fix it I would just apply the
nfs.yaml again and see if the ingress service is deployed. To know what
happened during (or after) the upgrade you'd probably have to look
through the mgr logs...
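Something along these lines should do it (assuming the spec file on your end is still named nfs.yaml), and turning up the cephadm log level lets you read back the orchestrator's recent activity from the cluster log:
"
# re-apply the existing spec
ceph orch apply -i nfs.yaml
# check whether an ingress service shows up again
ceph orch ls ingress

# raise cephadm logging and read back recent orchestrator activity
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph log last 200 debug cephadm
"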
We just discussed that internally. We don't know for sure if the ingress
service was in the service spec; we just know that we could use the
virtual IP for mounting.
We did quite a bit of mucking around getting it to work, so I guess it is
possible the backend host just happened to have a leftover ingress
container running and it got lost in the upgrade, or something like that.
I got it running for now by crafting an ingress.yml manually. I'll go dig
through the logs. Thanks!
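For reference, the spec I crafted looks roughly like this (field names as per the cephadm ingress documentation [1]; keepalive_only is, as far as I can tell, what --ingress-mode keepalive-only corresponds to), applied with "ceph orch apply -i ingress.yml":
"
service_type: ingress
service_id: nfs.jumbo
placement:
  count: 1
  hosts:
  - ceph-flash1
  - ceph-flash2
  - ceph-flash3
spec:
  backend_service: nfs.jumbo
  # keepalived only, no haproxy in front of ganesha
  keepalive_only: true
  virtual_ip: 172.21.15.74/22
"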
Mvh.
Torkil
Quoting Torkil Svensgaard <torkil@xxxxxxxx>:
On 31/01/2024 09:36, Eugen Block wrote:
Hi,
if I understand this correctly, with the "keepalive-only" option only
one ganesha instance is supposed to be deployed:
If a user additionally supplies --ingress-mode keepalive-only a
partial ingress service will be deployed that still provides a
virtual IP, but has nfs directly binding to that virtual IP and
leaves out any sort of load balancing or traffic redirection. This
setup will restrict users to deploying only 1 nfs daemon as multiple
cannot bind to the same port on the virtual IP.
Maybe that's why it disappeared, as you have 3 hosts in the placement
parameter? Is the ingress service still present in 'ceph orch ls'?
As I read the documentation [1], the "count: 1" handles that, so what I
have is a placement pool from which only one host is selected for deployment?
The absence of the ingress service puzzles me, as it worked just
fine prior to the upgrade, and the upgrade shouldn't have touched the
service spec in any way?
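For what it's worth, dumping what the orchestrator currently holds should at least show whether an ingress spec survived the upgrade at all (I'd expect it to be named ingress.nfs.jumbo):
"
# export all service specs the orchestrator knows about
ceph orch ls --export
# list just the ingress services, if any
ceph orch ls ingress
"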
Mvh.
Torkil
[1]
https://docs.ceph.com/en/latest/cephadm/services/nfs/#nfs-with-virtual-ip-but-no-haproxy
Regards,
Eugen
Quoting Torkil Svensgaard <torkil@xxxxxxxx>:
Hi
Last week we created an NFS service like this:
"
ceph nfs cluster create jumbo "ceph-flash1,ceph-flash2,ceph-flash3"
--ingress --virtual_ip 172.21.15.74/22 --ingress-mode keepalive-only
"
Worked like a charm. Yesterday we upgraded from 17.2.7 to 18.2.1
and the NFS virtual IP seems to have gone missing in the process:
"
# ceph nfs cluster info jumbo
{
    "jumbo": {
        "backend": [
            {
                "hostname": "ceph-flash1",
                "ip": "172.21.15.148",
                "port": 2049
            }
        ],
        "virtual_ip": null
    }
}
"
Service spec:
"
service_type: nfs
service_id: jumbo
service_name: nfs.jumbo
placement:
  count: 1
  hosts:
  - ceph-flash1
  - ceph-flash2
  - ceph-flash3
spec:
  port: 2049
  virtual_ip: 172.21.15.74
"
I've tried restarting the nfs.jumbo service, which didn't help.
Suggestions?
Mvh.
Torkil
--
Torkil Svensgaard
Sysadmin
MR-Forskningssektionen, afs. 714
DRCMR, Danish Research Centre for Magnetic Resonance
Hvidovre Hospital
Kettegård Allé 30
DK-2650 Hvidovre
Denmark
Tel: +45 386 22828
E-mail: torkil@xxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx