The firewalld service 'ceph' includes the range of ports required.
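(For anyone finding this thread later: you can confirm what the predefined service actually opens on your nodes. Exact output depends on your firewalld version, so treat this as a sketch.)

```shell
# Inspect the predefined Ceph services shipped with firewalld.
# On recent versions, 'ceph' covers the OSD/MGR range (6800-7300/tcp)
# and the MON ports live in the separate 'ceph-mon' service.
firewall-cmd --info-service=ceph
firewall-cmd --info-service=ceph-mon

# Check which services are enabled in the active zone on this node.
firewall-cmd --zone=public --list-services
```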
I'm not sure why it helped, but after rebooting each OSD node the issue went away!
On Thu, 25 Jul 2019 at 23:14, <DHilsbos@xxxxxxxxxxxxxx> wrote:
Nathan;
I'm not an expert on firewalld, but shouldn't you have a list of open ports?
ports: ?????
Here's the configuration on my test cluster:
public (active)
target: default
icmp-block-inversion: no
interfaces: bond0
sources:
services: ssh dhcpv6-client
ports: 6789/tcp 3300/tcp 6800-7300/tcp 8443/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
trusted (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: bond1
sources:
services:
ports: 6789/tcp 3300/tcp 6800-7300/tcp 8443/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
I use interfaces as selectors, but I would think source selectors would work the same.
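For reference, binding networks to a zone by source (rather than by interface) would look roughly like this; the zone name and networks below are the ones from your cluster, so adjust as needed:

```shell
# Bind the client and replication networks to the public zone by source.
firewall-cmd --zone=public --add-source=172.20.22.0/24 --permanent
firewall-cmd --zone=public --add-source=172.20.23.0/24 --permanent
firewall-cmd --reload

# Verify which zones are active and which selectors (interfaces/sources)
# each one carries.
firewall-cmd --get-active-zones
```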
You might start by adding the MON ports to the firewall on the MONs:
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=3300/tcp --permanent
firewall-cmd --reload
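Alternatively, if your firewalld ships the predefined Ceph services (check with --get-services first; they are present in recent versions), you can enable those instead of listing raw ports:

```shell
# Enable the predefined Ceph services instead of adding ports by hand.
# 'ceph-mon' covers 3300/tcp and 6789/tcp; 'ceph' covers 6800-7300/tcp.
firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --reload
```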
Thank you,
Dominic L. Hilsbos, MBA
Director – Information Technology
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx
www.PerformAir.com
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Nathan Harper
Sent: Thursday, July 25, 2019 2:08 PM
To: ceph-users@xxxxxxxx
Subject: Re: ceph-ansible firewalld blocking ceph comms
This is a new issue for us; we did not have the same problem running the same activity on our test system.
Regards,
Nathan
On 25 Jul 2019, at 22:00, solarflow99 <solarflow99@xxxxxxxxx> wrote:
I used ceph-ansible just fine, never had this problem.
On Thu, Jul 25, 2019 at 1:31 PM Nathan Harper <nathan.harper@xxxxxxxxxxx> wrote:
Hi all,
We've run into a strange issue with one of our clusters managed with ceph-ansible. We're adding some RGW nodes to our cluster, and so re-ran site.yml against the cluster. The new RGWs were added successfully, but....
When we did, we started to get slow requests, effectively across the whole cluster. Quickly we realised that the firewall was now (apparently) blocking Ceph communications. I say apparently, because the config looks correct:
[root@osdsrv05 ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces:
sources: 172.20.22.0/24 172.20.23.0/24
services: ssh dhcpv6-client ceph
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
If we drop the firewall, everything goes back to healthy. All the clients (Openstack cinder) are on the 172.20.22.0 network (172.20.23.0 is the replication network). Has anyone seen this?
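One way to see what the firewall is actually rejecting (rather than what the config says it allows) is to turn on logging of denied packets; a sketch, assuming journald is collecting kernel messages on the OSD nodes:

```shell
# Log every packet firewalld drops or rejects.
firewall-cmd --set-log-denied=all

# Denied packets appear as kernel log messages (FINAL_REJECT/DROP lines);
# the DPT= field shows the blocked destination port, so Ceph traffic
# being caught shows up as ports 6789, 3300, or the 6800-7300 range.
journalctl -k -f

# Turn logging off again when done, as it is noisy.
firewall-cmd --set-log-denied=off
```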
--
Nathan Harper // IT Systems Lead
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Nathan Harper // IT Systems Lead
CFMS Services Ltd // Bristol & Bath Science Park // Dirac Crescent // Emersons Green // Bristol // BS16 7FR
CFMS Services Ltd is registered in England and Wales No 05742022 - a subsidiary of CFMS Ltd
CFMS Services Ltd registered office // 43 Queens Square // Bristol // BS1 4QP