Hi list,
some of you also use Ceph as a storage backend for OpenStack, so maybe
you can help me out.
Last week we upgraded our Mitaka cloud to Ocata (via Newton, of
course), and also upgraded the cloud nodes from openSUSE Leap 42.1 to
Leap 42.3. There were some issues, as expected, but luckily no
showstoppers.
So the cloud is up and working again, but our monitoring shows a high
CPU load for the cinder-volume service on the control node, and a
tcpdump confirms lots of connections from it to the ceph nodes. Since
all the clients run on the compute nodes, we are wondering what cinder
actually does on the control node apart from initializing connections,
which should only be relevant for new volumes. The traffic to the ceph
nodes references lots of rbd header objects, e.g. rb.0.24d5b04[...].
I expect this kind of traffic from the compute nodes, of course, but
why does the control node also establish so many connections?
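
To illustrate what I mean, here is a minimal Python sketch of the kind
of per-image polling that would produce exactly this header traffic.
It is not cinder's actual code; the pool name 'volumes', the cephx user
'cinder' and the ceph.conf path are just assumptions for the example:

  import rados
  import rbd

  # Sketch only: open every image in the cinder pool read-only,
  # the way a periodic stats/usage task might.
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx('volumes')
      try:
          names = rbd.RBD().list(ioctx)
          total = 0
          for name in names:
              # Opening an image reads its header object from the cluster,
              # so a loop like this touches every rbd header in the pool.
              with rbd.Image(ioctx, name, read_only=True) as image:
                  total += image.size()
          print("%d images, %d provisioned bytes" % (len(names), total))
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()

If cinder-volume runs something like this periodically (e.g. to gather
capacity or usage statistics), that alone could explain regular
connections from the control node even though no volume I/O goes
through it.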
I'd appreciate any insight!
Regards,
Eugen
--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : eblock@xxxxxx
Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983