Re: Public Network Meaning

I thought that it was easy but apparently it's not!

I have the following in my conf file


mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
public_network = 192.168.1.0/24
mon_initial_members = fu,rai,jin


but the 15.12.6.21 link is still being saturated...

Any ideas why?

Should I also set a cluster network?

Should I list each OSD in the conf file?
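For reference, a conf fragment with a separate back-end network might look like the sketch below. The 10.0.0.0/24 subnet is only a placeholder for whatever dedicated replication subnet is available; individual per-OSD entries are not required for this:

```ini
# ceph.conf sketch -- 10.0.0.0/24 is a placeholder back-end subnet
[global]
mon_host = 192.168.1.100,192.168.1.101,192.168.1.102
mon_initial_members = fu,rai,jin
public_network = 192.168.1.0/24    ; client <-> OSD traffic
cluster_network = 10.0.0.0/24      ; OSD <-> OSD replication traffic
```

With only public_network set, replication shares the same 192.168.1.0/24 links as client traffic.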


Regards,


George





Andrija,

thanks a lot for the useful info!

I would also like to thank "Kingrat" at the IRC channel for his
useful advice!


I was under the wrong impression that "public" meant the externally reachable network.

So I thought that public = external = Internet, and therefore used that one in my conf.

I understand now that I should have set Ceph's public_network to what I call the "internal" network, the one over which all the machines talk directly to each other.


Thank you all for the feedback!


Regards,


George


The public network carries client-to-OSD traffic, and if you have NOT explicitly defined a cluster network, then OSD-to-OSD replication also takes place over that same network.

Otherwise, you can define both a public and a cluster (private) network, so that OSD replication happens over dedicated NICs (the cluster network) and is therefore faster.

If, for example, the replica count on a pool is 3, then each 1 GB of data written to a particular OSD generates 2 GB of additional writes to the replicas (3 GB written in total), which ideally takes place over separate NICs to speed things up.
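The replication arithmetic can be sketched as a toy calculation (this is not Ceph code, just a model of the traffic): with a pool replica count of N, each client write to the primary OSD is copied to N - 1 replica OSDs over the cluster network.

```python
# Toy model of Ceph replication traffic (not Ceph code).
# With pool size (replica count) N, each client write to the primary OSD
# is copied to N - 1 replica OSDs over the cluster network.

def replica_traffic_gb(client_write_gb: float, pool_size: int) -> float:
    """Extra OSD-to-OSD traffic generated by replication."""
    return client_write_gb * (pool_size - 1)

def total_write_gb(client_write_gb: float, pool_size: int) -> float:
    """Total data written across all OSDs, primary copy included."""
    return client_write_gb * pool_size

print(replica_traffic_gb(1, 3))  # 2 GB of replica traffic per 1 GB written
print(total_write_gb(1, 3))      # 3 GB written in total across OSDs
```

This replica traffic is exactly what a dedicated cluster network keeps off the public NICs.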

On 14 March 2015 at 17:43, Georgios Dimitrakakis  wrote:

Hi all!!

What is the meaning of public_network in ceph.conf?

Is it the network that OSDs are talking and transferring data?

I have two nodes with two IP addresses each: one on the internal
network 192.168.1.0/24 and one on the external network 15.12.6.*

I see the following in my logs:

osd.0 is down since epoch 2204, last address 15.12.6.21:6826/33094
osd.1 is down since epoch 2206, last address 15.12.6.21:6817/32463
osd.2 is down since epoch 2198, last address 15.12.6.21:6843/34921
osd.3 is down since epoch 2200, last address 15.12.6.21:6838/34208
osd.4 is down since epoch 2202, last address 15.12.6.21:6831/33610
osd.5 is down since epoch 2194, last address 15.12.6.21:6858/35948
osd.7 is down since epoch 2192, last address 15.12.6.21:6871/36720
osd.8 is down since epoch 2196, last address 15.12.6.21:6855/35354

I've managed to add a second node, and during rebalancing I see that
data is transferred over the internal 192.* network, but the external
link is also saturated!

What is being transferred over that link?

Any help much appreciated!

Regards,

George
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--

Andrija Panić






