radosgw: Strange behavior in a 2-zone configuration

Hello, all!
I have successfully created a 2-zone cluster (se and se2). But after I put 10k items into the cluster via radosgw, the radosgw machines send a huge number of GET /admin/log requests to each other. It looks like this:
2017-03-03 17:31:17.897872 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
2017-03-03 17:31:17.944212 7f21ca0a5700 1 civetweb: 0x7f2200015510: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
2017-03-03 17:31:17.945363 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
2017-03-03 17:31:17.988330 7f21ca0a5700 1 civetweb: 0x7f2200015510: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
2017-03-03 17:31:18.005993 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
2017-03-03 17:31:18.006234 7f21c689e700 1 civetweb: 0x7f221c011260: 10.30.18.24 - - [03/Mar/2017:17:31:17 +0500] "GET /admin/log/ HTTP/1.1" 200 0 - -
That is up to 2k rps! Does anybody know what this is?
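For reference, the per-second rate can be counted straight from the civetweb log with something like this (log path taken from my config below):

grep 'GET /admin/log' /var/log/radosgw/client.radosgw.se-k8-2.log \
    | awk '{print $1, $2}' | cut -d. -f1 | uniq -c | sort -rn | head

Each output line is then the number of these requests logged within one second.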
Tcpdump shows that the request is:
GET /admin/log/?type=data&id=100&info&rgwx-zonegroup=bfe2e3bb-2040-4b1a-9ccb-ab5347ce3017 HTTP/1.1
Host: se2.local
Accept: */*
Transfer-Encoding: chunked
AUTHORIZATION: AWS hEY2W7nW3tdodGrsnrdv:v6+m2FGGhqCSDQteGJ4w039X1uw=
DATE: Fri Mar 3 12:32:20 2017
Expect: 100-continue
and the answer is:

...2...m{"marker":"1_1488542463.536646_1448.1","last_update":"2017-03-03 12:01:03.536646Z"}
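If I strip the chunked transfer-coding bytes, the body is the info reply for a single data log shard (the request above asks for shard id=100): its current marker and last_update. This is what the peer zone polls for each of the 128 data log shards during incremental sync. The same markers should be visible locally with something like:

radosgw-admin datalog status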



Everything is installed on:
OS: Ubuntu 16.04
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
radosgw-admin sync status
2017-03-03 17:36:20.146017 7f7a72b5ea00 0 error in read_id for id : (2) No such file or directory
2017-03-03 17:36:20.147015 7f7a72b5ea00 0 error in read_id for id : (2) No such file or directory
          realm d9ed5678-5734-4609-bf7a-fe3d5f700b23 (s)
      zonegroup bfe2e3bb-2040-4b1a-9ccb-ab5347ce3017 (se)
           zone 9b212551-a7cf-4aaa-9ef6-b18a31a6e032 (se-k8)
  metadata sync no sync (zone is master)
      data sync source: 029e0f49-f4dc-4f29-8855-bcc23a8bbcd9 (se2-k12)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
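The same view from the other side can be taken by pointing the command at the second zone, e.g.:

radosgw-admin sync status --rgw-zone=se2-k12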


My config files are:
[client.radosgw.se2-k12-2]
rgw data = /var/lib/ceph/radosgw/ceph-radosgw.se2-k12-2
rgw zonegroup = se
rgw zone = se2-k12
#rgw zonegroup root pool = se.root
#rgw zone root pool = se.root
keyring = /etc/ceph/bak.client.radosgw.se2-k12-2.keyring
rgw host = cbrgw04
rgw dns name = se2.local
log file = /var/log/radosgw/client.radosgw.se2-k12-2.log
rgw_frontends = "civetweb num_threads=50 port=80"
rgw cache lru size = 10
rgw cache enabled = false
#debug rgw = 20
rgw enable ops log = false
#log to stderr = false
rgw enable usage log = false
rgw swift versioning enabled = true
rgw swift url = http://se2.local/
rgw override bucket index max shards = 20
rgw print continue = false

[client.radosgw.se-k8-2]
rgw data = /var/lib/ceph/radosgw/ceph-radosgw.se-k8-2
rgw zonegroup = se
rgw zone = se-k8
#rgw zonegroup root pool = .se.root
#rgw zone root pool = .se.root
keyring = /etc/ceph/ceph.client.radosgw.se-k8-2.keyring
rgw host = cnrgw02
rgw dns name = se.local
log file = /var/log/radosgw/client.radosgw.se-k8-2.log
rgw_frontends = "civetweb num_threads=100 port=80"
rgw cache enabled = false
rgw cache lru size = 10
#debug rgw = 20
rgw enable ops log = false
#log to stderr = false
rgw enable usage log = false
rgw swift versioning enabled = true
rgw swift url = http://se.local
rgw override bucket index max shards = 20
rgw print continue = false
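
To verify that the /admin/log traffic comes from the gateways' sync threads and not from clients, a temporary test override (not a fix) on one gateway should work, as far as I understand:

[client.radosgw.se-k8-2]
# test only: stop this gateway's multisite sync thread;
# data replication pauses while this is set
rgw run sync thread = false

If the requests on the peer stop after a restart with this set, they are sync polling.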
radosgw-admin zonegroup get
{
    "id": "bfe2e3bb-2040-4b1a-9ccb-ab5347ce3017",
    "name": "se",
    "api_name": "se",
    "is_master": "true",
    "endpoints": [
        "http:\/\/se.local:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "9b212551-a7cf-4aaa-9ef6-b18a31a6e032",
    "zones": [
        {
            "id": "029e0f49-f4dc-4f29-8855-bcc23a8bbcd9",
            "name": "se2-k12",
            "endpoints": [
                "http:\/\/se2.local:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        },
        {
            "id": "9b212551-a7cf-4aaa-9ef6-b18a31a6e032",
            "name": "se-k8",
            "endpoints": [
                "http:\/\/se.local:80"
            ],
            "log_meta": "true",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false"
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "d9ed5678-5734-4609-bf7a-fe3d5f700b23"
}
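
Since log_data is "true" for both zones, each gateway is expected to poll the other's data log shards; what surprises me is only the rate. The committed period and endpoints can be double-checked with:

radosgw-admin period get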


-- 
K K