Re: Dedicated radosgw gateways


 



I've searched for rgw_enable_lc_threads and rgw_enable_gc_threads a bit,

but there is little information about these settings. Is there any
documentation in the wild for them?

Are they enabled by default?
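
As far as I can tell, both default to true, meaning every radosgw runs the
lifecycle and garbage collection threads unless you disable them. The
built-in default and a short description should be visible with:

# ceph config help rgw_enable_lc_threads
# ceph config help rgw_enable_gc_threads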



On Thu, May 18, 2023 at 9:15 PM Tarrago, Eli (RIS-BCT) <
Eli.Tarrago@xxxxxxxxxxxxxxxxxx> wrote:

> Adding a bit more context to this thread.
>
> I added an additional radosgw to each cluster. Radosgw 1-3 are customer
> facing. Radosgw 4 is dedicated to syncing.
>
> Radosgw 1-3 now have these two additional lines:
> rgw_enable_lc_threads = False
> rgw_enable_gc_threads = False
>
> Radosgw 4 has this additional line:
> rgw_sync_obj_etag_verify = True
>
> The logs on all of the radosgw's appear to be identical; here is an
> example that is representative of every server. Notice that the source IP
> addresses span 1-4, whereas I expected the traffic to come only from 4.
>
> Is this to be expected?
>
> My understanding of this thread is that this traffic would be restricted
> to radosgw 04.
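>
> (Looking at the zonegroup/zone output below, all four gateways in each
> zone are listed as endpoints, so I suspect the sync peers simply spread
> their requests across every listed endpoint; see the sketch after the
> dump.)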
>
>
> Example Ceph Conf for a single node, this is RadosGw 01
>
> [client.rgw.west01.rgw0]
> host = west01
> keyring = /var/lib/ceph/radosgw/west-rgw.west01.rgw0/keyring
> log file = /var/log/ceph/west-rgw-west01.rgw0.log
> rgw frontends = beast port=8080 num_threads=500
> rgw_dns_name = west01.example.com
> rgw_max_chunk_size = 67108864
> rgw_obj_stripe_size = 67108864
> rgw_put_obj_min_window_size = 67108864
> rgw_zone = rgw-west
> rgw_enable_lc_threads = False
> rgw_enable_gc_threads = False
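>
> A quick way to verify that the running daemon actually picked these up is
> the admin socket; the socket name below is a guess, adjust it to the asok
> path on your node:
>
> # ceph --admin-daemon /var/run/ceph/<cluster>-client.rgw.west01.rgw0.asok config get rgw_enable_lc_threads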
>
> ------------
>
> Example Logs:
>
> 2023-05-18T19:06:48.295+0000 7fb295f83700  1 beast: 0x7fb3e82b26f0:
> 10.10.10.1 - synchronization-user [18/May/2023:19:06:48.107 +0000] "GET
> /admin/log/?type=data&id=69&marker=1_xxxx&extra-info=true&rgwx-zonegroup=xxxxx
> HTTP/1.1" 200 44 - - - latency=0.188007131s
> 2023-05-18T19:06:48.371+0000 7fb1dd612700  1 ====== starting new request
> req=0x7fb3e80ae6f0 =====
> 2023-05-18T19:06:48.567+0000 7fb1dd612700  1 ====== req done
> req=0x7fb3e80ae6f0 op status=0 http_status=200 latency=0.196007445s ======
> 2023-05-18T19:06:48.567+0000 7fb1dd612700  1 beast: 0x7fb3e80ae6f0:
> 10.10.10.3 - synchronization-user [18/May/2023:19:06:48.371 +0000] "GET
> /admin/log/?type=data&id=107&marker=1_xxxx&extra-info=true&rgwx-zonegroup=xxxx
> HTTP/1.1" 200 44 - - - latency=0.196007445s
> 2023-05-18T19:06:49.023+0000 7fb290f79700  1 ====== starting new request
> req=0x7fb3e81b06f0 =====
> 2023-05-18T19:06:49.023+0000 7fb28bf6f700  1 ====== req done
> req=0x7fb3e81b06f0 op status=0 http_status=200 latency=0.000000000s ======
> 2023-05-18T19:06:49.023+0000 7fb28bf6f700  1 beast: 0x7fb3e81b06f0:
> 10.10.10.2 - synchronization-user [18/May/2023:19:06:49.023 +0000] "GET
> /admin/log?bucket-instance=ceph-bucketxxx%3A81&format=json&marker=00000020447.3609723.6&type=bucket-index&rgwx-zonegroup=xxx
> HTTP/1.1" 200 2 - - - latency=0.000000000s
> 2023-05-18T19:06:49.147+0000 7fb27af4d700  1 ====== starting new request
> req=0x7fb3e81b06f0 =====
> 2023-05-18T19:06:49.151+0000 7fb27af4d700  1 ====== req done
> req=0x7fb3e81b06f0 op status=0 http_status=200 latency=0.004000151s ======
> 2023-05-18T19:06:49.475+0000 7fb280f59700  1 beast: 0x7fb3e82316f0:
> 10.10.10.4 - synchronization-user [18/May/2023:19:06:49.279 +0000] "GET
> /admin/log/?type=data&id=58&marker=1_xxxx.1&extra-info=true&rgwx-zonegroup=xxxx
> HTTP/1.1" 200 312 - - - latency=0.196007445s
> 2023-05-18T19:06:49.987+0000 7fb27c750700  1 ====== starting new request
> req=0x7fb3e81b06f0 =====
>
>
>
>
> radosgw-admin zonegroup get
> {
>     "id": "x",
>     "name": "eastWestceph",
>     "api_name": "EastWestCeph",
>     "is_master": "true",
>     "endpoints": [
>         "http://east01.noam.lnrm.net:8080",
>         "http://east02.noam.lnrm.net:8080",
>         "http://east03.noam.lnrm.net:8080",
>         "http://east04.noam.lnrm.net:8080", << ---- sync node
>         "http://west01.noam.lnrm.net:8080",
>         "http://west02.noam.lnrm.net:8080",
>         "http://west03.noam.lnrm.net:8080",
>         "http://west04.noam.lnrm.net:8080" << ---- sync node
>     ],
> .......
>     ],
>     "hostnames_s3website": [],
>     "master_zone": "x",
>     "zones": [
>         {
>             "id": "x",
>             "name": "rgw-west",
>             "endpoints": [
>                 "http://west01.noam.lnrm.net:8080",
>                 "http://west02.noam.lnrm.net:8080",
>                 "http://west03.noam.lnrm.net:8080",
>                 "http://west04.noam.lnrm.net:8080" << -- sync node
>             ],
>             "log_meta": "false",
>             "log_data": "true",
>             "bucket_index_max_shards": 0,
>             "read_only": "false",
>             "tier_type": "",
>             "sync_from_all": "true",
>             "sync_from": [],
>             "redirect_zone": ""
>         },
>         {
>             "id": "x",
>             "name": "rgw-east",
>             "endpoints": [
>                 "http://east01.noam.lnrm.net:8080",
>                 "http://east02.noam.lnrm.net:8080",
>                 "http://east03.noam.lnrm.net:8080",
>                 "http://east04.noam.lnrm.net:8080"   << -- sync node
> ....
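>
> If those endpoint lists are what the sync peers use, one way to test might
> be to point each zone's endpoints only at its sync node and commit a new
> period. A sketch using the names from the dump above (run against the
> master zone; I'm not certain this is the recommended approach):
>
> # radosgw-admin zone modify --rgw-zone=rgw-west --endpoints=http://west04.noam.lnrm.net:8080
> # radosgw-admin zone modify --rgw-zone=rgw-east --endpoints=http://east04.noam.lnrm.net:8080
> # radosgw-admin period update --commit
>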
> From: Ulrich Klein <Ulrich.Klein@xxxxxxxxxxxxxx>
> Date: Tuesday, May 16, 2023 at 10:22 AM
> To: Konstantin Shalygin <k0ste@xxxxxxxx>
> Cc: Michal Strnad <michal.strnad@xxxxxxxxx>, ceph-users <
> ceph-users@xxxxxxx>
> Subject:  Re: Dedicated radosgw gateways
>
>
>
>
> Hi,
>
> Might be a dumb question …
> I'm wondering how I can set those config variables in some but not all RGW
> processes?
>
> I'm on a cephadm 17.2.6 cluster. On 3 nodes I have RGWs. The ones on 8080
> are behind haproxy for users; the ones on 8081 I'd like to use for sync
> only.
>
> # ceph orch ps | grep rgw
> rgw.max.maxvm4.lmjaef  maxvm4   *:8080       running (51m)     4s ago
>  2h     262M        -  17.2.6   d007367d0f3c  315f47a4f164
> rgw.max.maxvm4.lwzxpf  maxvm4   *:8081       running (51m)     4s ago
>  2h     199M        -  17.2.6   d007367d0f3c  7ae82e5f6ef2
> rgw.max.maxvm5.syxpnb  maxvm5   *:8081       running (51m)     4s ago
>  2h     137M        -  17.2.6   d007367d0f3c  c0635c09ba8f
> rgw.max.maxvm5.wtpyfk  maxvm5   *:8080       running (51m)     4s ago
>  2h     267M        -  17.2.6   d007367d0f3c  b4ad91718094
> rgw.max.maxvm6.ostneb  maxvm6   *:8081       running (51m)     4s ago
>  2h     150M        -  17.2.6   d007367d0f3c  83b2af8f787a
> rgw.max.maxvm6.qfulra  maxvm6   *:8080       running (51m)     4s ago
>  2h     262M        -  17.2.6   d007367d0f3c  81d01bf9e21d
>
> # ceph config show rgw.max.maxvm4.lwzxpf
> Error ENOENT: no config state for daemon rgw.max.maxvm4.lwzxpf
>
> # ceph config set rgw.max.maxvm4.lwzxpf rgw_enable_lc_threads false
> Error EINVAL: unrecognized config target 'rgw.max.maxvm4.lwzxpf'
> (Not surprised)
>
> # ceph tell rgw.max.maxvm4.lmjaef get rgw_enable_lc_threads
> error handling command target: local variable 'poolid' referenced before
> assignment
>
> # ceph tell rgw.max.maxvm4.lmjaef set rgw_enable_lc_threads false
> error handling command target: local variable 'poolid' referenced before
> assignment
>
> Is there any way to set the config for specific RGWs in a containerized
> env?
>
> (ceph.conf doesn't work. It doesn't do anything and gets overwritten with
> a minimal version at "unpredictable" intervals.)
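>
> One more idea I have not tried yet: the RGW daemons should authenticate as
> client.<daemon-name>, so maybe the config target just needs a client.
> prefix, something like:
>
> # ceph config set client.rgw.max.maxvm4.lwzxpf rgw_enable_lc_threads false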
>
> Thanks for any ideas.
>
> Ciao, Uli
>
> > On 15. May 2023, at 14:15, Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
> >
> > Hi,
> >
> >> On 15 May 2023, at 14:58, Michal Strnad <michal.strnad@xxxxxxxxx>
> wrote:
> >>
> >> at Cephalocon 2023, it was mentioned several times that for service
> tasks such as data deletion via garbage collection or data replication in
> S3 via zoning, it is good to do them on dedicated radosgw gateways and not
> mix them with gateways used by users. How can this be achieved? How can we
> isolate these tasks? Will using dedicated keyrings instead of admin keys be
> sufficient? How do you operate this in your environment?
> >
> > Just:
> >
> > # don't put client traffic to "dedicated radosgw gateways"
> > # disable lc/gc on "gateways used by users" via `rgw_enable_lc_threads =
> false` & `rgw_enable_gc_threads = false`
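> >
> > e.g. in ceph.conf on each user-facing gateway (the instance name is a
> > placeholder):
> >
> > [client.rgw.<user-facing-instance>]
> > rgw_enable_lc_threads = false
> > rgw_enable_gc_threads = false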
> >
> >
> > k
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



