Has anyone figured out an elegant way to emit this from inside
cephadm-managed/containerized Ceph, so it can be handled via the host's
journald and processed/shipped? We had gone down that path before, but
decided to hold off based on the suggestion that Lua-based scripting
might be a better option.

David

On Fri, May 7, 2021 at 4:21 PM Matt Benjamin <mbenjami@xxxxxxxxxx> wrote:
>
> Hi David,
>
> I think the solution is most likely the ops log. It is called for
> every op, and has the transaction id.
>
> Matt
>
> On Fri, May 7, 2021 at 4:58 PM David Orman <ormandj@xxxxxxxxxxxx> wrote:
> >
> > Hi Yuval,
> >
> > We've managed to get an upgrade done with the 16.2.3 release in a
> > testing cluster, and we've been able to implement some of the logging
> > I need via this mechanism, but the logs are emitted only when
> > debug_rgw is set to 20. I don't need to log any of that level of data
> > (we use centralized logging and the sheer volume of this output is
> > staggering); I'm just trying to get the full request log, including
> > the transaction ID, so I can match it up with the logging we do on
> > our load balancer solution. Is there another mechanism to emit these
> > logs at normal log levels? RGWDebugLog() doesn't appear to be what
> > I'm actually looking for. My intent, in the end, is to emit JSON logs
> > using this mechanism, with all of the required fields for each
> > request. The current "beast: " log lines don't contain the
> > information we need, such as the txid, which is what we're attempting
> > to solve for - but we can't afford to have full debug logging enabled
> > in production clusters.
> >
> > Thanks!
> > David
> >
> > On Thu, Apr 1, 2021 at 11:21 AM Yuval Lifshitz <ylifshit@xxxxxxxxxx> wrote:
> > >
> > > Hi David,
> > > I don't have a good idea for "octopus" (other than the ops log), but
> > > you can do that (and more) in "pacific" using Lua scripting on the RGW:
> > > https://docs.ceph.com/en/pacific/radosgw/lua-scripting/
> > >
> > > Yuval
> > >
> > > On Thu, Apr 1, 2021 at 7:11 PM David Orman <ormandj@xxxxxxxxxxxx> wrote:
> > >>
> > >> Hi,
> > >>
> > >> Is there any way to log the x-amz-request-id along with the request
> > >> in the rgw logs? We're using beast and don't see an option in the
> > >> configuration documentation to add headers to the request lines. We
> > >> use centralized logging and would like to be able to search all
> > >> layers of the request path (edge, LBs, Ceph, etc.) by
> > >> x-amz-request-id.
> > >>
> > >> Right now, all we see is this:
> > >>
> > >> debug 2021-04-01T15:55:31.105+0000 7f54e599b700 1 beast:
> > >> 0x7f5604c806b0: x.x.x.x - - [2021-04-01T15:55:31.105455+0000] "PUT
> > >> /path/object HTTP/1.1" 200 556 - "aws-sdk-go/1.36.15 (go1.15.3;
> > >> linux; amd64)" -
> > >>
> > >> We've also tried this:
> > >>
> > >> ceph config set global rgw_enable_ops_log true
> > >> ceph config set global rgw_ops_log_socket_path /tmp/testlog
> > >>
> > >> After doing this, inside the rgw container, we can run
> > >> socat - UNIX-CONNECT:/tmp/testlog and see the log entries we want
> > >> being recorded, but there has to be a better way to do this, where
> > >> the logs are emitted like the beast request logs above, so that we
> > >> can handle them using journald. If there's an alternative that would
> > >> accomplish the same thing, we're very open to suggestions.
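> > >>
> > >> To be concrete, the kind of per-host glue we're hoping to avoid would
> > >> be a small wrapper that connects to the ops log socket and feeds the
> > >> entries into journald, roughly like this (untested sketch; it assumes
> > >> the socket path above has been exposed to the host, and the journal
> > >> tag is arbitrary):
> > >>
> > >> socat -u UNIX-CONNECT:/tmp/testlog STDOUT | systemd-cat -t rgw-ops-log
> > >>
> > >> That would presumably get the entries into the journal, but it's an
> > >> extra moving piece per host rather than the RGW just logging them the
> > >> way beast logs requests.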
> > >>
> > >> Thank you,
> > >> David
>
> --
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel. 734-821-5101
> fax. 734-769-8938
> cel. 734-216-5309
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx