On Fri, Dec 15, 2017 at 05:21:37PM +0000, David Turner wrote:
> We're trying to build an auditing system for when a user key pair performs
> an operation on a bucket (put, delete, creating a bucket, etc.), and so far
> we were only able to find this information in the level 10 debug logging in
> the rgw system logs.
>
> We noticed that our rgw log pool has been growing somewhat indefinitely, and
> we had to move it off of the NVMes and put it on HDDs due to its growing
> size. What is in that pool, and how can it be accessed? I haven't found
> the right terms to search for to find anything about what's in this pool on
> the ML or on Google.
>
> What I would like to do is export the log to ElasticSearch, clean up the log
> on occasion, and hopefully find the information we're looking for to
> fulfill our user auditing without having our RGW daemons running at debug
> level 10 (which is a lot of logging!).

I have a terrible solution in HAProxy's Lua that recognizes most S3
operations and emits UDP/log messages based on that. It's not ideal and has
LOTS of drawbacks (mostly in duplication of code, including the S3 signature
logic).

I'd be very interested in writing useful log data out either on a different
channel or as part of the HTTP response (key, bucket, object, operation,
actual bytes moved [especially for in-place S3 COPY]).

--
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation Asst. Treasurer
E-Mail   : robbat2@xxxxxxxxxx
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
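For illustration, the kind of request classification the HAProxy/Lua hack
performs can be sketched as a small function that maps an S3 request's HTTP
method, path-style URL, and headers to a coarse operation name for audit
logging. This is a hypothetical Python sketch, not the actual Lua code, and
the operation names and matching rules are assumptions (real S3 requests also
involve query-string sub-resources, virtual-hosted-style addressing, and
signature handling, which are omitted here):

```python
def classify_s3_op(method, path, headers=None):
    """Map an S3 request (method, path-style URL, headers) to an
    operation name for audit logging. Illustrative only."""
    headers = headers or {}
    # Path-style addressing: "/bucket" or "/bucket/key..."
    parts = path.lstrip("/").split("/", 1)
    bucket = parts[0] if parts and parts[0] else None
    key = parts[1] if len(parts) > 1 and parts[1] else None

    if method == "PUT":
        if key is None:
            return "CreateBucket"
        if "x-amz-copy-source" in headers:
            return "CopyObject"  # server-side (in-place) S3 COPY
        return "PutObject"
    if method == "DELETE":
        return "DeleteObject" if key else "DeleteBucket"
    if method == "GET":
        return "GetObject" if key else "ListObjects"
    if method == "HEAD":
        return "HeadObject" if key else "HeadBucket"
    return "Unknown"
```

Each classified request would then be emitted as a log line (e.g. over UDP)
together with the bucket, key, and byte counts; the duplication drawback
mentioned above comes from having to re-implement exactly this kind of
request parsing outside RGW itself.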
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com