Unable to Retrieve Usage Logs in Ceph RGW with Boto3

Hi

I am currently working on retrieving user usage statistics and usage logs from Ceph
RGW using Boto3. I can successfully obtain the user's overall usage and
per-bucket usage with the following custom service model and code:

```python
import json

import boto3

# s3_client is assumed to be a Boto3 S3 client created against the RGW
# endpoint, with the custom service model below already loaded.
params = {
    "start-date": "2023-05-13T00:00:00Z",
    "end-date": "2023-05-13T23:59:59Z",
}
print(json.dumps(s3_client.get_usage_stats(**params), indent=2))
```

```json
{
  "version": 1.0,
  "merge": {
    "operations": {
      "GetUsageStats": {
        "name": "GetUsageStats",
        "http": {
          "method": "GET",
          "requestUri": "/?usage&start-date={start-date}&end-date={end-date}",
          "responseCode": 200
        },
        "input": {
          "shape": "GetUsageLogInput"
        },
        "output": {
          "shape": "GetUsageStatsOutput"
        },
        "documentationUrl": "https://docs.ceph.com/docs/master/radosgw/s3/serviceops#get-usage-stats",
        "documentation": "<p>Get usage stats for the user</p>"
      }
    },
    "shapes": {
      "GetUsageLogInput": {
        "type": "structure",
        "members": {
          "start-date": {
            "shape": "Timestamp",
            "location": "uri",
            "documentation": "<p>The start date and time for the log entries to retrieve.</p>"
          },
          "end-date": {
            "shape": "Timestamp",
            "location": "uri",
            "documentation": "<p>The end date and time for the log entries to retrieve.</p>"
          }
        },
        "documentation": "<p>Input shape for the GetUsageLog operation.</p>"
      },
      "Timestamp": {
        "type": "string",
        "documentation": "<p>A timestamp in ISO 8601 format.</p>"
      },
      "GetUsageStatsOutput": {
        "type": "structure",
        "members": {
          "Summary": {
            "shape": "UsageStatsSummary",
            "documentation": "<p>Summary of usage stats.</p>"
          },
          "CapacityUsed": {
            "shape": "CapacityUsedUsage",
            "documentation": "<p>Usage stats for capacity used.</p>"
          }
        }
      },
      "CapacityUsedUsage": {
        "type": "structure",
        "members": {
          "User": {
            "shape": "UserUsageEntries",
            "documentation": "<p>Usage stats for the user.</p>"
          }
        }
      },
      "UserUsageEntries": {
        "type": "structure",
        "members": {
          "Buckets": {
            "shape": "BucketsUsageEntries",
            "documentation": "<p>Usage stats for buckets.</p>"
          }
        }
      },
      "BucketsUsageEntries": {
        "type": "structure",
        "members": {
          "Entry": {
            "shape": "UsageEntries",
            "documentation": "<p>List of usage entries for buckets.</p>"
          }
        }
      },
      "UsageEntries": {
        "type": "list",
        "member": {
          "shape": "BucketEntry"
        },
        "documentation": "<p>List of usage entries.</p>"
      },
      "BucketEntry": {
        "type": "structure",
        "members": {
          "Bucket": {
            "shape": "String",
            "documentation": "<p>Name of the bucket.</p>"
          },
          "Bytes_Rounded": {
            "shape": "Bytes_Rounded",
            "documentation": "<p>Number of bytes (rounded).</p>"
          },
          "Bytes": {
            "shape": "Bytes",
            "documentation": "<p>Number of bytes.</p>"
          }
        }
      },
      "UsageStatsSummary": {
        "type": "structure",
        "members": {
          "QuotaMaxBytes": {
            "shape": "QuotaMaxBytes",
            "documentation": "<p>Maximum quota in bytes.</p>"
          },
          "QuotaMaxBuckets": {
            "shape": "QuotaMaxBuckets",
            "documentation": "<p>Maximum quota for buckets.</p>"
          },
          "QuotaMaxObjCount": {
            "shape": "QuotaMaxObjCount",
            "documentation": "<p>Maximum quota for object count.</p>"
          },
          "QuotaMaxBytesPerBucket": {
            "shape": "QuotaMaxBytesPerBucket",
            "documentation": "<p>Maximum quota in bytes per bucket.</p>"
          },
          "QuotaMaxObjCountPerBucket": {
            "shape": "QuotaMaxObjCountPerBucket",
            "documentation": "<p>Maximum quota for object count per bucket.</p>"
          },
          "TotalBytes": {
            "shape": "TotalBytes",
            "documentation": "<p>Total number of bytes.</p>"
          },
          "TotalBytesRounded": {
            "shape": "TotalBytesRounded",
            "documentation": "<p>Total number of bytes (rounded).</p>"
          },
          "TotalEntries": {
            "shape": "TotalEntries",
            "documentation": "<p>Total number of entries.</p>"
          }
        }
      },
      "BytesReceived": {
        "type": "integer",
        "documentation": "<p>Number of bytes received.</p>"
      },
      "QuotaMaxBytes": {
        "type": "integer",
        "documentation": "<p>Maximum quota in bytes.</p>"
      },
      "QuotaMaxBuckets": {
        "type": "integer",
        "documentation": "<p>Maximum quota for buckets.</p>"
      },
      "QuotaMaxObjCount": {
        "type": "integer",
        "documentation": "<p>Maximum quota for object count.</p>"
      },
      "QuotaMaxBytesPerBucket": {
        "type": "integer",
        "documentation": "<p>Maximum quota in bytes per bucket.</p>"
      },
      "QuotaMaxObjCountPerBucket": {
        "type": "integer",
        "documentation": "<p>Maximum quota for object count per bucket.</p>"
      },
      "TotalBytesRounded": {
        "type": "integer",
        "documentation": "<p>Total number of bytes (rounded).</p>"
      },
      "TotalBytes": {
        "type": "integer",
        "documentation": "<p>Total number of bytes.</p>"
      },
      "TotalEntries": {
        "type": "integer",
        "documentation": "<p>Total number of entries.</p>"
      },
      "String": {
        "type": "string",
        "documentation": "<p>A string value.</p>"
      },
      "Bytes_Rounded": {
        "type": "integer",
        "documentation": "<p>Number of bytes (rounded).</p>"
      },
      "Bytes": {
        "type": "integer",
        "documentation": "<p>Number of bytes.</p>"
      }
    },
    "documentation": "<p>Model for retrieving usage statistics.</p>"
  }
}
```

and this is my output:

```json
{
  "Summary": {
    "QuotaMaxBytes": -1,
    "QuotaMaxBuckets": 1000,
    "QuotaMaxObjCount": -1,
    "QuotaMaxBytesPerBucket": -1,
    "QuotaMaxObjCountPerBucket": -1,
    "TotalBytes": 52696,
    "TotalBytesRounded": 53248,
    "TotalEntries": 1
  },
  "CapacityUsed": {
    "User": {
      "Buckets": {
        "Entry": [
          {
            "Bucket": "opa-test",
            "Bytes_Rounded": 53248,
            "Bytes": 52696
          },
          {
            "Bucket": "ramfrom",
            "Bytes_Rounded": 0,
            "Bytes": 0
          }
        ]
      }
    }
  }
}
```
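
For clarity, this is how I walk that response as a plain dict (the values below
are copied from the output above):

```python
# resp mirrors the parsed response shown above.
resp = {
    "Summary": {"TotalBytes": 52696, "TotalBytesRounded": 53248, "TotalEntries": 1},
    "CapacityUsed": {
        "User": {
            "Buckets": {
                "Entry": [
                    {"Bucket": "opa-test", "Bytes_Rounded": 53248, "Bytes": 52696},
                    {"Bucket": "ramfrom", "Bytes_Rounded": 0, "Bytes": 0},
                ]
            }
        }
    },
}

entries = resp["CapacityUsed"]["User"]["Buckets"]["Entry"]
per_bucket = {e["Bucket"]: e["Bytes"] for e in entries}
print(per_bucket)  # {'opa-test': 52696, 'ramfrom': 0}

# Sanity check: per-bucket bytes sum to the reported total.
assert sum(per_bucket.values()) == resp["Summary"]["TotalBytes"]
```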


The issue I am facing is that the usage log entries are not included in the
response. Upon investigating the send_response() function of the
RGWGetUsage_ObjStore_S3 class, I found the following condition:


```cpp
if (show_log_sum) {
  formatter->open_array_section("Summary");
  map<string, rgw_usage_log_entry>::iterator siter;
  for (siter = summary_map.begin(); siter != summary_map.end(); ++siter) {
    const rgw_usage_log_entry& entry = siter->second;
    formatter->open_object_section("User");
    formatter->dump_string("User", siter->first);
    dump_usage_categories_info(formatter, entry, &categories);
    rgw_usage_data total_usage;
    entry.sum(total_usage, categories);
    formatter->open_object_section("Total");
    encode_json("BytesSent", total_usage.bytes_sent, formatter);
    encode_json("BytesReceived", total_usage.bytes_received, formatter);
    encode_json("Ops", total_usage.ops, formatter);
    encode_json("SuccessfulOps", total_usage.successful_ops, formatter);
    formatter->close_section(); // total
    formatter->close_section(); // user
  }
  formatter->close_section(); // summary
}
```


According to the code, the show_log_sum parameter must be true to display
the summary log entries, and I have confirmed that it is indeed set to true.
Additionally, the show_log_entries parameter must also be true to retrieve
the usage log entries, and I have verified that it is set to true as well.
I am using Ceph version 16.2.12, and I have set the RGW debug level to 99.
Here is a snippet of the log output:



```log
2023-05-13T20:39:17.111+0330 7f346da3d700 1 ====== starting new request req=
0x7f3504a0e620 =====
2023-05-13T20:39:17.147+0330 7f346da3d700 2 req 11854699226730900750
0.035999894s
initializing for trans_id = tx00000a4845a234864ad0e-00645fc43d-fb27-default
2023-05-13T20:39:17.283+0330 7f346da3d700 10 req 11854699226730900750
0.171999499s
rgw api priority: s3=8 s3website=7
2023-05-13T20:39:17.287+0330 7f346da3d700 10 req 11854699226730900750
0.175999492s
host=s3.abrak.stage.test.com
2023-05-13T20:39:17.291+0330 7f346da3d700 20 req 11854699226730900750
0.179999471s
subdomain= domain=s3.abrak.stage.test.com in_hosted_domain=1
in_hosted_domain_s3website=0
2023-05-13T20:39:17.291+0330 7f346da3d700 20 req 11854699226730900750
0.179999471s
final domain/bucket subdomain= domain=s3.abrak.stage.test.com
in_hosted_domain=1 in_hosted_domain_s3website=0 s->info.domain=s3.abrak.
stage.test.com s->info.request_uri=/
2023-05-13T20:39:17.315+0330 7f346da3d700 10 req 11854699226730900750
0.207999393s
meta>> HTTP_X_AMZ_CONTENT_SHA256
2023-05-13T20:39:17.319+0330 7f346da3d700 10 req 11854699226730900750
0.207999393s
meta>> HTTP_X_AMZ_DATE
2023-05-13T20:39:17.319+0330 7f346da3d700 10 req 11854699226730900750
0.207999393s
x>> x-amz-content-sha256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023-05-13T20:39:17.319+0330 7f346da3d700 10 req 11854699226730900750
0.207999393s
x>> x-amz-date:20230513T170916Z
2023-05-13T20:39:17.347+0330 7f346da3d700 10 req 11854699226730900750
0.235999316s
name: usage val:
2023-05-13T20:39:17.347+0330 7f346da3d700 10 req 11854699226730900750
0.235999316s
name: start-date val: 2023-05-13T00:00:00Z
2023-05-13T20:39:17.347+0330 7f346da3d700 10 req 11854699226730900750
0.235999316s
name: end-date val: 2023-05-13T23:59:59Z
2023-05-13T20:39:17.355+0330 7f346da3d700 20 req 11854699226730900750
0.243999273s
get_handler handler=26RGWHandler_REST_Service_S3
2023-05-13T20:39:17.383+0330 7f346da3d700 10 req 11854699226730900750
0.271999210s
handler=26RGWHandler_REST_Service_S3
2023-05-13T20:39:17.383+0330 7f346da3d700 2 req 11854699226730900750
0.271999210s
getting op 0
2023-05-13T20:39:17.423+0330 7f346da3d700 10 req 11854699226730900750
0.311999112s
s3:get_self_usage scheduling with throttler client=3 cost=1
2023-05-13T20:39:17.431+0330 7f346da3d700 10 req 11854699226730900750
0.319999069s
s3:get_self_usage op=23RGWGetUsage_ObjStore_S3
2023-05-13T20:39:17.431+0330 7f346da3d700 2 req 11854699226730900750
0.319999069s
s3:get_self_usage verifying requester
2023-05-13T20:39:17.435+0330 7f346da3d700 20 req 11854699226730900750
0.323999047s
s3:get_self_usage rgw::auth::StrategyRegistry::s3_main_strategy_t: trying
rgw::auth::s3::AWSAuthStrategy
2023-05-13T20:39:17.447+0330 7f346da3d700 20 req 11854699226730900750
0.335999012s
s3:get_self_usage rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::
S3AnonymousEngine
2023-05-13T20:39:17.451+0330 7f346da3d700 20 req 11854699226730900750
0.339999020s
s3:get_self_usage rgw::auth::s3::S3AnonymousEngine denied with reason=-1
2023-05-13T20:39:17.451+0330 7f346da3d700 20 req 11854699226730900750
0.339999020s
s3:get_self_usage rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::
LocalEngine
2023-05-13T20:39:17.455+0330 7f346da3d700 10 req 11854699226730900750
0.343998998s
v4 signature format =
2e4a95626b3633439d13f5769d6a0be142c30823192837c3599fbb3f758d7c40
2023-05-13T20:39:17.463+0330 7f346da3d700 10 req 11854699226730900750
0.351998985s
v4 credential format = OSDDH3XY9FPGHJ07O7JP/20230513/us-east-1/s3/
aws4_request
2023-05-13T20:39:17.463+0330 7f346da3d700 10 req 11854699226730900750
0.351998985s
access key id = OSDDH3XY9FPGHJ07O7JP
2023-05-13T20:39:17.463+0330 7f346da3d700 10 req 11854699226730900750
0.351998985s
credential scope = 20230513/us-east-1/s3/aws4_request
2023-05-13T20:39:17.471+0330 7f346da3d700 10 req 11854699226730900750
0.359998941s
canonical headers format = host:s3.abrak.stage.test.com
x-amz-content-sha256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230513T170916Z

2023-05-13T20:39:17.479+0330 7f346da3d700 10 req 11854699226730900750
0.367998898s
payload request hash =
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023-05-13T20:39:17.483+0330 7f346da3d700 10 req 11854699226730900750
0.371998906s
canonical request = GET
/
end-date=2023-05-13T23%3A59%3A59Z&start-date=2023-05-13T00%3A00%3A00Z&usage=
host:s3.abrak.stage.test.com
x-amz-content-sha256:
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230513T170916Z

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023-05-13T20:39:17.483+0330 7f346da3d700 10 req 11854699226730900750
0.371998906s
canonical request hash =
060e8a249e9f3708393aa7d2cfde262976bdd4d10216be12ca74ae7961e3994f
2023-05-13T20:39:17.483+0330 7f346da3d700 10 req 11854699226730900750
0.371998906s
string to sign = AWS4-HMAC-SHA256
20230513T170916Z
20230513/us-east-1/s3/aws4_request
060e8a249e9f3708393aa7d2cfde262976bdd4d10216be12ca74ae7961e3994f
2023-05-13T20:39:17.511+0330 7f346da3d700 10 req 11854699226730900750
0.399998844s
date_k = fe25d3b269a34fa9c86caf0b35d6a46791ddd7fbcd0650a212fb7528a2dce06d
2023-05-13T20:39:17.511+0330 7f346da3d700 10 req 11854699226730900750
0.399998844s
region_k = 03b05d2d23e0b150ae09d8baf554e8a8bc34887951f47b5925dead931c7faeaa
2023-05-13T20:39:17.511+0330 7f346da3d700 10 req 11854699226730900750
0.399998844s
service_k = d10551f64ed49675cde1f1b609ee5b54b513b077e2c00198ade53b689dee17df
2023-05-13T20:39:17.511+0330 7f346da3d700 10 req 11854699226730900750
0.399998844s
signing_k = c35cb1ca41d5cc1c5a49429c50afe3c840d0b67e81df0faf8122d979294098e7
2023-05-13T20:39:17.511+0330 7f346da3d700 10 req 11854699226730900750
0.399998844s
generated signature =
2e4a95626b3633439d13f5769d6a0be142c30823192837c3599fbb3f758d7c40
2023-05-13T20:39:17.511+0330 7f346da3d700 15 req 11854699226730900750
0.399998844s
s3:get_self_usage string_to_sign=AWS4-HMAC-SHA256
20230513T170916Z
20230513/us-east-1/s3/aws4_request
060e8a249e9f3708393aa7d2cfde262976bdd4d10216be12ca74ae7961e3994f
2023-05-13T20:39:17.511+0330 7f346da3d700 15 req 11854699226730900750
0.399998844s
s3:get_self_usage server signature=
2e4a95626b3633439d13f5769d6a0be142c30823192837c3599fbb3f758d7c40
2023-05-13T20:39:17.511+0330 7f346da3d700 15 req 11854699226730900750
0.399998844s
s3:get_self_usage client signature=
2e4a95626b3633439d13f5769d6a0be142c30823192837c3599fbb3f758d7c40
2023-05-13T20:39:17.511+0330 7f346da3d700 15 req 11854699226730900750
0.399998844s
s3:get_self_usage compare=0
2023-05-13T20:39:17.519+0330 7f346da3d700 20 req 11854699226730900750
0.407998830s
s3:get_self_usage rgw::auth::s3::LocalEngine granted access
2023-05-13T20:39:17.519+0330 7f346da3d700 20 req 11854699226730900750
0.407998830s
s3:get_self_usage rgw::auth::s3::AWSAuthStrategy granted access
2023-05-13T20:39:17.519+0330 7f346da3d700 2 req 11854699226730900750
0.407998830s
s3:get_self_usage normalizing buckets and tenants
2023-05-13T20:39:17.519+0330 7f346da3d700 10 req 11854699226730900750
0.407998830s
s->object=<NULL> s->bucket=
2023-05-13T20:39:17.519+0330 7f346da3d700 2 req 11854699226730900750
0.407998830s
s3:get_self_usage init permissions
2023-05-13T20:39:17.535+0330 7f346da3d700 20 req 11854699226730900750
0.423998743s
s3:get_self_usage get_system_obj_state: rctx=0x7f3504a0ca28 obj=default.rgw.
meta:users.uid:development state=0x56448667e0a0 s->prefetch_data=0
2023-05-13T20:39:17.535+0330 7f346da3d700 10 req 11854699226730900750
0.423998743s
s3:get_self_usage cache get: name=default.rgw.meta+users.uid+development :
hit (requested=0x6, cached=0x17)
2023-05-13T20:39:17.539+0330 7f346da3d700 20 req 11854699226730900750
0.427998751s
s3:get_self_usage get_system_obj_state: s->obj_tag was set empty
2023-05-13T20:39:17.539+0330 7f346da3d700 20 req 11854699226730900750
0.427998751s
s3:get_self_usage Read xattr: user.rgw.idtag
2023-05-13T20:39:17.539+0330 7f346da3d700 10 req 11854699226730900750
0.427998751s
s3:get_self_usage cache get: name=default.rgw.meta+users.uid+development :
hit (requested=0x3, cached=0x17)
2023-05-13T20:39:17.679+0330 7f346da3d700 2 req 11854699226730900750
0.567998350s
s3:get_self_usage recalculating target
2023-05-13T20:39:17.679+0330 7f346da3d700 2 req 11854699226730900750
0.567998350s
s3:get_self_usage reading permissions
2023-05-13T20:39:17.687+0330 7f346da3d700 2 req 11854699226730900750
0.575998306s
s3:get_self_usage init op
2023-05-13T20:39:17.687+0330 7f346da3d700 2 req 11854699226730900750
0.575998306s
s3:get_self_usage verifying op mask
2023-05-13T20:39:17.695+0330 7f346da3d700 20 req 11854699226730900750
0.583998263s
s3:get_self_usage required_mask= 1 user.op_mask=7
2023-05-13T20:39:17.695+0330 7f346da3d700 2 req 11854699226730900750
0.583998263s
s3:get_self_usage verifying op permissions
2023-05-13T20:39:17.695+0330 7f346da3d700 2 req 11854699226730900750
0.583998263s
s3:get_self_usage verifying op params
2023-05-13T20:39:17.695+0330 7f346da3d700 2 req 11854699226730900750
0.583998263s
s3:get_self_usage pre-executing
2023-05-13T20:39:17.695+0330 7f346da3d700 2 req 11854699226730900750
0.583998263s
s3:get_self_usage executing
2023-05-13T20:39:21.023+0330 7f343c1da700 2 req 11854699226730900750
3.911988497s
s3:get_self_usage completing
2023-05-13T20:39:21.067+0330 7f343c1da700 30 AccountingFilter::send_status:
e=0, sent=17, total=0
2023-05-13T20:39:21.071+0330 7f343c1da700 30 AccountingFilter::send_header:
e=0, sent=0, total=0
2023-05-13T20:39:21.075+0330 7f343c1da700 30 AccountingFilter::
send_chunked_transfer_encoding: e=0, sent=28, total=0
2023-05-13T20:39:21.079+0330 7f343c1da700 30 AccountingFilter::send_header:
e=0, sent=0, total=0
2023-05-13T20:39:21.107+0330 7f343b1d8700 30 AccountingFilter::
complete_header: e=0, sent=156, total=0
2023-05-13T20:39:21.107+0330 7f343b1d8700 30 AccountingFilter::set_account:
e=1
2023-05-13T20:39:21.199+0330 7f3457a11700 30 AccountingFilter::send_body: e=
1, sent=4051, total=0
2023-05-13T20:39:21.199+0330 7f3457a11700 30 AccountingFilter::
complete_request: e=1, sent=5, total=4051
2023-05-13T20:39:21.199+0330 7f3457a11700 30 req 11854699226730900750
4.087987900s
log_usage: bucket_name= tenant=, bytes_sent=4056, bytes_received=0, success=
1
2023-05-13T20:39:21.199+0330 7f3457a11700 2 req 11854699226730900750
4.087987900s
s3:get_self_usage op status=0
2023-05-13T20:39:21.199+0330 7f3457a11700 2 req 11854699226730900750
4.087987900s
s3:get_self_usage http status=200
2023-05-13T20:39:21.219+0330 7f3457a11700 1 ====== req done req=
0x7f3504a0e620 op status=0 http_status=200 latency=4.103988171s ======
2023-05-13T20:39:21.227+0330 7f3457a11700 1 beast: 0x7f3504a0e620: 192.168.
200.1 - development [13/May/2023:20:39:17.059 +0330] "GET
/?usage&start-date=2023-05-13T00%3A00%3A00Z&end-date=2023-05-13T23%3A59%3A59Z
HTTP/1.0" 200 4056 - "Boto3/1.26.131 Python/3.10.11 Darwin/22.4.0
Botocore/1.29.131" - latency=4.103988171s

```


I would greatly appreciate any guidance or insights into why I am unable to
retrieve the usage logs. How can I ensure that the usage logs are included
in the response?
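
In case it is relevant, one server-side check I still intend to run (assuming
admin access to the cluster) is to confirm that the usage log is being
collected at all, since to my understanding per-operation usage entries are
only recorded when rgw_enable_usage_log is enabled, and it defaults to false:

```shell
# Check whether usage logging is enabled for the RGW daemons
ceph config get client.rgw rgw_enable_usage_log

# Inspect the usage log directly with radosgw-admin, bypassing the S3 API
radosgw-admin usage show \
    --uid=development \
    --start-date=2023-05-13 \
    --end-date=2023-05-14 \
    --show-log-entries=true
```

If radosgw-admin also returns no entries, the problem would be on the
collection side rather than in the Boto3 model.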

Thank you for your assistance!



P.S.

I have also verified the response using tools like Wireshark and tcpdump,
and I can confirm that the responses received are consistent with the
output I provided earlier.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


