Re: RGW bucket mtime stat?

Great - thanks for the quick fix!

-Mike Neufeld


From: Casey Bodley <cbodley@xxxxxxxxxx>
Sent: Tuesday, March 7, 2023 6:19:22 PM
To: Neufeld, Michael
Cc: dev@xxxxxxx
Subject: Re: RGW bucket mtime stat?
 
On Tue, Mar 7, 2023 at 3:55 PM Neufeld, Michael <mneufeld@xxxxxxxxxx> wrote:
>
> Hello-
> I've been looking into an issue we've had when migrating from RGW 15.2.17 to RGW 17.2.5.
> When getting bucket stats (e.g. with radosgw-admin bucket stats), the mtime field in RGW 15.2.17 shows the expected date/time value, but in 17.2.5 it shows "0.0000". I've tracked that down to a bug introduced in a large refactoring commit:
>
> https://github.com/ceph/ceph/commit/72d1a363263cf707d022ee756122236ba175cda2
>
> The fix for that is simple: add a call to bucket->get_modification_time() in rgw_bucket.cc to match the corresponding call to bucket->get_creation_time().
>
> This works fine in 17.2.5: after creating a bucket and getting its stats on a local "vstart.sh" test cluster, I get an mtime that is a few seconds later than the creation time.
>
> However, when I try the same fix on the current head, the behavior goes back to an mtime of "0.0000". As it turns out, the mtime returned from bucket->get_modification_time() right after creation is the default-initialized value, not updated as it was previously.
>
> I haven't done more digging into this yet, but before I do I was curious whether there have been intentional changes to how bucket mtime works. It would be easy enough to add a check for an empty mtime and swap in the creation time instead, but I wanted to make sure I wouldn't just be masking some other issue.
>
> Thanks!
>
> -Mike Neufeld
> _______________________________________________
> Dev mailing list -- dev@xxxxxxx
> To unsubscribe send an email to dev-leave@xxxxxxx
>

thanks for helping to track that down!

i have a working fix for main in https://github.com/ceph/ceph/pull/50429,
and opened https://tracker.ceph.com/issues/58932 to track the reef and
quincy backports
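
For illustration, here is a minimal, self-contained C++ sketch of the two ideas from the original report: dumping the bucket mtime alongside the creation time, and the proposed fallback to the creation time when the mtime is still default-initialized. The real_time and Bucket types and the effective_mtime() helper are stand-ins invented for this sketch; apart from the get_creation_time()/get_modification_time() accessors named above, none of this is the actual rgw_bucket.cc code.

#include <iostream>

// Stand-in for ceph::real_time; a default-constructed value models the
// "0.0000" mtime seen in the bucket stats output.
struct real_time {
  long secs = 0;
  long nsecs = 0;
  bool is_zero() const { return secs == 0 && nsecs == 0; }
};

// Stand-in for rgw::sal::Bucket with just the two accessors named above.
struct Bucket {
  real_time creation_time;
  real_time modification_time;
  real_time get_creation_time() const { return creation_time; }
  real_time get_modification_time() const { return modification_time; }
};

// Proposed fallback: report the creation time whenever the modification
// time was never set, so bucket stats never shows a zero mtime.
real_time effective_mtime(const Bucket& bucket) {
  real_time mtime = bucket.get_modification_time();
  return mtime.is_zero() ? bucket.get_creation_time() : mtime;
}

int main() {
  Bucket b;
  b.creation_time.secs = 1678222762;  // set when the bucket is created
  // b.modification_time is left default-initialized, as observed on main
  real_time mtime = effective_mtime(b);
  std::cout << mtime.secs << "." << mtime.nsecs << "\n";  // falls back to the creation time
}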

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
