RGW lifecycle wrongly removes NOT expired delete-markers which still have object versions under them

Hi,

We are seeing very strange behavior where LC "restores" previous versions
of objects.

It looks like:

1. We have the latest object:

*object1*

2. We remove the object (no version specified) and get a delete marker on
top of it. object1 is no longer visible in the bucket listing:

*marker (latest)*
*object1 (not latest, version1)*

3. LC decides to remove the marker as expired:

*RGW log:
DELETED::REDACTED-bucket-REDACTED[default.1471649938.1]):REDACTED-object-REDACTED[N3gFGHIZLylFcO9RdnlChEfIAL076VC]
(delete marker expiration) wp_thrd: 0, 0*

4. and now we can see object1 in the bucket listing again:

*object1 (latest, version1)*
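
For reference, the sequence above corresponds roughly to this boto3 sketch
(endpoint, credentials, bucket and key names are placeholders, not our real
ones; versioning is assumed to be enabled on the bucket):

# Sketch of steps 1-4 against a versioned bucket (placeholder names).
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example:8080")
bucket, key = "test-bucket", "object1"

# 1. upload the object (becomes the latest version)
s3.put_object(Bucket=bucket, Key=key, Body=b"data")

# 2. delete without a VersionId -> RGW puts a delete marker on top
resp = s3.delete_object(Bucket=bucket, Key=key)
print("delete marker:", resp.get("DeleteMarker"), resp.get("VersionId"))

# the marker is now IsLatest=True, object1 is a non-current version
lv = s3.list_object_versions(Bucket=bucket, Prefix=key)
for m in lv.get("DeleteMarkers", []):
    print("marker ", m["VersionId"], "latest:", m["IsLatest"])
for v in lv.get("Versions", []):
    print("version", v["VersionId"], "latest:", v["IsLatest"])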

But this is wrong behavior, because an ExpiredObjectDeleteMarker is a marker
that has NO versions at all underneath it.
Unfortunately we can't reproduce this behavior in our test environment
(which suggests it is not normal behavior), but it occurs regularly in
production in one customer's bucket. It crashes the customer's application,
because to the application the reappearing object looks like an "unknown
file".
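
To make that concrete: a delete marker should only count as expired when the
key has no remaining versions at all. A client-side check along these lines
(a sketch, not the RGW code; endpoint and bucket name are placeholders)
shows the difference:

# Sketch: classify latest delete markers by whether any versions remain
# under the same key; only markers with zero versions are "expired".
import boto3
from collections import defaultdict

s3 = boto3.client("s3", endpoint_url="http://rgw.example:8080")
bucket = "test-bucket"

versions_per_key = defaultdict(int)
latest_markers = []

for page in s3.get_paginator("list_object_versions").paginate(Bucket=bucket):
    for v in page.get("Versions", []):
        versions_per_key[v["Key"]] += 1
    for m in page.get("DeleteMarkers", []):
        if m["IsLatest"]:
            latest_markers.append(m)

for m in latest_markers:
    n = versions_per_key[m["Key"]]
    if n == 0:
        print(m["Key"], "-> expired delete marker (eligible for removal)")
    else:
        print(m["Key"], "-> still covers", n, "version(s), must NOT be removed")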

We are 100% sure that it is LC: we have logs showing when the object was
uploaded and removed (no version specified), and we have the LC log entry
that removes the delete marker while object1 still exists underneath it.
The object names in this bucket are unique (sha256-like), so there can be
no name duplication; it cannot be a new object uploaded with the same name.
And the mtime of the "restored" object1 is in the past, for example 20 days
back. All of our findings fit together.
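
The mtime check itself is simple; something like this sketch (placeholder
names again) prints LastModified for every remaining version of the key:

# Sketch: confirm the "restored" object1 is the old version, not a new
# upload with the same name, by looking at LastModified of each version.
import boto3

s3 = boto3.client("s3", endpoint_url="http://rgw.example:8080")
lv = s3.list_object_versions(Bucket="test-bucket", Prefix="object1")
for v in lv.get("Versions", []):
    print(v["Key"], v["VersionId"], "latest:", v["IsLatest"],
          "mtime:", v["LastModified"])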

LC policy:
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 30
            },
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "Remove all NOT latest version"
        }
    ]
}
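
The rule can also be read back from RGW to confirm what is actually applied,
e.g. with this boto3 sketch (placeholder bucket name):

# Sketch: dump the lifecycle configuration RGW reports for the bucket.
import boto3, json

s3 = boto3.client("s3", endpoint_url="http://rgw.example:8080")
cfg = s3.get_bucket_lifecycle_configuration(Bucket="test-bucket")
print(json.dumps(cfg["Rules"], indent=4, default=str))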

Ceph 16.2.13
We have about 20 RGW instances, and only two of them have the LC thread
enabled. The bucket has about 1.5M objects and 701 shards. I know that's
too many shards for this bucket, but that's another story)

Where could the problem be?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


