Re: RGW container deletion problem

Bump

On 2016-07-25 14:05:38 +0000, Daniel Schneller said:

Hi!

I created a bunch of test containers with some objects in them via
RGW/Swift (Ubuntu, RGW via Apache, Ceph Hammer 0.94.1).

Now I am trying to get rid of the test data.

I manually started with one container:

~/rgwtest ➜  swift -v -V 1.0 -A http://localhost:8405/auth -U <...> -K
<...> --insecure delete test_a6b3e80c-e880-bef9-b1b5-892073e3b153
test_10
test_5
test_100
test_20
test_30

So far so good. Note that localhost:8405 is bound by haproxy, which
distributes requests across 4 RGWs on different servers, in case that is
relevant.
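
For completeness, the relevant haproxy part looks roughly like this (backend
names and addresses are placeholders here, not our real config):

  frontend rgw_frontend
      bind *:8405
      mode http
      default_backend rgw_backends

  backend rgw_backends
      mode http
      balance roundrobin
      server rgw1 10.0.0.11:80 check
      server rgw2 10.0.0.12:80 check
      server rgw3 10.0.0.13:80 check
      server rgw4 10.0.0.14:80 check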

To make sure my script gets error handling right, I tried to delete the
same container again, leading to an error:

~/rgwtest ➜  swift -v --retries=0 -V 1.0 -A http://localhost:8405/auth
-U <...> -K <...> --insecure delete
test_a6b3e80c-e880-bef9-b1b5-892073e3b153
Container DELETE failed:
http://localhost:8405/swift/v1/test_a6b3e80c-e880-bef9-b1b5-892073e3b153
500 Internal Server Error   UnknownError
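
To rule haproxy in or out, my next idea was to repeat the DELETE against each
of the four RGWs directly, bypassing the load balancer, roughly like this
(host name and port are placeholders, token shortened):

  curl -i -k -X DELETE \
    -H "X-Auth-Token: AUTH_rgwtk..." \
    http://<rgw-node>:<port>/swift/v1/test_a6b3e80c-e880-bef9-b1b5-892073e3b153

If only some of the gateways return the 500, that would at least narrow it
down to those instances.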

Stat'ing it still works:

~/rgwtest ➜  swift -v -V 1.0 -A http://localhost:8405/auth -U <...> -K
<...> --insecure stat test_a6b3e80c-e880-bef9-b1b5-892073e3b153
           URL:
http://localhost:8405/swift/v1/test_a6b3e80c-e880-bef9-b1b5-892073e3b153
    Auth Token: AUTH_rgwtk...
       Account: v1
     Container: test_a6b3e80c-e880-bef9-b1b5-892073e3b153
       Objects: 0
         Bytes: 0
      Read ACL:
     Write ACL:
       Sync To:
      Sync Key:
        Server: Apache/2.4.7 (Ubuntu)
X-Container-Bytes-Used-Actual: 0
X-Storage-Policy: default-placement
  Content-Type: text/plain; charset=utf-8


Checking the RGW logs, I found this:

2016-07-25 15:21:29.751055 7fbcd67f4700  1 ====== starting new request
req=0x7fbce40a1100 =====
2016-07-25 15:21:29.768688 7fbcd67f4700  0 WARNING: set_req_state_err
err_no=125 resorting to 500
2016-07-25 15:21:29.768743 7fbcd67f4700  1 ====== req done
req=0x7fbce40a1100 http_status=500 ======
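
If I read the errno tables right, err_no=125 is ECANCELED, i.e. some
operation getting cancelled or raced rather than a hard failure, but I may be
off here. To get more detail I was going to bump the gateway logging
temporarily via the admin socket on one of the nodes, along these lines (the
client name is a placeholder for whatever the local instance is called):

  ceph daemon client.rgw.<instance> config set debug_rgw 20
  ceph daemon client.rgw.<instance> config set debug_ms 1

and then retry the DELETE.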

Googling a little turned up this tracker issue:

http://tracker.ceph.com/issues/14208

which mentions similar symptoms and an out-of-sync metadata cache between
different RGWs. I vaguely remember seeing something like this back in the
Firefly timeframe, but I am not sure it is the same issue.

Where does this metadata cache live? Can it be flushed somehow without
disturbing other operations?
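
In case it is useful, this is what I was going to check next on one of the
RGW nodes, to see what metadata RADOS itself holds for the bucket (assuming
that is the right level to look at):

  radosgw-admin bucket stats --bucket=test_a6b3e80c-e880-bef9-b1b5-892073e3b153
  radosgw-admin metadata get bucket:test_a6b3e80c-e880-bef9-b1b5-892073e3b153

And if the cache really only lives inside each radosgw process, would
restarting the gateways one at a time be the safe way to flush it?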

I also found this FOSDEM slide deck (PDF):

https://archive.fosdem.org/2016/schedule/event/virt_iaas_ceph_rados_gateway_overview/attachments/audio/1077/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1077/Fosdem_RGW.pdf

but without the audio track it doesn't really help me.

Thanks!
Daniel


--
Daniel Schneller
Principal Cloud Engineer

CenterDevice GmbH
https://www.centerdevice.de





