Re: Radosgw using s3 copy corrupt files

On Mon, Apr 22, 2013 at 1:03 AM, Yann ROBIN <yann.robin@xxxxxxxxxxxxx> wrote:
> Hi,
>
>
>
> We use radosgw with the S3 API, and we recently needed to update the
> metadata on some files.
>
> So we used the copy operation of the S3 API to do an in-place replacement
> of each file, adding some metadata.
>
>
>
> We quickly saw very high response times for some of those uploaded files,
> but there were no slow requests.
>
> We looked at some of the files and saw that files larger than 512 KB were
> corrupted.
>
> After the first 512 KB, the content of the files is:
>
> Status: 404
>
> Content-Length: 75
>
> Accept-Ranges: bytes
>
> Content-type: application/xml
>
>
>
> <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code></Error>
>
>
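For reference, an in-place metadata update via the S3 copy operation is a PUT of the object onto itself with the metadata directive set to REPLACE. A minimal sketch of the request headers involved (bucket, key, and metadata names here are made up for illustration):

```python
# Hypothetical sketch of the headers for an S3 "PUT Object - Copy"
# request that rewrites an object in place with new metadata.
# With "x-amz-metadata-directive: REPLACE", the source metadata is
# discarded and the supplied x-amz-meta-* headers are stored instead.

def copy_with_new_metadata_headers(bucket, key, new_meta):
    """Build headers for an in-place copy that replaces user metadata."""
    headers = {
        # Source equals destination -> in-place copy of the same object
        "x-amz-copy-source": "/%s/%s" % (bucket, key),
        # COPY would keep the old metadata; REPLACE uses the headers below
        "x-amz-metadata-directive": "REPLACE",
    }
    for name, value in new_meta.items():
        headers["x-amz-meta-" + name] = value
    return headers

print(copy_with_new_metadata_headers("mybucket", "myfile",
                                     {"owner": "yann"}))
```

The report above suggests radosgw truncates the copied object at 512 KB under this pattern, so anything past the first stripe reads back as a 404 body.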

What version are you using? I tried to reproduce this but couldn't. I
vaguely remember some fixes going in in that area, but that was a while
back.

If you can reproduce it, an rgw log with 'debug rgw = 20' and
'debug ms = 1' would help here.
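Those debug settings would go in the gateway's section of ceph.conf before restarting it (the section name below is an assumption; use your actual rgw instance name):

```
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1
```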

Thanks,
Yehuda
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com