Radosgw: using S3 copy corrupts files


Hi,

 

We use radosgw with the S3 API, and we recently needed to update the metadata on some files.

So we used the copy operation of the S3 API to replace each file in place, adding some metadata.

 

We quickly saw very high response times for some of those uploaded files, but there were no slow requests.

We looked at some of the files and saw that files larger than 512 KB were corrupted.

After the first 512 KB, the content of the files is:

Status: 404

Content-Length: 75

Accept-Ranges: bytes

Content-type: application/xml

 

<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code></Error>
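A quick client-side check for objects showing this pattern: read the object back and look for an embedded HTTP error response after the first 512 KB. The 512 KB boundary matching radosgw's default chunk size (`rgw_max_chunk_size`) is my assumption, not something stated in the report.

```python
# 512 KB: the boundary after which the reported objects are corrupted.
# This is assumed to correspond to radosgw's default rgw_max_chunk_size.
HEAD_SIZE = 512 * 1024


def looks_corrupted(data: bytes, head_size: int = HEAD_SIZE) -> bool:
    """Return True if the bytes past `head_size` look like an embedded
    radosgw error response instead of real object data."""
    tail = data[head_size:]
    return tail.startswith(b"Status: 404") or b"<Code>NoSuchKey</Code>" in tail


# Example: a healthy object vs. one showing the reported symptom.
good = b"A" * (600 * 1024)
bad = b"A" * HEAD_SIZE + b"Status: 404\r\nContent-Length: 75\r\n"
```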

 

--

Yann ROBIN

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
