Repetitive replication occurring in slave zone causing OSDs to fill

I'm currently running two Ceph clusters (v0.80.1) that provide secondary
storage for CloudPlatform.  Each cluster resides in a different datacenter,
and our federated gateway consists of one region (us-east-1) with two zones
(zone-a [master], zone-b [slave]).  Objects appear to be replicating/syncing
from zone-a to zone-b as expected, meaning the objects show up in both zones
with the same size and checksum when viewed with an S3 client.  We've
recently run into an issue where an object is replicated to zone-b and
appears to be complete, yet the .us-east-1.zone-b.rgw.buckets pool continues
to fill with shadow files for that object.  We noticed the OSDs were filling
rather quickly, and while troubleshooting we found 230+ unique TAGs for the
object (e.g. TAG_shadow_1 through TAG_shadow_515).  Has anyone seen this
behavior, or have any idea what may have caused it?
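
In case it helps anyone reproduce the count, here is a rough sketch of how
we can tally the shadow files with python-rados.  The pool name matches ours
above; the parsing assumes shadow objects are named <TAG>_shadow_<N> as in
the examples, so adjust it if your naming differs:

    #!/usr/bin/env python
    # Rough sketch: count shadow objects and unique tags in the slave
    # zone's bucket data pool.  Assumes python-rados is installed and a
    # readable /etc/ceph/ceph.conf; the pool name and the
    # <TAG>_shadow_<N> naming are taken from our setup above.
    import rados

    POOL = '.us-east-1.zone-b.rgw.buckets'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            tags = set()
            shadows = 0
            for obj in ioctx.list_objects():
                if '_shadow_' in obj.key:
                    shadows += 1
                    tags.add(obj.key.split('_shadow_')[0])
            print('%d shadow objects, %d unique tags' % (shadows, len(tags)))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()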


Thanks in advance for any help that may be provided,

MLM