Tried removing, but no luck:

    rados -p .be-east.rgw.buckets rm "be-east.5436.1__:2bpm.1OR-cqyOLUHek8m2RdPVRZ.pDT__sanity"
    error removing .be-east.rgw.buckets>be-east.5436.1__:2bpm.1OR-cqyOLUHek8m2RdPVRZ.pDT__sanity: (2) No such file or directory

Anyone?

On 21-08-15 13:06, Sam Wouters wrote:
> I suspect these to be the cause:
>
> rados ls -p .be-east.rgw.buckets | grep sanity
> be-east.5436.1__:2bpm.1OR-cqyOLUHek8m2RdPVRZ.pDT__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:2vBijaGnVQF4Q0IjZPeyZSKeUmBGn9X__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:4JTCVFxB1qoDWPu1nhuMDuZ3QNPaq5n__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:9jFwd8xvqJMdrqZuM8Au4mi9M62ikyo__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:BlfbGYGvLi92QPSiabT2mP7OeuETz0P__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:MigpcpJKkan7Po6vBsQsSD.hEIRWuim__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:QDTxD5p0AmVlPW4v8OPU3vtDLzenj4y__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:S43EiNAk5hOkzgfbOynbOZOuLtUv0SB__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:UKlOVMQBQnlK20BHJPyvnG6m.2ogBRW__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:kkb6muzJgREie6XftdEJdFHxR2MaFeB__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:oqPhWzFDSQ-sNPtppsl1tPjoryaHNZY__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:pLhygPGKf3uw7C7OxSJNCw8rQEMOw5l__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:tO1Nf3S2WOfmcnKVPv0tMeXbwa5JR36__sanity
> be-east.5436.1__sanity
> be-east.5436.1__:ye4oRwDDh1cGckbMbIo56nQvM7OEyPM__sanity
> be-east.5436.1__sanity
> be-east.5436.1___sanity
> be-east.5436.1__sanity
>
> Would it be safe, and/or would it help, to remove those with "rados rm"
> and then try a "bucket check --fix --check-objects"?
>
> On 21-08-15 11:28, Sam Wouters wrote:
>> Hi,
>>
>> We are running hammer 0.94.2 and have an increasing amount of
>> "heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7f38c77e6700' had
>> timed out after 600" messages in our radosgw logs, with radosgw
>> eventually stalling.
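In case the "(2)" failure above was just a shell-quoting problem (only a guess), here is a dry-run sketch that prints one "rados rm" command per suspect entry instead of executing anything, so the exact quoting can be eyeballed first. The printf list below is a stand-in for the real "rados ls -p $POOL | grep sanity" output; nothing here touches the cluster.

```shell
POOL=.be-east.rgw.buckets

# Stand-in for: rados ls -p "$POOL" | grep sanity
printf '%s\n' \
    'be-east.5436.1__:2bpm.1OR-cqyOLUHek8m2RdPVRZ.pDT__sanity' \
    'be-east.5436.1__sanity' |
while IFS= read -r obj; do
    # Print the command rather than running it; the double quotes keep the
    # shell from treating anything in the object name specially.
    printf 'rados -p %s rm "%s"\n' "$POOL" "$obj"
done
```

Once the printed commands look right, they can be piped into sh, but only after confirming removal is actually safe.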
>> A restart of the radosgw helps for a few minutes, but after that it
>> hangs again.
>>
>> "ceph daemon /var/run/ceph/ceph-client.*.asok objecter_requests" shows
>> "call rgw.bucket_list" ops. No new bucket lists are being requested, so
>> those ops seem to be stuck there. Does anyone have an idea how to get
>> rid of them? A restart of the affected osd didn't help either.
>>
>> I'm not sure if it's related, but we have an object called "_sanity" in
>> the bucket the listing was performed on. I know there is some bug with
>> objects starting with "_".
>>
>> Any help would be much appreciated.
>>
>> r,
>> Sam
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
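For what it's worth, a sketch for spotting the stuck ops by age from a saved objecter_requests dump. The field names ("ops", "tid", "age") are from memory of hammer-era output and the embedded JSON is a made-up stand-in, so verify both against your own dump before relying on this:

```shell
# Stand-in for a real dump, e.g.:
#   ceph daemon /var/run/ceph/ceph-client.<id>.asok objecter_requests > /tmp/oreq.json
cat > /tmp/oreq.json <<'EOF'
{"ops": [{"tid": 101, "age": 1432.7}, {"tid": 102, "age": 2.1}]}
EOF

# List ops older than the 600s heartbeat timeout (assumed field names).
python3 - <<'EOF'
import json
dump = json.load(open("/tmp/oreq.json"))
for op in dump.get("ops", []):
    if op.get("age", 0) > 600:
        print("stuck tid %d, age %.0fs" % (op["tid"], op["age"]))
EOF
```

Ops that stay in that list across samples, despite no new bucket listings being issued, would be the ones worth correlating with the "_sanity" entries above.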