Hi,

So in RGW there are no hive* objects now. Could you please check whether any still exist from the S3 perspective? That is, check the object listing of bucket 'olla' via the S3 API (boto or s3cmd could do the job).
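Something like this minimal boto sketch should do it. It's just a sketch: the endpoint, access keys, and is_secure flag are placeholders you'd adjust for your RGW setup, and the 'hive/' prefix matches the path you removed:

#!/usr/bin/env python
# List (and optionally delete) whatever the S3 API still reports
# under the hive/ prefix of bucket 'olla'.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',        # placeholder
    aws_secret_access_key='SECRET_KEY',    # placeholder
    host='rgw.example.com',                # placeholder RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.get_bucket('olla')

# Keys the S3 API still reports but `rados ls` no longer shows are
# the suspects here.
stale = list(bucket.list(prefix='hive/'))
for key in stale:
    print("%s\t%d" % (key.name, key.size))

# If stale entries show up, deleting them through S3 may clear the
# simulated-directory metadata. Back up first if the data matters.
# for key in stale:
#     bucket.delete_key(key.name)

With s3cmd, 's3cmd ls s3://olla/hive/' is a rough equivalent of the listing step.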
I've met a similar issue in Hadoop over SwiftFS before: some OSDs were down in the Ceph cluster, and afterwards the file listings in Hadoop and Swift did not match. I don't know the detailed failure mode, though. I was simply running some benchmarks, so the data was not important; manually deleting the objects/buckets and regenerating the data fixed the issue.

Hope this helps.

Thanks,
-yuan

-----Original Message-----
From: 张绍文 [mailto:zhangshaowen@xxxxxxxx]
Sent: Tuesday, November 3, 2015 1:45 PM
To: Zhou, Yuan
Cc: ceph-cn@xxxxxxxxxxxxxx; ceph-users@xxxxxxxx
Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2

On Tue, 3 Nov 2015 05:32:27 +0000
"Zhou, Yuan" <yuan.zhou@xxxxxxxxx> wrote:

> Hi,
>
> The directory there should be some simulated hierarchical structure
> with '/' in the object names. Do you mind checking the remaining
> objects in the Ceph pool .rgw.buckets?
>
> $ rados ls -p .rgw.buckets | grep default.157931.5_hive
>
> If objects still come out, you might try to delete them from the
> 'olla' bucket with the S3 API. (Note I'm not sure how your Hive data
> was generated, so please do a backup first if it's important.)
>

Thanks for your reply. I dumped the object list yesterday:

# rados -p .rgw.buckets ls >obj-list
# ls -lh obj-list
-rw-r--r-- 1 root root 1.2G Nov 2 15:51 obj-list
# grep default.157931.5_hive obj-list
#

There's no such object.

> -----Original Message-----
> From: Ceph-cn [mailto:ceph-cn-bounces@xxxxxxxxxxxxxx] On Behalf Of 张绍文
> Sent: Tuesday, November 3, 2015 12:22 PM
> To: ceph-cn@xxxxxxxxxxxxxx; ceph-users@xxxxxxxx
> Subject: Re: [Ceph-cn] librados: Objecter returned from getxattrs r=-2
>
> With debug_objecter = 20/0 I get this; I guess the object itself has
> been removed, but the "directory" info still exists:
>
> 2015-11-03 12:07:22.264704 7f03c42f3700 10 client.214496.objecter
> ms_dispatch 0x2c18840 osd_op_reply(81
> default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_3797779765210222008-1/_tmp.-ext-10000/
> [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6
>
> So, how can I safely remove the "directory" info?
>
> On Tue, 3 Nov 2015 10:10:26 +0800
> 张绍文 <zhangshaowen@xxxxxxxx> wrote:
>
> > On Mon, 2 Nov 2015 16:47:11 +0800
> > 张绍文 <zhangshaowen@xxxxxxxx> wrote:
> >
> > > On Mon, 2 Nov 2015 16:36:57 +0800
> > > 张绍文 <zhangshaowen@xxxxxxxx> wrote:
> > >
> > > > Hi, all:
> > > >
> > > > I'm using Hive via s3a, but it's not usable after I removed some
> > > > temp files with:
> > > >
> > > > /opt/hadoop/bin/hdfs dfs -rm -r -f s3a://olla/hive/
> > > >
> > > > With debug_radosgw = 10/0, I got these messages repeatedly:
> > > >
> > > > 2015-11-02 14:30:44.547271 7f08ef7fe700 10 librados: Objecter
> > > > returned from getxattrs r=-2
> > > > 2015-11-02 14:30:44.549117 7f08ef7fe700 10 librados: getxattrs
> > > > oid=default.157931.5_hive/staging_hive_2015-11-01_14-57-40_861_3797779765210222008-1/_tmp.-ext-10000/
> > > > nspace=
> > > >
> > > > I dumped the whole object list, and there's no object whose name
> > > > starts with hive/...; Hive is not usable now. Please help.
> > >
> > > Sorry, I forgot this:
> > >
> > > # ceph -v
> > > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> > >
> > > Other known "directories" under the same bucket are readable.
> >
> > This also happened to others on the ceph-users mailing list; it
> > seems unresolved:
> >
> > http://article.gmane.org/gmane.comp.file-systems.ceph.user/7653/match=objecter+returned+getxattrs
>
> --
> 张绍文
> _______________________________________________
> Ceph-cn mailing list
> Ceph-cn@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-cn-ceph.com

--
张绍文
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com