Hi,
On 01/26/2016 10:24 AM, Yan, Zheng wrote:
On Tue, Jan 26, 2016 at 3:16 PM, Burkhard Linke
<Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Hi,
On 01/26/2016 07:58 AM, Yan, Zheng wrote:
*snipsnap*
I have a few questions
Which version of ceph are you using? When was the filesystem created?
Did you manually delete 10002af7f78.00000000 from pool 8?
Ceph version is 0.94.5. The filesystem was created about 1.5 years ago using
a Firefly release. The exact order of commands was:
The fact that the FS was created 1.5 years ago can explain the issue.
Here is a way to fix other potential bad files in the FS (a rough
sketch follows the list below):
1. List all objects whose names end with .00000000 in pool 7.
2. Find all non-directory inodes in the FS.
3. For each non-directory inode without a corresponding object in pool
7, create an empty object in pool 7 and copy the xattr 'parent' from the
corresponding object in pool 12 or pool 20.
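A minimal python-rados sketch of these three steps, with the caveat that it
is only an illustration: it assumes the filesystem is mounted at /mnt/cephfs
and that pool 7 is named 'cephfs_data' while pools 12 and 20 are
'cephfs_data2' / 'cephfs_data3' -- all of these names are placeholders you
would have to adapt. Instead of listing pool 7 first, it simply walks the
mount and checks each file's first object:

#!/usr/bin/env python
# Unofficial sketch: walk a mounted CephFS and, for every regular file
# whose <inode>.00000000 object is missing from the primary data pool,
# recreate an empty object there and copy the 'parent' xattr from the
# pool that actually holds the data. Pool names and mount point are
# assumptions, not the real values from this thread.
import os
import rados

MOUNT = '/mnt/cephfs'                            # assumed mount point
PRIMARY_POOL = 'cephfs_data'                     # assumed name of pool 7
OTHER_POOLS = ['cephfs_data2', 'cephfs_data3']   # assumed names of pools 12/20

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
primary = cluster.open_ioctx(PRIMARY_POOL)
others = [cluster.open_ioctx(p) for p in OTHER_POOLS]

for dirpath, dirnames, filenames in os.walk(MOUNT):
    for name in filenames:
        st = os.lstat(os.path.join(dirpath, name))
        # the first object of a file is named <inode-in-hex>.00000000
        oid = '%x.00000000' % st.st_ino
        try:
            primary.stat(oid)        # object exists, nothing to do
            continue
        except rados.ObjectNotFound:
            pass
        for ioctx in others:
            try:
                parent = ioctx.get_xattr(oid, 'parent')
            except rados.ObjectNotFound:
                continue
            print('recreating %s in %s' % (oid, PRIMARY_POOL))
            primary.write_full(oid, b'')             # empty object
            primary.set_xattr(oid, 'parent', parent) # copy backtrace
            break

for ioctx in [primary] + others:
    ioctx.close()
cluster.shutdown()

Commenting out the write_full/set_xattr lines first gives a dry run that
only reports which objects would be recreated.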
Thanks for the help with this case. Most files on that filesystem are
processed by the nightly backup, so other affected files would already
have shown up.
There are currently about 10 million objects in pool 7 with over 13 TB
of accumulated data, and 15 million objects / 35 TB of data in pool 12.
Manually checking them will have to be postponed until our cluster is
upgraded with more suitable hardware.
If you do not need any further debug data, I'm going to remove the
dangling directory entry from the directory's omap, following the
guideline in your first mail.
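For reference only, a hypothetical python-rados sketch of dropping a single
dentry key from the parent directory's dirfrag object omap; the pool name,
object id and key are placeholders, and this is not meant to replace the
exact procedure from the first mail (the 'rados rmomapkey' CLI can achieve
the same thing):

# Hypothetical sketch: remove one dangling dentry from a directory
# fragment object's omap. All values below are placeholders.
import rados

METADATA_POOL = 'metadata'                 # assumed CephFS metadata pool name
DIRFRAG_OBJECT = '<dir-inode>.00000000'    # dirfrag object of the parent dir
DENTRY_KEY = '<filename>_head'             # dentry omap keys end in '_head'

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(METADATA_POOL)

with rados.WriteOpCtx() as op:
    ioctx.remove_omap_keys(op, (DENTRY_KEY,))  # drop only this omap key
    ioctx.operate_write_op(op, DIRFRAG_OBJECT)

ioctx.close()
cluster.shutdown()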
Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com