Re: CephFS: No space left on device

Hi,

That sounds great. I'll certainly try it out. 

Kind regards,

Davie De Smet

-----Original Message-----
From: Yan, Zheng [mailto:ukernel@xxxxxxxxx] 
Sent: Wednesday, October 12, 2016 3:41 PM
To: Davie De Smet <davie.desmet@xxxxxxxxxxxx>
Cc: Gregory Farnum <gfarnum@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxx>
Subject: Re:  CephFS: No space left on device

I have written a tool that fixes this type of error. I'm currently testing it and will push it out tomorrow.

Regards
Yan, Zheng

On Wed, Oct 12, 2016 at 9:18 PM, Davie De Smet <davie.desmet@xxxxxxxxxxxx> wrote:
> Hi Gregory,
>
> Thanks for the help! I've been looping over all trashcan files and the number of strays is dropping. This is going to take quite some time as there are a lot of files, but so far so good. If I encounter any further problems regarding this topic, I'll give this thread a heads-up.
>
> Kind regards,
>
> Davie De Smet
> Director Technical Operations and Customer Services, Nomadesk
> +32 9 240 10 31 (Office)
>
> -----Original Message-----
> From: Gregory Farnum [mailto:gfarnum@xxxxxxxxxx]
> Sent: Wednesday, October 12, 2016 2:11 AM
> To: Davie De Smet <davie.desmet@xxxxxxxxxxxx>
> Cc: Mykola Dvornik <mykola.dvornik@xxxxxxxxx>; John Spray 
> <jspray@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxx>
> Subject: Re:  CephFS: No space left on device
>
> On Tue, Oct 11, 2016 at 12:20 AM, Davie De Smet <davie.desmet@xxxxxxxxxxxx> wrote:
>> Hi,
>>
>> We do use hardlinks a lot. The application using the cluster has a built-in 'trashcan' functionality based on hardlinks. Obviously, all removed files and hardlinks are no longer visible on the CephFS mount itself. Can I manually remove the strays on the OSDs themselves?
>
> No, definitely not. At least part of the problem is:
> *) Ceph stores file metadata organized by its *path* location, not in a separate on-disk inode data structure like local FSes do.
> *) When you hard link a file in CephFS, its "primary" location increments the link counter and its "remote" location just records the inode number (and it has to look up metadata later on-demand).
> *) When you unlink the primary link, the inode data gets moved into the stray directory until one of the remote links comes calling.
>
>> Or do you mean that I'm required to do a small touch/write on all files that have not yet been deleted (this would be painful as the cluster is 200TB+)?
>
> Luckily, it doesn't take quite that much work. It looks like just doing a getattr on all the remote links in your system should do it.
> If it's just your trash can, "ls -l" on that directory will probably 
> pull them in. Or you could delete the whole trashcan folder (set of
> folders?) and they'll go away as well.
> -Greg
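
For reference, here is a minimal sketch of the approach described above (and of the loop Davie mentions): walk the trashcan tree on the CephFS mount and stat() each entry, which issues the getattr that lets the MDS resolve the remote links and reap the strays. The mount point and trashcan path below are assumptions; adjust them for your own setup.

    #!/usr/bin/env python3
    # Sketch: getattr every entry under the trashcan tree on a CephFS mount.
    import os

    TRASHCAN = "/mnt/cephfs/trashcan"  # hypothetical path to the application's trashcan

    count = 0
    for root, dirs, files in os.walk(TRASHCAN):
        for name in files + dirs:
            path = os.path.join(root, name)
            try:
                # stat() triggers a getattr on the MDS for each (remote) link
                os.stat(path, follow_symlinks=False)
                count += 1
            except OSError as e:
                print(f"skipping {path}: {e}")

    print(f"stat()ed {count} entries")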
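
To watch progress while that loop runs, a sketch that polls the stray count from the MDS admin socket. It assumes it runs on the MDS host, that the MDS daemon is named "a", and that the 'ceph daemon ... perf dump' command is available; the exact counter names can differ between Ceph releases.

    #!/usr/bin/env python3
    # Sketch: periodically print num_strays from the mds_cache perf counters.
    import json
    import subprocess
    import time

    MDS_NAME = "a"  # hypothetical MDS daemon name; use your own

    while True:
        # Query the mds_cache perf counters through the admin socket
        out = subprocess.check_output(
            ["ceph", "daemon", f"mds.{MDS_NAME}", "perf", "dump", "mds_cache"])
        cache = json.loads(out.decode()).get("mds_cache", {})
        print("num_strays =", cache.get("num_strays"))
        time.sleep(30)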


