On 06/24/2010 10:57 AM, Andy Pace wrote:
> Good call.
>
> However, when running scale-n-defrag.sh (not supposed to run
> defrag.sh standalone apparently), I get a lot of errors:
>
> find: `setfattr': No such file or directory

The setfattr calls are actually attribute deletes, so deleting what
already didn't exist is a no-op.

I was working with another user on the IRC channel yesterday who was
seeing the same thing.  The approach we came up with was:

(1) Remove the xattrs *on the server side* to make sure they're well
    and truly gone and there won't be any inconsistent remnants to
    cause problems later.

(2) Mount with lookup-unhashed and unhashed-sticky-bit enabled.

(3) Run scale-n-defrag.sh on the client to redistribute *and* make
    sure all of the maps/links are consistent.

I haven't heard back about the results yet, but a test on one directory
seemed to work correctly, so he felt comfortable doing it across the
whole data set (personally, I would have sought more input from the
real devs first).

> Is it safe to ignore those?  Because it seems to have defragged
> anyway:
>
> Defragmenting directory /distributed//29150
> (/root/defrag-store-29150.log) Completed directory

Seems promising, but the real question is whether examining disk usage
across the bricks shows improved distribution.
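
For step (1), roughly what I have in mind on each server is the sketch
below.  The brick path (/data/export here) and the attribute name are
assumptions on my part; on DHT setups the layout attribute is usually
trusted.glusterfs.dht, but check what getfattr actually reports on your
bricks before removing anything:

    # Inspect which Gluster xattrs exist on directories under the
    # brick path (run as root on each server; the path is a placeholder).
    getfattr -d -m 'trusted.glusterfs' -e hex -R /data/export

    # Remove the layout xattr from every directory.  The attribute name
    # here is an assumption; use whatever the getfattr output shows.
    find /data/export -type d -exec setfattr -x trusted.glusterfs.dht {} \;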
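
For step (2), lookup-unhashed and unhashed-sticky-bit are options of the
cluster/distribute translator, so they belong in the client volfile.  A
rough sketch follows; the volume name and subvolumes are placeholders,
and the option spellings should be checked against your release:

    volume distribute
      type cluster/distribute
      option lookup-unhashed yes
      option unhashed-sticky-bit yes
      subvolumes client1 client2
    end-volume

Remount the clients against the updated volfile before running the
script, and drop the options again once the defrag is done.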
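
As a quick way to answer that last question, I'd compare per-brick usage
on each server before and after the run, something like the following
(paths are placeholders):

    # How full is each brick filesystem on this server?
    df -h /data/export

    # Per top-level directory, to see whether files actually moved.
    du -sm /data/export/* | sort -n

If the numbers even out across the bricks afterwards, the redistribution
did what it was supposed to.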