I have done some more research and found out the following:

thor:~# df -i
Filesystem                Inodes  IUsed     IFree IUse% Mounted on
-[cut]-
/dev/mapper/vgraid-data 475987968 227652 475760316    1% /data

thor:~# strace e2defrag -r /dev/vgraid/data
-[cut]-
mmap2(NULL, 1903955968, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x46512000
(delay 15 seconds while allocating memory)
mmap2(NULL, 475992064, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
-[cut]-

The first allocation seems to be 4 bytes per available inode on my filesystem. I wish now that I had created the FS with fewer inodes, which raises another question: what is the gain of having fewer available inodes? If I recreated my filesystem, would it make sense to create one inode per hundred blocks or so, since that is still far more than I need? Would I gain speed from it?
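
For what it's worth, the arithmetic backs this up: 475987968 inodes * 4 bytes = 1903951872 bytes, which is the first mmap2 size minus a single 4 KiB page, and the failed second allocation is likewise one byte per inode plus a page. If I do rebuild the filesystem, I imagine something like the following would give roughly one inode per hundred 4 KiB blocks; this is an untested sketch, and the numbers are just my guess at what I would want:

# untested sketch: one inode per ~100 blocks at a 4 KiB block size
# (-i takes bytes-per-inode: 100 * 4096 = 409600; -N would set the count directly)
mke2fs -b 4096 -i 409600 /dev/vgraid/data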

-----Original Message-----
From: Magnus Månsson
Sent: 13 October 2006 14:14
To: 'ext3-users@xxxxxxxxxx'
Subject: e2defrag - Unable to allocate buffer for inode priorities

Hi, first of all, apologies if this isn't the right mailing list, but it was the best I could find. If you know of a better one, please tell me.

Today I tried to defrag one of my filesystems. It's a 3.5T filesystem built from six software RAIDs merged together with LVM. I was running ext3, but removed the journal flag with:

thor:~# tune2fs -O ^has_journal /dev/vgraid/data

After that I fsck'd, just to be sure I wouldn't run into any unexpected problems.
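
(Side note, mostly for my own reference: my understanding is that the journal can be put back after the defrag with tune2fs's -j switch, though I have not tried that round-trip yet.)

# my assumption: re-add the ext3 journal once the defrag is done
tune2fs -j /dev/vgraid/data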

So now it was time to defrag. I used this command:

thor:~# e2defrag -r /dev/vgraid/data

After about 15 seconds (once it had eaten all 1.5G of my RAM) I got this answer:

e2defrag (/dev/vgraid/data): Unable to allocate buffer for inode priorities

I am using Debian unstable; here is the version information from e2defrag:

thor:~# e2defrag -V
e2defrag 0.73pjm1
RCS version $Id: defrag.c,v 1.4 1997/08/17 14:23:57 linux Exp $

I also tried -p 256, -p 128 and -p 64 to see if it would use less memory, but it didn't seem to; the program took the same time to abort. Is there any way around this problem? The answer might be to get 10G of RAM, but that's not very realistic; 2G, sure, but I think that's the limit of my motherboard. A large amount of swap might solve it, and that's probably doable, but I guess it would be enormously slow?
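
If I do go the swap route, the plan would be roughly the following; the 8 GiB size and the /swapfile path are placeholders, not something I have tested here:

# sketch: create and enable an 8 GiB swap file (size and path are placeholders)
dd if=/dev/zero of=/swapfile bs=1M count=8192
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile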

Why do I want to defrag? Well, fsck gives me this nice info:

/dev/vgraid/data: 227652/475987968 files (41.2% non-contiguous), 847539147/951975936 blocks

41% sounds like a lot to my ears, and I have a constant stream of reads from the drives; it's too slow already.
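
To see where the fragmentation actually bites, I suppose filefrag from e2fsprogs could show it per file (the path below is only an example):

# per-file extent listing; the path is an example, not a real file of mine
filefrag -v /data/some/large/file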

Very thankful for ideas or others' experiences; maybe it's just not possible with such a large partition and today's tools. After all, ext[23] only supports 4T. Let's hope ext4 makes it into the mainstream kernels within a year.

PS! Please CC me, since I am not on the list, so I don't have to wait for MARC's archive to get the mails.

--
Magnus Månsson
Systems administrator
Massive Entertainment AB
Malmö, Sweden
Office: +46-40-6001000

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users