Hello! I've always thought of inodes as something unique to every file on the same file system. Today I saw something odd that shook that theory: I found two directories on what looked like the same filesystem with the same inode number, inode 1. Here is some info:

[root@test ~]# ls -lid /sys
1 drwxr-xr-x 11 root root 0 Jul 25 13:06 /sys
[root@test ~]# ls -lid /dev/pts
1 drwxr-xr-x 2 root root 0 Jul 25 13:06 /dev/pts
[root@test ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      36756152   2138060  32720828   7% /
/dev/hda1               101086     11818     84049  13% /boot
tmpfs                   253652         0    253652   0% /dev/shm
[root@test ~]# ls /sys/
block  bus  class  devices  firmware  fs  kernel  module  power
[root@test ~]# ls /dev/pts/
0
[root@test ~]# stat /sys/ ; stat /dev/pts
  File: `/sys/'
  Size: 0             Blocks: 0          IO Block: 4096   directory
Device: 0h/0d   Inode: 1           Links: 11
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2008-07-28 07:51:41.543581362 -0100
Modify: 2008-07-25 13:06:46.705937679 -0100
Change: 2008-07-25 13:06:46.705937679 -0100
  File: `/dev/pts'
  Size: 0             Blocks: 0          IO Block: 4096   directory
Device: bh/11d  Inode: 1           Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2008-07-28 07:51:53.067829410 -0100
Modify: 2008-07-25 13:06:46.707937375 -0100
Change: 2008-07-25 13:06:46.707937375 -0100
[root@larscen ~]# fdisk -l

Disk /dev/hda: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        4865    38973690   8e  Linux LVM
[root@test ~]# uname -a
Linux test 2.6.18-92.el5 #1 SMP Tue Jun 10 18:49:47 EDT 2008 i686 i686 i386 GNU/Linux

As you can see, both /sys and /dev/pts have inode 1. This is not a problem report, just a question. I can see that the device number is not the same for these two directories, yet neither of them shows up in df, so they appear to be on the same fs. I searched for a couple more inode numbers (find / -inum NUMBER) and found that duplicates like this are very common.

The OS is CentOS 5, but the same seems to be the case on Debian.

Side question: at what percentage does inode usage become critical? I have a server with around 66% of its inodes used, and I wonder whether I should do something about it or just leave it alone.

Thanks
Lars
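[Editorial note: a file on Linux is identified by the pair (device, inode), not by the inode number alone; /sys and /dev/pts are separate pseudo-filesystems (sysfs and devpts), each with its own inode numbering, which is why both can hand out inode 1. A minimal sketch to see this yourself, assuming GNU coreutils stat as shipped with CentOS 5 (the script name devino.sh is made up here):]

    #!/bin/bash
    # devino.sh - print device:inode for each path given.
    # Two paths refer to the same file only if BOTH fields match.
    for path in "$@"; do
        stat -c '%d:%i  %n' -- "$path"
    done

Running it as "./devino.sh /sys /dev/pts /" should show the same inode (1) for /sys and /dev/pts but different device numbers, so the kernel never confuses the two.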
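[Editorial note: the find / -inum NUMBER searches cross mount points by default, so hits from different filesystems make duplicate inode numbers look common. A sketch that stays on a single filesystem, using a made-up inode number 1234 purely for illustration:]

    # -xdev stops find from descending into other mounted filesystems,
    # so every path it prints is a hard link to one and the same file.
    find / -xdev -inum 1234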
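[Editorial note on the side question: per-filesystem inode usage can be watched with df -i; the IUse% column is the one that matters, and trouble only starts when a filesystem actually reaches 100%, at which point new files can no longer be created even if free blocks remain. A small sketch of a check; the 90% warning threshold is an arbitrary choice here:]

    #!/bin/bash
    # Warn when inode usage on / crosses a threshold (90% picked arbitrarily).
    # df -P keeps each filesystem on one line even with long device names.
    usage=$(df -Pi / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$usage" -gt 90 ]; then
        echo "inode usage on / is ${usage}%"
    fi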