Hmmm, OK - this really seems to belong to one virtual machine that is writing to a single object over and over again. Thanks for your hint. I guess I will have to look into the VM.

Christian

2011/11/24 Gregory Farnum <gregory.farnum@xxxxxxxxxxxxx>:
> PGInfos are updated on disk with every write to the PG. I'm surprised
> that one PG has so much more activity than the others, but that's why
> that inode has so much activity.
>
> What are you doing with this installation, and roughly how many PGs
> are on the node?
>
> On Thu, Nov 24, 2011 at 3:33 AM, Christian Brunner <chb@xxxxxx> wrote:
>> I'm running a btrfs-debug patch on one of our nodes. This patch prints
>> calls to btrfs_orphan_add. I'm still waiting for the problem the patch
>> was intended to trace, but in the logs I found something Ceph-related
>> that I don't understand:
>>
>> When I look at the btrfs_orphan_add messages, there is one inode that
>> is updated over and over again. When I count the inodes in my log, I
>> see the following distribution (only inodes with more than 500
>> btrfs_orphan_add calls are listed):
>>
>>  #cnt inode
>>  1117 7403
>> 17218 7457
>>   848 7484
>>   539 7984
>>  1446 9098
>>   635 9346
>>
>> When I look at the filesystem, I can see that inode 7457 belongs to:
>>
>> current/meta/DIR_8/DIR_A/pginfo\u2.8d__0_28D2BFA8
>>
>> I don't know how the pginfo mechanism works, but I really wonder why the
>> distribution is so uneven.
>>
>> "strace -f | grep 28D2BFA8" gives me the following output:
>>
>> [...]
>> [pid 3840] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=8, ...}) = 0
>> [pid 3840] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=8, ...}) = 0
>> [pid 3840] truncate("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", 0) = 0
>> [pid 3840] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
>> [pid 3840] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
>> [pid 3840] open("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", O_WRONLY|O_CREAT, 0644) = 64
>> [pid 3841] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=8, ...}) = 0
>> [pid 3841] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=8, ...}) = 0
>> [pid 3841] truncate("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", 0) = 0
>> [pid 3841] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
>> [pid 3841] stat("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
>> [pid 3841] open("/ceph/osd.015/current/meta/DIR_2/pginfo\\u2.d6__0_28D16932", O_WRONLY|O_CREAT, 0644) = 64
>> [...]
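For what it's worth, the stat/truncate/rewrite cycle in that strace excerpt can be sketched on a throwaway file like the following; the filename and contents are made up here, and my understanding is that it is the truncate() step that puts the inode on the btrfs orphan list (hence the btrfs_orphan_add calls):

```shell
# Sketch of the pattern from the strace above, on a scratch file.
f=pginfo.sample
printf '%s' "8bytes.." > "$f"   # small file, like st_size=8 in the trace
stat -c '%s' "$f"               # size before: 8
truncate -s 0 "$f"              # shrink to zero; on btrfs this is where
                                # btrfs_orphan_add would be expected to fire
stat -c '%s' "$f"               # size after truncate: 0
printf '%s' "8bytes.." > "$f"   # rewrite the file (the trace instead reopens
                                # with O_WRONLY|O_CREAT and writes)
```

Note that the shell redirection here truncates via O_TRUNC rather than a separate truncate() call, so it is only an approximation of the syscall sequence the OSD issues.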
>>
>> Regards,
>> Christian
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
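As an aside, a per-inode distribution like the table quoted above can be extracted from such a log with a standard pipeline. The log format produced by the debug patch is an assumption here (the inode number is taken as the last field of each matching line):

```shell
# Build a tiny sample log in the assumed format of the debug patch.
for i in 1 2 3; do echo "btrfs_orphan_add: ino 7457"; done  > orphan.log
echo "btrfs_orphan_add: ino 7403"                          >> orphan.log

# Count btrfs_orphan_add calls per inode, most frequent first.
grep btrfs_orphan_add orphan.log | awk '{print $NF}' | sort | uniq -c | sort -rn
```

Against a real log, the `awk '{print $NF}'` field selector would need to match however the patch actually prints the inode number.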