Re: POHMELFS high performance network filesystem. Transactions, failover, performance.

On Wed, May 14, 2008 at 06:41:53AM -0700, Sage Weil (sage@xxxxxxxxxxxx) wrote:
> Yes.  Only a pagevec at a time, though... apparently 14 is a small enough 
> number not to bite too many people in practice?

Well, POHMELFS can use up to 90 pages out of 512 or 1024 on x86, but
that just moves the problem a bit closer.

IMHO the problem may in fact be that the copy is a more significant
overhead than the per-page socket lock plus direct DMA (I believe most
GigE and faster links, and of course RDMA, support scatter-gather and
RX checksumming). It has to be tested, so I will change the POHMELFS
writeback path to try it. If there is no performance degradation (and I
believe there will not be, though no improvement either, since the tests
were always network bound), I will use that approach.

-- 
	Evgeniy Polyakov
