On Mon, May 17, 2010 at 05:24:39PM +0300, Boaz Harrosh wrote:
> On 05/17/2010 04:53 PM, J. Bruce Fields wrote:
> > On Wed, May 12, 2010 at 04:28:12PM -0400, bfields wrote:
> >> On Wed, May 12, 2010 at 09:46:43AM +0300, Benny Halevy wrote:
> >>> On May. 10, 2010, 6:36 +0300, Zhang Jingwang <zhangjingwang@xxxxxxxxxxxx> wrote:
> >>>> Optimize for sequential write. Layout info and tags are organized by
> >>>> file offset. When appending data to a file, the whole list is
> >>>> examined, which introduces a notable performance decrease.
> >>>
> >>> Looks good to me.
> >>>
> >>> Fred, can you please double check?
> >>
> >> I don't know if Fred's still up for reviewing block stuff?
> >>
> >> I've been trying to keep up with at least some minimal testing, but not
> >> as well as I'd like.
> >>
> >> The one thing I've noticed is that the connectathon general test has
> >> started failing right at the start with an IO error. The last good
> >> version I tested was b5c09c21, which was based on 33-rc6. The earliest
> >> bad version I tested was 419312ada, based on 34-rc2. A quick look at
> >> network traces from the two runs didn't turn up anything obvious. I
> >> haven't had the chance yet to look closer.
> >
> > As of the latest (6666f47d), in my tests the client is falling back on
> > IO to the MDS and doing no block IO at all. b5c09c21 still works, so
> > the problem isn't due to a change in the server I'm testing against. I
> > haven't investigated any more closely.
> >
>
> You might be hitting the .commit bug, no? Still no fix. I'm using a
> workaround for objects; I'm not sure how it affects blocks. I think you
> should see that the very first IO goes through the layout driver and the
> IO is then redone through the MDS, for each node, even though write/read
> returned success, because commit returns NOT_ATTEMPTED. But I might be
> totally off.

I don't believe it's even attempting a LAYOUTGET. I'll take a look at
the network.

--b.
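
Purely as an illustration of the approach described in the quoted commit
message (not the actual patch): a minimal userspace sketch of an
offset-sorted extent list that keeps a tail hint, so a sequential append
takes a fast path instead of walking the whole list. The names here
(struct pnfs_extent, struct extent_list, add_extent) are invented for the
example and do not come from the block layout driver.

	/*
	 * Hypothetical sketch only.  Extents are kept sorted by file
	 * offset; sequential appends hit the tail hint and skip the
	 * full list walk that the commit message describes as slow.
	 */
	#include <stddef.h>

	struct pnfs_extent {
		unsigned long long	offset;
		unsigned long long	length;
		struct pnfs_extent	*next;
	};

	struct extent_list {
		struct pnfs_extent	*head;
		struct pnfs_extent	*tail;	/* highest offset seen so far */
	};

	/* Insert a new extent, keeping the list sorted by file offset. */
	static void add_extent(struct extent_list *list, struct pnfs_extent *new)
	{
		struct pnfs_extent **pp;

		/* Fast path: a sequential append lands at or after the tail. */
		if (list->tail && new->offset >= list->tail->offset) {
			new->next = NULL;
			list->tail->next = new;
			list->tail = new;
			return;
		}

		/* Slow path: walk the list to find the insertion point. */
		for (pp = &list->head; *pp; pp = &(*pp)->next)
			if (new->offset < (*pp)->offset)
				break;
		new->next = *pp;
		*pp = new;
		if (!new->next)
			list->tail = new;
	}

The point of the tail hint is just that an append-heavy workload degrades
from O(n) per insert to O(1) in the common case; how the real patch
organizes the layout info and tags may differ.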