On Wednesday 09 May 2007, Steffen Prohaska wrote:
> The old implementation executed 'cvs status' for each file touched by
> the patch to be applied. The new code calls 'cvs status' only once and
> parses cvs's output to collect status information of all files
> contained in the cvs working copy.
>
> Runtime is now independent of the number of modified files. A drawback
> is that the new code retrieves status information for all files even
> if only a few are touched. The old implementation may be noticeably
> faster for small patches to

Ouch, let's see now. My working cvs checkout contains ~25k files and my
typical commit touches 5-20 files. A quick (well...) test says cvs
status on my checkout takes about five minutes to execute. Compare that
with my typical exportcommit time of about ten seconds. If you really
need this, add a switch to select it.

We're still missing a check for the case where new files/directories
have been added on the server but are missing from the checkout (or why
not run an update first?). If your commits touch this many files you'll
need that check, or it will hurt a lot when things fail.

> large working copies. However, the old implementation doesn't scale if
> more files are touched, especially in remotely located cvs
> repositories.

How come your commits are so large that you'd prefer this behaviour?

--
robin
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
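
For reference, the one-pass approach described in the quoted patch could be sketched roughly like this. This is a hypothetical Python helper, not the actual git-cvsexportcommit code (which is Perl); it assumes cvs's usual "File: <name> ... Status: <status>" header lines and does not handle file names containing whitespace:

```python
import re

# Match the per-file header line that `cvs status` prints, e.g.
#   File: foo.c            Status: Up-to-date
STATUS_RE = re.compile(r"^File:\s+(\S+)\s+Status:\s+(.+?)\s*$")

def parse_cvs_status(output):
    """Collect {filename: status} from the output of a single
    `cvs status` run over the whole working copy, instead of
    invoking `cvs status` once per touched file."""
    statuses = {}
    for line in output.splitlines():
        m = STATUS_RE.match(line)
        if m:
            statuses[m.group(1)] = m.group(2)
    return statuses

# Example input resembling real `cvs status` output:
sample = """\
===================================================================
File: foo.c            Status: Up-to-date

   Working revision:    1.5

===================================================================
File: bar.c            Status: Locally Modified
"""

print(parse_cvs_status(sample))
# -> {'foo.c': 'Up-to-date', 'bar.c': 'Locally Modified'}
```

The trade-off debated above is exactly this: the one-pass parse makes runtime independent of how many files the patch touches, but it pays the full cost of statting the entire checkout, which is the wrong deal for a 25k-file working copy and a 5-20 file commit.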