> > That's an average. For a random seek to exceed that, it's going to
> > have to span many cylinders. Given the considerable size of a modern
> > cylinder, that's a pretty big jump. Single applications will tend to
> > have their data lumped somewhat together on the drive.
>
> Only at the start, which is usually when people benchmark. But after a
> while filesystems fragment. Files get distributed all over the disk,
> and the files themselves get spread out as they grow. And suddenly an
> FS that was fine a month ago is too slow.

There can be a lot of application-dependent variation, of course, but
even with a fragmented disk, many applications still tend to wind up
with their files clustered together on the disk.

If the application writes each file once and never updates it, creating
many more files as time goes by, then indeed the database will grow
ever more scattered. Random-access files, of course, may wind up
scattered all over the drive, even if there is only one file used by
the app.

If the application tends to update the majority of its files on a
regular basis, however, then the file updates tend to fall in little
pools across the disk, rather than being scattered in a perfectly
random fashion.

One's mileage will definitely vary.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
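As an aside, the "files that grow together get spread out" effect is easy to see with a toy model. The sketch below is purely illustrative (a hypothetical first-fit-style allocator that hands out block numbers in order, nothing like a real filesystem's allocator): it contrasts writing each file in a single pass against growing many files with interleaved appends, and measures how far apart each file's first and last blocks end up.

```python
# Toy model of block allocation (illustrative only; real filesystems
# use far smarter allocators with per-file preallocation and groups).

def allocate(workload, n_files, blocks_per_file):
    """Hand out block numbers in order; return each file's block list."""
    files = {i: [] for i in range(n_files)}
    next_block = 0
    if workload == "sequential":
        # Each file is written completely before the next one starts.
        order = [f for f in range(n_files) for _ in range(blocks_per_file)]
    else:
        # "interleaved": all files grow together, one block at a time.
        order = [f for _ in range(blocks_per_file) for f in range(n_files)]
    for f in order:
        files[f].append(next_block)
        next_block += 1
    return files

def spread(blocks):
    """Distance between a file's first and last block on the 'disk'."""
    return max(blocks) - min(blocks)

seq = allocate("sequential", n_files=10, blocks_per_file=100)
inter = allocate("interleaved", n_files=10, blocks_per_file=100)

print(max(spread(b) for b in seq.values()))    # 99: each file contiguous
print(max(spread(b) for b in inter.values()))  # 990: each file spans the disk
```

With ten 100-block files, the write-once pattern leaves every file contiguous (spread 99 blocks), while the interleaved-growth pattern leaves every file striped across all 1000 blocks (spread 990), which is exactly why reads of a single grown file can turn into long seeks.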