On Wed, 2014-01-29 at 13:44 +0100, Andreas Rohner wrote:

> I hope I understand your approach correctly. Can it be summarized as
> follows: Instead of overwriting the super block you want to reserve the
> first segment to write the super block in a round-robin way into groups.

Not the superblock but the super root. They are different things.

> Thereby spreading the writes over a larger area. Then the groups should
> probably have a typical erase block size like 512k.

Group size has no relation to erase block size. A group is not an erase
block. Moreover, different NAND chips have different erase block sizes;
on modern chips an erase block can be 8 MB. I don't think group size
needs any relation to erase block size. Groups can have whatever size
makes the algorithm efficient. This special segment will be filled by a
COW policy anyway.

> If that is true, I don't think you need any special algorithm to search
> the latest super block. You just read in the whole segment at mount time
> and select the one with the biggest s_last_cno.

Even if you read the whole segment, you still need to search it. But
with a smart algorithm you don't need to read the whole segment into
memory at all. Reading it can be an expensive operation that makes
mount slower.

> What about Ryusuke's suggestion of never updating the super block and
> instead using a clever segment allocation scheme that allows a binary
> search for the latest segment?

I think you need to read our discussion more carefully. Of course my
suggestion can have disadvantages, and it needs to be discussed more
deeply. But right now I have the feeling that you still misunderstand
my suggestion.

Thanks,
Vyacheslav Dubeyko.

--
To unsubscribe from this list: send the line "unsubscribe linux-nilfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
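P.S. As an aside, the binary-search idea discussed above can be sketched
in a few lines. This is only an illustration under assumptions that are
mine, not from the thread: the reserved segment holds n slots written
round-robin, every slot has been written at least once, and each slot
carries a strictly increasing checkpoint number. Under those assumptions
the checkpoint numbers form a rotated increasing sequence, so the newest
slot can be located in O(log n) reads instead of scanning the whole
segment. The helper name find_latest_slot is hypothetical, not actual
NILFS code.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch: find the most recently written slot in a round-robin log.
 * cno[] holds one checkpoint number per slot; numbers strictly increase
 * in write order and the writer wraps back to slot 0 when the segment
 * is full, so cno[] is a rotated increasing sequence.
 * Assumption (not from the thread): every slot was written at least once.
 */
static size_t find_latest_slot(const uint64_t *cno, size_t n)
{
	size_t lo = 0, hi = n - 1;

	if (cno[lo] < cno[hi])	/* no wrap yet: last slot is newest */
		return hi;

	/* Binary search for the wrap point (the oldest slot). */
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (cno[mid] >= cno[0])
			lo = mid + 1;	/* still on the newer side */
		else
			hi = mid;	/* past the wrap point */
	}
	return lo - 1;	/* the newest slot precedes the oldest one */
}
```

At mount time this would replace a full scan of the segment with a
logarithmic number of block reads. A real implementation would also have
to handle partially written, erased, or corrupted slots, which is
exactly where the discussion of a "smart algorithm" matters.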