On Tue, Feb 7, 2012 at 3:31 PM, Loke, Chetan <Chetan.Loke@xxxxxxxxxxxx> wrote:
> I came across UBIFS when I was looking to implement just a backing
> device. So I'm trying to understand if you looked at UBIFS and if parts
> of that could have been used (or if they are used already?). Because in
> the end we need to create an erase-aware FS (without the usual
> defrag/syscall/etc. support) and manage that in conjunction with the
> block layer. Agreed that UBIFS is a generic FS, but we also need to
> worry about erase cycles/counters, etc.
> (http://lxr.linux.no/#linux+v3.2.5/Documentation/filesystems/ubifs.txt#L12
> - all 5 points). Plus, it supports on-the-fly compression. I've never
> used UBIFS, so I don't know how it performs. Maybe we could just pick
> UBI's FS part and stick that with your block-layer code?

I don't quite follow what you're trying to do - you want to turn raw flash into a block device, i.e. an FTL?

Bcache is pretty close to being an FTL, and I think what's left will get finished off sooner or later. I recently implemented flash-only volume support; it works and it's fast, but I haven't implemented the copying garbage collector yet, so internal fragmentation limits its usefulness. It's basically a thin-provisioning volume manager for flash, though.

As far as using bcache for the bottom of a real filesystem - I definitely want to do that. I haven't looked at UBIFS at all; I was thinking exofs might be a decent starting point, UBIFS hadn't crossed my mind. There's a bunch of fun things we could do with it, though.

After multiple cache device support is done (there's not much left - it's just been at the bottom of the todo list), I'm going to write an allocator more suitable for rotating disks, and then it'll do volume management + thin provisioning + caching, all with the same index. The btree is outrageously fast now, too - if there's another btree out there that comes close, I'd really like to know.
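For readers following along: the reason a copy-on-write FTL needs a copying garbage collector is that overwrites are appended elsewhere, leaving dead blocks scattered through otherwise-full erase segments. A minimal toy simulation of the idea is below - every name in it (struct ftl, ftl_gc, the segment sizes) is invented for illustration and has nothing to do with bcache's actual code; it also doesn't model erase counts or cursor wrap-around, just the relocation step.

```c
/*
 * Toy copy-GC for a log-structured FTL sketch (illustrative only).
 * Writes always append; overwriting an LBA invalidates its old
 * physical block. GC picks the sealed segment with the fewest live
 * blocks, copies the live blocks forward, and the segment can then
 * be erased and reused.
 */
#include <assert.h>

#define NSEG       4
#define SEG_BLOCKS 4
#define NBLOCKS    (NSEG * SEG_BLOCKS)

struct ftl {
    int map[NBLOCKS];   /* logical block -> physical block, -1 if unmapped */
    int owner[NBLOCKS]; /* physical block -> logical block, -1 if dead/free */
    int next;           /* append cursor (next free physical block) */
};

static void ftl_init(struct ftl *f)
{
    for (int i = 0; i < NBLOCKS; i++) {
        f->map[i] = -1;
        f->owner[i] = -1;
    }
    f->next = 0;
}

/* Append-only write: new copy goes at the cursor, old copy becomes dead. */
static void ftl_write(struct ftl *f, int lba)
{
    if (f->map[lba] >= 0)
        f->owner[f->map[lba]] = -1;     /* invalidate the old copy */
    f->map[lba] = f->next;
    f->owner[f->next] = lba;
    f->next++;
}

static int seg_live(struct ftl *f, int seg)
{
    int live = 0;
    for (int b = seg * SEG_BLOCKS; b < (seg + 1) * SEG_BLOCKS; b++)
        if (f->owner[b] >= 0)
            live++;
    return live;
}

/* Relocate live data out of the emptiest sealed segment; returns the victim. */
static int ftl_gc(struct ftl *f)
{
    int sealed = f->next / SEG_BLOCKS;  /* segments that are fully written */
    int victim = 0;

    for (int s = 1; s < sealed; s++)
        if (seg_live(f, s) < seg_live(f, victim))
            victim = s;

    for (int b = victim * SEG_BLOCKS; b < (victim + 1) * SEG_BLOCKS; b++)
        if (f->owner[b] >= 0)
            ftl_write(f, f->owner[b]);  /* copy live block forward */

    /* victim now contains only dead blocks and could be erased */
    return victim;
}
```

The internal-fragmentation point falls out of the model directly: without ftl_gc(), overwriting a few LBAs leaves segments that are mostly dead but can never be erased, because one live block pins each of them.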
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html