Thanks. Copying the response back to the list.

Roland

2010/8/1 Gregory Farnum <gregf@xxxxxxxxxxxxxxx>:
> On Sun, Aug 1, 2010 at 11:02 AM, Roland Rabben <roland@xxxxxxxx> wrote:
>> Great. I'll have a look at BTRFS. Any drawbacks with BTRFS? It looks
>> pretty young.
> It is pretty young, but we expect it'll be ready (at least for
> replicated storage) as soon as Ceph is. :)
>
>> So if I understand you correctly: use BTRFS to combine the disks into
>> logical volumes. Perhaps 3 logical volumes across 12 disks each?
>> Then run 3 OSDs, each with 4 GB of RAM.
> Well, actually you'd want to do 3 logical volumes across 11 disks
> each, and save one disk per OSD instance to provide a journaling
> device.
>
>> Or 6 logical volumes across 6 disks each, then run 6 OSDs with 2 GB of RAM each.
> We don't really have performance data to determine which of these
> setups will be better for you; you'd have to experiment. Each OSD
> daemon will take up between 200 and 800 MB of RAM to do its work, but
> any extra will be used by the kernel to cache file data, and depending
> on your workload that can be a serious performance advantage.
> It's not like you need to manually partition the RAM or anything, though!
>
>> Does BTRFS handle the situation where a disk in a logical volume fails?
>> Any RAID 5-like features where it could continue running with a failed
>> disk and rebuild once the failed disk is replaced?
> Hmm, I don't know. I'm sure somebody on the list does, though, if you
> want to move the discussion back on-list. :) (We don't get enough
> traffic to need discussions to stay off-list for traffic reasons or
> anything, and if you keep it on-list Sage [the lead developer] will see it
> all.)
>
>> Any performance gains from a larger number of disks in a logical BTRFS volume?
> Not sure. I think btrfs can stripe across disks, but depending on your
> network connection that's more likely to be the limiting factor. :)
> -Greg

--
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland@xxxxxxxx
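
For illustration, here is a minimal sketch of the three-OSD layout Greg describes, written as a mkcephfs-era ceph.conf fragment. The host name, device names, mount path, and even the option names (btrfs devs, osd journal, osd data) are assumptions that should be checked against the Ceph release actually in use; the point is simply eleven data disks per OSD plus one whole disk reserved as that OSD's journal.

    ; Hypothetical 36-disk node: 3 OSDs, each with 11 data disks and 1 whole-disk journal.
    ; Device names, paths, and option names are placeholders; verify them against the
    ; Ceph version being deployed.
    [osd.0]
            host = storage1
            osd data = /data/osd0
            ; one btrfs filesystem created across these 11 disks (mkcephfs-style setup)
            btrfs devs = /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl
            osd journal = /dev/sdm   ; the 12th disk in the group, used whole as the journal

    ; osd.1 and osd.2 would repeat the same pattern over the remaining two groups of 12 disks.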