On 22-6-2017 03:59, Christian Balzer wrote:
>> Agreed. On the topic of journals and double bandwidth, am I correct in
>> thinking that btrfs (as insane as it may be) does not require double
>> bandwidth like xfs? Furthermore with bluestore being close to stable, will
>> my architecture need to change?
>>
> BTRFS at this point is indeed a bit insane, given the current levels of
> support, issues (search the ML archives) and future developments.
> And you'll still wind up with double writes most likely, IIRC.
>
> These aspects of Bluestore have been discussed here recently, too.
> Your SSD/NVMe space requirements will go down, but if you want to have the
> same speeds and more importantly low latencies you'll wind up with all
> writes going through them again, so endurance wise you're still in that
> "Lets make SSDs great again" hellhole.

Please note that I know little about btrfs, but its sister ZFS can include caching/log devices transparently in its architecture. Even better, they are allowed to fail without causing much trouble. :)

Now, the problem I have is that Ceph first journals the writes to its own log, then hands the write over to ZFS, where it gets logged again. So those are two writes (and in the case of ZFS, the log is only read back if the filesystem had a crash).

The good thing about ZFS is that the journal log does not need to be very big: about 5 seconds' worth of the maximum required disk writes. Mine are 1 GB and they have never filled up yet. But the bandwidth used is going to double, due to the doubled amount of writes.

If btrfs logging is anything like this, then you have to look at how you architect the filesystems/devices underlying Ceph.

--WjW

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
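The 5-second sizing rule mentioned above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration: the 5-second figure matches the default ZFS transaction-group flush interval, and the 2 Gbit/s ingest rate chosen here is a hypothetical example, not from the thread.

```python
# Rough SLOG sizing sketch: ZFS commits its intent log to the main pool
# roughly every 5 seconds (the default zfs_txg_timeout), so the log device
# only needs to hold about 5 s of synchronous writes at the peak write rate.
TXG_FLUSH_INTERVAL_S = 5  # default zfs_txg_timeout, in seconds

def slog_size_bytes(max_write_bytes_per_s: float) -> float:
    """Minimum useful SLOG capacity for a given peak synchronous write rate."""
    return max_write_bytes_per_s * TXG_FLUSH_INTERVAL_S

# Hypothetical example: a host ingesting 2 Gbit/s of synchronous writes.
rate_bytes_per_s = 2e9 / 8          # 2 Gbit/s expressed in bytes per second
needed = slog_size_bytes(rate_bytes_per_s)
print(f"{needed / 1e9:.2f} GB")     # ~1.25 GB, in line with the ~1 GB device above
```

This also shows why a 1 GB log device "never fills up" on typical hardware: filling it within one flush interval would require sustaining well over a gigabit per second of synchronous writes.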