On 2017-01-29 22:09, Willem Jan Withagen wrote:
> The disadvantage is that there will be a double write per original write:
> (ceph) first write goes to the journal file
> (zfs)  write is stored in the write queue
> (zfs)  write goes to the ZIL (SSD) if it is a synced write
> (zfs)  async write to disk when a write slot is available
> (ceph) read from the ZFS store,
> (zfs)  data is delivered from ARC (RAM), L2ARC (SSD), or HD
> (ceph) write data to the filestore
> (zfs)  write is stored in the write queue
> (zfs)  write goes to the ZIL (SSD) if it is a synced write
> (zfs)  async write to disk when a write slot is available
>
> And I had hoped to forgo the Ceph journal write/read cycle.

You have a slight misconception there, not that it matters much to your problem. A Ceph OSD will never read its journal under normal operation. The OSD only commits data to the journal before committing it to its filestore; the journal is replayed only after an OSD crash or otherwise abnormal termination.

Given the way Ceph OSDs with filestore have been designed, I don't think there is a way to fully utilize ZFS features.

-K.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
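The write-ahead semantics described above can be sketched in a few lines. This is purely illustrative, not Ceph code: all class and method names are hypothetical, and the real OSD journal is a raw file or block device, not a Python list. The point it demonstrates is that the journal is append-only on the normal write path and is only ever read back during replay after a crash.

```python
class JournaledStore:
    """Toy model of write-ahead journaling (hypothetical, not a Ceph API)."""

    def __init__(self):
        self.journal = []   # append-only journal (the SSD journal file in Ceph)
        self.store = {}     # the backing filestore
        self.applied = 0    # how many journal entries have reached the store

    def write(self, key, value):
        # 1) Commit to the journal first (durably, before acking the client).
        self.journal.append((key, value))
        # 2) Then apply to the filestore. On the normal path the journal
        #    is never read back -- it is write-only.
        self.store[key] = value
        self.applied += 1

    def replay(self):
        # Runs only on startup after a crash or abnormal termination:
        # re-apply journal entries that never made it to the filestore.
        for key, value in self.journal[self.applied:]:
            self.store[key] = value
        self.applied = len(self.journal)


# Simulate a crash between the journal commit and the filestore apply:
s = JournaledStore()
s.journal.append(("obj1", b"data"))  # journaled, crashed before apply
s.replay()                           # recovery re-applies the lost entry
assert s.store["obj1"] == b"data"
```

This also shows why stacking the filestore journal on top of the ZIL doubles the writes: every payload is persisted once into the journal and once into the store, and ZFS adds its own ZIL commit underneath each of those.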