In a Ceph cluster, is flashcache in writeback mode considered safe? In case of an SSD failure, the flashcache contents should already have been replicated (by Ceph) to other servers, right?

I'm planning to use this configuration: a Supermicro chassis with 12 spinning disks and 2 SSDs.

- 6 spinning disks will have their Ceph journals on SSD1; the other 6 disks will have their journals on SSD2.
- One OSD per spinning disk (a single XFS filesystem spanning the whole disk).
- XFS metadata on one partition of SSD1, and the XFS flashcache on another partition of SSD1.

So, three partitions per OSD on the SSD. How big should these partitions be? Any advice?

No RAID at all, except for one RAID-1 volume made of a 10 GB partition on each SSD, for the OS. Log files will be shipped to a remote server, so writes to the OS partitions will be very low.

Any hints? Advice? Criticism?
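For the journal partitions specifically, Ceph's documented rule of thumb is journal size = 2 * expected disk throughput * `filestore max sync interval`. A quick sketch of the arithmetic for this layout; the disk throughput figure is an assumption, not a measured value:

```python
# Rough journal partition sizing per Ceph's rule of thumb:
#   journal size = 2 * expected throughput * filestore max sync interval
# The throughput below is an assumed figure for a 7200 rpm spinning disk.

def journal_size_mb(throughput_mb_s: float, sync_interval_s: float) -> float:
    """Minimum journal size in MB for one OSD."""
    return 2 * throughput_mb_s * sync_interval_s

DISK_THROUGHPUT_MB_S = 120   # assumed sequential write speed of one spinner
SYNC_INTERVAL_S = 5          # default 'filestore max sync interval' (seconds)
JOURNALS_PER_SSD = 6         # six OSD journals share each SSD in this layout

per_journal = journal_size_mb(DISK_THROUGHPUT_MB_S, SYNC_INTERVAL_S)
total_gb = per_journal * JOURNALS_PER_SSD / 1024

print(f"per journal: {per_journal:.0f} MB")        # 1200 MB
print(f"six journals on one SSD: {total_gb:.1f} GB")
```

In practice people round well above this minimum (e.g. 5-10 GB per journal) to leave headroom, which still leaves most of each SSD free for the XFS metadata and flashcache partitions.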