On Wed, 16 Jan 2013, Gandalf Corvotempesta wrote:
> In a ceph cluster, is flashcache with writeback considered safe?
> In case of SSD failure, the flashcache contents should already have been
> replicated (by ceph) to other servers, right?

This sort of configuration effectively bundles the disk and SSD into a
single unit, where the failure of either results in the loss of both.
From Ceph's perspective, it doesn't matter if the thing it is sitting on
is a single disk, an SSD+disk flashcache thing, or a big RAID array.  All
that changes is the probability of failure.  The thing to watch out for
is *knowing* that the whole is lost when one part fails (vs plowing ahead
with a corrupt fs).

> I'm planning to use this configuration: Supermicro with 12 spinning
> disks and 2 SSDs.
> 6 spinning disks will have their ceph journal on SSD1, the other 6 disks
> will have their ceph journal on SSD2.
>
> One OSD for each spinning disk (a single XFS filesystem for the whole disk).
> XFS metadata on a partition of SSD1
> XFS flashcache on another partition of SSD1
>
> So, 3 partitions for each OSD on the SSD.
> How big should these partitions be?  Any advice?
>
> No RAID at all, except for one RAID-1 volume made from a 10GB partition
> on each SSD, for the OS.  Log files will be replicated to a remote
> server, so writes to the OS partitions are very, very low.
>
> Any hints?  Advice?  Criticism?

I would worry that there is a lot of stuff piling onto the SSD and it may
become your bottleneck.  My guess is that another 1-2 SSDs will be a
better 'balance', but only experimentation will really tell us that.

Otherwise, those all seem to be good things to put on the SSD!

sage
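
To put a rough number on "all that changes is the probability of failure",
here is a small sketch of how bundling a disk and an SSD into one writeback
unit changes the chance of losing that OSD.  The annual failure rates below
are illustrative assumptions, not measured values:

# Rough failure-probability sketch for the flashcache-writeback layout.
# The AFR numbers are illustrative assumptions, not measured values.

afr_disk = 0.04   # assumed annual failure rate of one spinning disk
afr_ssd = 0.02    # assumed annual failure rate of one SSD

# A plain OSD is lost only when its own disk fails.
p_plain_osd_lost = afr_disk

# With writeback flashcache, the disk and the SSD form a single unit:
# losing either one loses the OSD (assuming independent failures).
p_bundled_osd_lost = 1 - (1 - afr_disk) * (1 - afr_ssd)

# With 6 OSDs sharing one SSD, that SSD failing takes out all 6 OSDs at
# once, so the SSD becomes the larger failure domain.
print(f"plain OSD loss probability/yr:   {p_plain_osd_lost:.3f}")
print(f"bundled OSD loss probability/yr: {p_bundled_osd_lost:.3f}")

The exact numbers matter less than the point above: Ceph re-replicates in
either case, provided the failure is detected rather than plowed through
with a corrupt fs.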
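
A back-of-the-envelope check of the bottleneck concern for the proposed
layout (ceph journals for 6 OSDs, XFS metadata, and flashcache all on one
SSD).  Every throughput figure below is an assumption for illustration,
not a benchmark:

# Rough estimate of write traffic landing on one SSD in the proposed
# layout.  All throughput numbers are assumptions, not measurements.

osds_per_ssd = 6
disk_write_mb_s = 100     # assumed sustained write rate per spinning disk
ssd_write_mb_s = 250      # assumed sustained write rate of one SATA SSD

# Every OSD write hits the journal first, so at full tilt the journals
# alone need roughly the aggregate bandwidth of the spinning disks.
journal_mb_s = osds_per_ssd * disk_write_mb_s

# Flashcache in writeback mode also absorbs some fraction of the same
# writes before destaging them to the disks; assume half land in cache.
flashcache_fraction = 0.5
flashcache_mb_s = osds_per_ssd * disk_write_mb_s * flashcache_fraction

# XFS metadata writes are ignored here as comparatively small.
total_ssd_mb_s = journal_mb_s + flashcache_mb_s

print(f"journal traffic per SSD:    {journal_mb_s} MB/s")
print(f"flashcache traffic per SSD: {flashcache_mb_s:.0f} MB/s")
print(f"total vs SSD capability:    {total_ssd_mb_s:.0f} MB/s vs {ssd_write_mb_s} MB/s")

With numbers anywhere in this neighbourhood the single SSD saturates well
before the six spinning disks do, which is why spreading the load over
another 1-2 SSDs (or measuring first) looks attractive.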