Oscar, a few thoughts:

o I think you might have some misunderstandings about how Ceph works. Ceph is
  best deployed as a single cluster spanning multiple servers, generally at
  least 3. Is that your plan? It sounds as though you're thinking of Ceph
  managing only the drives local to each of your converged VDI hosts, the way
  local RAID would. Ceph doesn't work that way -- well, technically it could,
  but it wouldn't be a good architecture. You want at least 3 servers, with
  all of the Ceph OSDs in a single cluster.

o Re RAID0:

> Then, may I understand that your advice is a RAID0 for each 4TB? For a
> balanced configuration...
>
> 1 osd x 1 disk of 4TB
> 1 osd x 2 disks of 2TB
> 1 osd x 4 disks of 1TB

  For performance, a greater number of smaller drives is generally going to
  be best. VDI desktops are going to be fairly latency-sensitive, and you'd
  really do best with SSDs. All those desktops thrashing a small number of
  HDDs is not going to deliver tolerable performance.

  Don't use RAID at all for the OSDs. Even if you get hardware RAID HBAs,
  configure JBOD/passthrough mode so that the OSDs are deployed directly on
  the drives. This minimizes latency, as well as the manifold hassles one
  adds when wrapping drives in HBA RAID volumes.

o Re CPU:

> The other question is considering having one OSD vs 8 OSDs... 8 OSDs will
> consume more CPU than 1 OSD (RAID5)?
>
> As I want to share compute and osd in the same box, resources consumed by
> OSD can be a handicap.

  If the CPU cycles used by Ceph are a problem, your architecture has bigger
  problems, IMHO. You need to design for a safety margin of RAM and CPU to
  accommodate spikes in usage, both by Ceph and by your desktops. There is no
  way each of the systems you describe is going to have enough cycles for 100
  desktops concurrently active. You'd be allocating each of them only ~3GB of
  RAM -- I've not had to run MS Windows 10, but even with page sharing that
  seems awfully tight. (A rough RAM sketch is at the end of this message.)

Since you mention ProLiant and 8 drives, I'm going to assume you're targeting
the DL360? If possible, I suggest considering the 10SFF models to get you
more drive bays, ditching the optical drive. If you can get rear bays to boot
the OS from, that's better yet, so you free up front panel drive bays for OSD
use. You want to maximize the number of drive bays available for OSDs, and if
at all possible you want to avoid deploying the operating system's
filesystems and OSDs on the same drives.

With the numbers you mention throughout the thread, it would seem as though
you could end up with as little as 80GB of usable space per virtual desktop
-- will that meet your needs? (A quick back-of-the-envelope sketch follows
below.) One of the difficulties with converged architectures is that storage
and compute don't necessarily scale at the same rate. To that end I suggest
considering 2U 25-drive-bay systems so that you have room to add more drives.
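For what it's worth, here is the back-of-the-envelope arithmetic behind that
~80GB figure, written as a small Python sketch. The host count, drive count
and size, replication factor, and fill target below are assumptions pieced
together from the numbers floated in this thread, not a statement of your
actual plan -- plug in your own values.

    # Rough usable-capacity-per-desktop estimate for a small replicated
    # Ceph cluster. All inputs are illustrative assumptions.
    hosts = 3                # converged VDI/OSD hosts (assumed)
    drives_per_host = 8      # OSD drives per host (assumed)
    drive_tb = 4.0           # raw TB per drive (assumed)
    replication = 3          # replicated pool, size=3
    fill_target = 0.75       # leave headroom for recovery and rebalancing
    desktops = 300           # ~100 desktops per host (assumed)

    raw_tb = hosts * drives_per_host * drive_tb
    usable_tb = raw_tb / replication * fill_target
    print(f"{raw_tb:.0f} TB raw -> {usable_tb:.0f} TB usable "
          f"-> {usable_tb * 1000 / desktops:.0f} GB per desktop")
    # 96 TB raw -> 24 TB usable -> 80 GB per desktop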
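And the RAM side of the same exercise. The total host RAM and overhead
figures here are purely hypothetical -- I don't know your actual config --
but they illustrate why ~3GB per desktop, before any safety margin, makes me
nervous. Note that BlueStore's osd_memory_target defaults to roughly 4GB per
OSD.

    # Rough per-desktop RAM budget on a converged host running desktops
    # alongside 8 Ceph OSDs. All figures are illustrative assumptions.
    host_ram_gb = 384         # hypothetical total host RAM -- use your own
    osds = 8
    osd_ram_gb = 4            # BlueStore osd_memory_target is ~4GB by default
    host_overhead_gb = 16     # hypervisor/OS/page-cache headroom (assumed)
    desktops = 100

    left = host_ram_gb - osds * osd_ram_gb - host_overhead_gb
    print(f"~{left / desktops:.1f} GB per desktop before any spike headroom")
    # ~3.4 GB per desktop before any spike headroom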