Re: HW Raid vs. Multiple OSD

Hi,
thanks for the explanation, but...
Twisting the Ceph storage model the way you plan is not a good idea:
- You will lower the level of support you can get (I'm not sure many
people build such an architecture).
- You are almost certainly going to run into strange issues with Ceph
OSDs layered on top of HW RAID.
- You shouldn't go down to size=2. I know the trade-offs of size=3
(IOPS, usable space), but dropping to size=2 does not seem really safe.
- Your servers seem to have enough horsepower in terms of CPU, RAM and
disks, but you haven't told us about the Ceph replication network. At
least 10GbE, I hope.
- Your public network should be more than 1GbE too, far more.
- How will you export the VMs? A single KVM/Samba server? Native Ceph
clients using cephx auth?
- Roughly, with size=3 and 4 servers you get 4*8*2/3, i.e. about 21 TB
of usable space, or roughly 210 GB per VM for 100 VDI (see the sketch
below). Is that enough room to grow those VMs?
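
As a back-of-the-envelope sketch of that last point (using the 1.9 TB
drives listed below rather than the 2 TB rounding above; the 100-VDI
figure is also taken from this thread):

    # Rough usable-capacity estimate: raw capacity divided by the replica
    # count, ignoring the free-space headroom a Ceph cluster should keep.
    servers = 4
    drives_per_server = 8
    drive_tb = 1.9              # 8 x 1.9 TB SSDs per server (see below)
    replica_size = 3            # pool size; compare with size=2
    vdi_count = 100

    raw_tb = servers * drives_per_server * drive_tb
    usable_tb = raw_tb / replica_size
    per_vdi_gb = usable_tb * 1000 / vdi_count
    print(f"raw={raw_tb:.1f} TB, usable={usable_tb:.1f} TB, "
          f"per VDI={per_vdi_gb:.0f} GB")
    # size=3 -> ~20 TB usable, ~200 GB per VDI
    # size=2 -> ~30 TB usable, ~300 GB per VDI (but less safe, as noted)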


In conclusion, I fully understand the difficulty of building a complete
test lab before buying a complete cluster, but you should still do a few
tests before tweaking the solution to your needs.

Good luck
Best regards


On 14/11/2017 at 11:36, Oscar Segarra wrote:
> Hi Anthony,
> 
> 
> o I think you might have some misunderstandings about how Ceph works. 
> Ceph is best deployed as a single cluster spanning multiple servers,
> generally at least 3.  Is that your plan?   
> 
> I want to deploy servers for 100 Windows 10 VDI each (at least 3 servers).
> I plan to sell servers depending on the number of VDI required by my
> customer. For 100 VDI --> 3 servers, for 400 VDI --> 4 servers.
> 
> This is my proposal of configuration:
> 
> *Server1:*
> CPU: 2x16 Core
> RAM: 512
> Disk: 2x400 for OS and 8x1.9TB for VM (SSD)
> 
> *Server2:*
> CPU: 2x16 Core
> RAM: 512
> Disk: 2x400 for OS and 8x1.9TB for VM (SSD)
> 
> *Server3:*
> CPU: 2x16 Core
> RAM: 512
> Disk: 2x400 for OS and 8x1.9TB for VM (SSD)
> 
> *Server4:*
> CPU: 2x16 Core
> RAM: 512
> Disk: 2x400 for OS and 8x1.9TB for VM (SSD)
> ...
> *ServerN:*
> CPU: 2x16 Core
> RAM: 512
> Disk: 2x400 for OS and 8x1.9TB for VM (SSD)
> 
> If I create an OSD for each disk and pin a core to each OSD in a
> server, I will need 8 cores just for managing OSDs. If I create 4 RAID0
> volumes of 2 disks each, I will need just 4 OSDs, and so on:
> 
> 1 osd x 1 disk of 4TB
> 1 osd x 2 disks of 2TB
> 1 osd x 4 disks of 1 TB
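
(For illustration only: the core arithmetic behind these layouts,
assuming the one-pinned-core-per-OSD approach described above and the
2x16-core servers listed above. Ceph does not strictly require core
pinning; treat this purely as a sizing sketch.)

    # Cores left for VDI guests per server under each proposed OSD layout,
    # assuming 2 x 16 = 32 cores and one pinned core per OSD.
    total_cores = 32
    layouts = {
        "1 OSD per disk (8 OSDs)": 8,
        "RAID0 of 2 disks (4 OSDs)": 4,
        "RAID0 of 4 disks (2 OSDs)": 2,
    }
    for name, osds in layouts.items():
        print(f"{name}: {total_cores - osds} cores left for guests")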
> 
> If the CPU cycles used by Ceph are a problem, your architecture has IMHO
> bigger problems.  You need to design for a safety margin of RAM and CPU
> to accommodate spikes in usage, both by Ceph and by your desktops. 
> There is no way each of the systems you describe is going to have enough
> cycles for 100 desktops concurrently active.  You'd be allocating each
> of them only ~3GB of RAM -- I've not had to run MS Windows 10 but even
> with page sharing that seems awfully tight on RAM.
> 
> Sorry, I think my design was not explained correctly; I hope my
> previous explanation clarifies it. The problem is that I'm in the design
> phase and I don't know whether Ceph CPU cycles will be a problem, which
> is the main point of this post.
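
(A minimal per-server budget sketch may help frame that question. Only
the 512 GB / 8-drive / 100-desktop figures come from this thread; the
~4 GB of RAM per OSD and the host overhead are assumed rules of thumb,
and CPU needs per desktop depend entirely on the workload.)

    # Rough per-server RAM budget for a converged Ceph + VDI node.
    ram_gb = 512
    osds = 8
    ram_per_osd_gb = 4            # assumed reservation per OSD
    host_overhead_gb = 16         # assumed OS + KVM/libvirt overhead
    desktops = 100

    left_for_guests = ram_gb - osds * ram_per_osd_gb - host_overhead_gb
    print(f"{left_for_guests} GB left for guests -> "
          f"{left_for_guests / desktops:.1f} GB per desktop")
    # -> 464 GB left for guests -> 4.6 GB per desktop, before leaving any
    #    headroom for recovery/backfill or usage spikes.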
> 
> With the numbers you mention throughout the thread, it would seem as
> though you would end up with potentially as little as 80GB of usable
> space per virtual desktop - will that meet your needs?
> 
> Sorry, I think 80GB is enough; nevertheless, I plan to use RBD clones,
> so even with size=2 I think I will have more than 80GB
> available for each VDI.
> 
> In this design phase where I am, every advice is really welcome!
> 
> Thanks a lot
> 
> 2017-11-13 23:40 GMT+01:00 Anthony D'Atri <aad@xxxxxxxxxxxxxx>:
> 
>     Oscar, a few thoughts:
> 
>     o I think you might have some misunderstandings about how Ceph
>     works.  Ceph is best deployed as a single cluster spanning multiple
>     servers, generally at least 3.  Is that your plan?  It sort of
>     sounds as though you're thinking of Ceph managing only the drives
>     local to each of your converged VDI hosts, like local RAID would. 
>     Ceph doesn't work that way.  Well, technically it could but wouldn't
>     be a great architecture.  You would want to have at least 3 servers,
>     with all of the Ceph OSDs in a single cluster.
> 
>     o Re RAID0:
> 
>     > Then, may I understand that your advice is a RAID0 for each 4TB? For a
>     > balanced configuration...
>     >
>     > 1 osd x 1 disk of 4TB
>     > 1 osd x 2 disks of 2TB
>     > 1 osd x 4 disks of 1 TB
> 
> 
>     For performance a greater number of smaller drives is generally
>     going to be best.  VDI desktops are going to be fairly
>     latency-sensitive and you'd really do best with SSDs.  All those
>     desktops thrashing a small number of HDDs is not going to deliver
>     tolerable performance.
> 
>     Don't use RAID at all for the OSDs.  Even if you get hardware RAID
>     HBAs, configure JBOD/passthrough mode so that OSDs are deployed
>     directly on the drives.  This will minimize latency as well as
>     manifold hassles that one adds when wrapping drives in HBA RAID volumes.
> 
>     o Re CPU:
> 
>     > The other question is about having one OSD vs 8 OSDs... will 8 OSDs
>     > consume more CPU than 1 OSD (RAID5)?
>     >
>     > As I want to share compute and osd in the same box, resources consumed by
>     > OSD can be a handicap.
> 
> 
>     If the CPU cycles used by Ceph are a problem, your architecture has
>     IMHO bigger problems.  You need to design for a safety margin of RAM
>     and CPU to accommodate spikes in usage, both by Ceph and by your
>     desktops.  There is no way each of the systems you describe is going
>     to have enough cycles for 100 desktops concurrently active.  You'd
>     be allocating each of them only ~3GB of RAM -- I've not had to run
>     MS Windows 10 but even with page sharing that seems awfully tight on
>     RAM.
> 
>     Since you mention ProLiant and 8 drives I'm going to assume you're
>     targeting the DL360?  I suggest if possible considering the 10SFF
>     models to get you more drive bays, ditching the optical drive.  If
>     you can get rear bays to use to boot the OS from, that's better yet
>     so you free up front panel drive bays for OSD use.  You want to
>     maximize the number of drive bays available for OSD use, and if at
>     all possible you want to avoid deploying the operating system's
>     filesystems and OSDs on the same drives.
> 
>     With the numbers you mention throughout the thread, it would seem as
>     though you would end up with potentially as little as 80GB of usable
>     space per virtual desktop - will that meet your needs?  One of the
>     difficulties with converged architectures is that storage and
>     compute don't necessarily scale at the same rate.  To that end I
>     suggest considering 2U 25-drive-bay systems so that you have room to
>     add more drives.
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



