Re: Ceph Sizing

I would suggest you forget about 15k disks; there probably isn't much point in using them over SSDs nowadays. For 10k disks, if cost is a key factor, I would maybe look at the WD Raptor disks.

In terms of the number of disks, it's very hard to calculate with the numbers you have provided. That simple formula is great if the IO load is constant, but what you will often find is that not all VMs will be doing 150 IOPS at once, so your actual total figure will be a lot lower.

But yes, with 3x replication you will need three times the disk IOPS for the writes. Without knowing your read/write split, I would imagine this is very hard to calculate.
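
To make that concrete, here is a rough back-of-the-envelope sketch in Python. It is purely illustrative: the 70/30 read/write split and the per-disk IOPS figure are assumptions on my part, it ignores journal double-writes, and it takes the 700 VM x 150 IOPS figures from your original mail plus the 0.88 factor from the Mirantis slides at face value.

    # Rough Ceph sizing sketch -- figures from this thread except where noted.
    vms            = 700     # number of VMs
    iops_per_vm    = 150     # worst case: every VM busy at the same time
    replica_count  = 3       # Ceph replication factor
    read_fraction  = 0.70    # ASSUMPTION: 70% reads / 30% writes
    write_fraction = 0.30
    disk_iops      = 150     # ASSUMPTION: 4k random IOPS of one 10k SAS disk
    efficiency     = 0.88    # 4-8k random read proportion from the Mirantis deck

    # Front-end IOPS the clients generate if every VM hits its peak at once.
    frontend_iops = vms * iops_per_vm

    # Reads are served from a single copy; each client write turns into
    # replica_count writes on the OSDs (journal double-writes ignored).
    backend_iops = frontend_iops * (read_fraction + write_fraction * replica_count)

    # Spinning disks needed to deliver that back-end load.
    disks_needed = backend_iops / (disk_iops * efficiency)

    print("Front-end IOPS: %d" % frontend_iops)        # 105000
    print("Back-end IOPS:  %d" % backend_iops)         # 168000
    print("Disks needed:   %d" % round(disks_needed))  # ~1273

The answer swings wildly with the read/write split and with how many VMs are actually busy at once, which is why real measurements are worth far more than any formula.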

Do you have any current systems running that would give you a rough idea of how much IO you might generate? Otherwise, other people with similarly sized VM workloads might be able to share example usage patterns.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Sam Huracan
> Sent: 03 December 2015 09:02
> To: Srinivasula Maram <Srinivasula.Maram@xxxxxxxxxxx>
> Cc: Nick Fisk <nick@xxxxxxxxxx>; ceph-users@xxxxxxxx
> Subject: Re:  Ceph Sizing
> 
> I'm following this presentation from the Mirantis team:
> http://www.slideshare.net/mirantis/ceph-talk-vancouver-20
> 
> They calculate CEPH IOPS = Disk IOPS * HDD Quantity * 0.88 (4-8k random
> read proportion)
> 
> And  VM IOPS = CEPH IOPS / VM Quantity
> 
> But if I use a replication factor of 3, would VM IOPS be divided by 3?
> 
> 2015-12-03 7:09 GMT+07:00 Sam Huracan <nowitzki.sammy@xxxxxxxxx>:
> IO size is 4 KB, and I need a minimum, cost-optimized sizing.
> I intend to use SuperMicro devices:
> http://www.supermicro.com/solutions/storage_Ceph.cfm
> 
> What do you think?
> 
> 2015-12-02 23:17 GMT+07:00 Srinivasula Maram
> <Srinivasula.Maram@xxxxxxxxxxx>:
> One more factor we need to consider here is the IO size (block size) behind the
> required IOPS; based on this we can calculate the bandwidth and design the
> solution.
> 
> Thanks
> Srinivas
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Nick Fisk
> Sent: Wednesday, December 02, 2015 9:28 PM
> To: 'Sam Huracan'; ceph-users@xxxxxxxx
> Subject: Re:  Ceph Sizing
> 
> You've left out an important factor... cost. Otherwise I would just say buy
> enough SSDs to cover the capacity.
> 
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> > Of Sam Huracan
> > Sent: 02 December 2015 15:46
> > To: ceph-users@xxxxxxxx
> > Subject:  Ceph Sizing
> >
> > Hi,
> > I'm building the storage for an OpenStack cloud system. Input:
> > - 700 VMs
> > - 150 IOPS per VM
> > - 20 storage per VM (boot volume)
> > - Some VMs run databases (SQL or MySQL)
> >
> > I want to ask for a sizing plan for Ceph to satisfy the IOPS requirement.
> > Some factors I am considering:
> > - Number of OSDs (SAS disks)
> > - Number of journals (SSDs)
> > - Number of OSD servers
> > - Number of MON servers
> > - Network
> > - Replica count (default is 3)
> >
> > I will divide into 3 pools with 3 disk types: SSD, SAS 15k and SAS 10k.
> > Should I use all 3 disk types in one server or build dedicated servers
> > for each pool? Example: 3 15k servers for Pool-1, 3 10k servers for Pool-2.
> >
> > Could you help me with a formula to calculate the minimum number of devices
> > needed for the above input?
> >
> > Thanks and regards.
> 






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


