I'm following this presentation from the Mirantis team:
http://www.slideshare.net/mirantis/ceph-talk-vancouver-20
They calculate CEPH IOPS = Disk IOPS * HDD Quantity * 0.88 (4-8k random read proportion)
And VM IOPS = CEPH IOPS / VM Quantity
But if I use a replication factor of 3, would the VM IOPS be divided by 3?
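To make the arithmetic concrete, here is a minimal sketch of that estimate in Python (the per-disk IOPS and spindle count below are placeholder assumptions, not figures from the slides); my understanding is that the replica count mainly penalises writes, since reads are served from the primary OSD:

# Mirantis-style IOPS estimate (placeholder numbers, adjust to your hardware)
DISK_IOPS = 150            # assumed 4-8k random IOPS of one SAS spindle
HDD_COUNT = 100            # assumed total number of OSD spindles
RANDOM_FACTOR = 0.88       # 4-8k random read proportion from the slides
REPLICA = 3
VM_COUNT = 700

ceph_iops = DISK_IOPS * HDD_COUNT * RANDOM_FACTOR

# Reads are not divided by the replica count; every client write is
# written REPLICA times, so aggregate write capacity drops by roughly
# that factor (filestore journal double-writes make it worse in practice).
read_iops_per_vm = ceph_iops / VM_COUNT
write_iops_per_vm = ceph_iops / REPLICA / VM_COUNT

print("cluster IOPS ~ %.0f" % ceph_iops)
print("per-VM read ~ %.1f, write ~ %.1f" % (read_iops_per_vm, write_iops_per_vm))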
2015-12-03 7:09 GMT+07:00 Sam Huracan <nowitzki.sammy@xxxxxxxxx>:
IO size is 4 KB, and I need a minimum sizing, cost optimized. I intend to use SuperMicro devices. What do you think?

2015-12-02 23:17 GMT+07:00 Srinivasula Maram <Srinivasula.Maram@xxxxxxxxxxx>:

One more factor we need to consider here is IO size (block size) to get the required IOPS; based on that we can calculate the bandwidth and design the solution.
Thanks
Srinivas
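To illustrate that point about IO size, here is a quick back-of-the-envelope bandwidth calculation in Python using the numbers quoted in this thread (illustrative only; it assumes every IO is 4 KB):

# Required bandwidth = IOPS x IO size (numbers taken from this thread)
vm_count = 700
iops_per_vm = 150
io_size_bytes = 4 * 1024       # 4 KB

total_iops = vm_count * iops_per_vm                    # 105000
bandwidth_mb_s = total_iops * io_size_bytes / 1e6      # ~430 MB/s

print("total IOPS: %d" % total_iops)
print("required bandwidth: ~%.0f MB/s" % bandwidth_mb_s)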
-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Nick Fisk
Sent: Wednesday, December 02, 2015 9:28 PM
To: 'Sam Huracan'; ceph-users@xxxxxxxx
Subject: Re: Ceph Sizing
You've left out an important factor: cost. Otherwise I would just say buy enough SSDs to cover the capacity.
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> Of Sam Huracan
> Sent: 02 December 2015 15:46
> To: ceph-users@xxxxxxxx
> Subject: Ceph Sizing
>
> Hi,
> I'm building a storage system for an OpenStack cloud. Inputs:
> - 700 VMs
> - 150 IOPS per VM
> - 20 GB of storage per VM (boot volume)
> - Some VMs run databases (SQL or MySQL)
>
> I would like a sizing plan for Ceph that satisfies the IOPS requirement.
> Here are some factors I am considering:
> - Number of OSDs (SAS disks)
> - Number of journals (SSDs)
> - Number of OSD servers
> - Number of MON servers
> - Network
> - Replica count (default is 3)
>
> I will divide into 3 pools with 3 disk types: SSD, SAS 15k, and SAS 10k.
> Should I use all 3 disk types in one server, or build dedicated servers
> for each pool? For example: 3 SAS 15k servers for Pool-1 and 3 SAS 10k servers for Pool-2.
>
> Could you help me with a formula to calculate the minimum number of devices
> needed for the above input?
>
> Thanks and regards.
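As a rough sketch of the kind of formula being asked for above, one way is to invert the Mirantis-style estimate and weight writes by the replica count. All the per-disk and workload figures below are assumptions to be replaced with measured values for the actual hardware:

import math

# Inputs from the thread
VM_COUNT = 700
IOPS_PER_VM = 150
REPLICA = 3

# Assumptions, not measurements
WRITE_RATIO = 0.3          # assumed write share of the workload
DISK_IOPS = 175            # assumed 4k random IOPS of one SAS 15k spindle
RANDOM_FACTOR = 0.88       # random IO proportion from the Mirantis slides

required_iops = VM_COUNT * IOPS_PER_VM
# Each client write lands on REPLICA OSDs, so weight writes accordingly.
backend_iops = required_iops * ((1 - WRITE_RATIO) + WRITE_RATIO * REPLICA)

min_osds = math.ceil(backend_iops / (DISK_IOPS * RANDOM_FACTOR))
print("backend IOPS needed ~ %.0f" % backend_iops)
print("minimum OSD spindles ~ %d" % min_osds)

SSD journals and an all-SSD pool change the per-disk figure substantially, so the same formula would be applied per pool with that pool's disk IOPS.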
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com