Re: Ceph Sizing

I would be a lot more conservative in terms of what a spinning drive can
do. The Mirantis presentation has pretty high expectations of a spinning
drive, as it largely ignores latency until the last few slides. Look at
the max latencies for anything above a queue depth of 1 on a spinning
drive.

If you factor in a latency requirement, the capability of the drives falls
dramatically. You might be able to offset this by using NVMe or something
similar as a cache layer between the journal and the OSD, using bcache, LVM
cache, etc. In much of the performance testing that we've done, the average
isn't too bad, but 90th-percentile numbers tend to be quite bad. Part of
that is probably from PGs being locked during a flush, and the other part
is just the nature of spinning drives.

I'd try to get a handle on expected workloads before picking the gear, but
if you have to pick before that, SSD if you have the budget :) You can
offset it a little by using erasure coding for the RGW portion, or using
spinning drives for that.
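(To make that capacity trade-off concrete: replica 3 stores 3x the raw
data, while an erasure-coded profile such as 4+2 stores 1.5x, so an EC RGW
pool needs roughly half the raw capacity of a replica-3 one. The 4+2
profile here is just an illustrative assumption, not a recommendation.)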

I think picking gear for Ceph is tougher than running an actual cluster :)
Best of luck. I think you're still starting with better, and more, info
than some of us did years ago.

Warren Wang




From:  Sam Huracan <nowitzki.sammy@xxxxxxxxx>
Date:  Thursday, December 3, 2015 at 4:01 AM
To:  Srinivasula Maram <Srinivasula.Maram@xxxxxxxxxxx>
Cc:  Nick Fisk <nick@xxxxxxxxxx>, "ceph-users@xxxxxxxx"
<ceph-users@xxxxxxxx>
Subject:  Re:  Ceph Sizing


I'm following this presentation from the Mirantis team:
http://www.slideshare.net/mirantis/ceph-talk-vancouver-20

They calculate Ceph IOPS = Disk IOPS * HDD Quantity * 0.88 (0.88 being the
4-8 KB random-read proportion),

and VM IOPS = Ceph IOPS / VM Quantity.

But if I use a replication factor of 3, would VM IOPS be divided by 3?
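A hedged, editorial sketch of that arithmetic (not from the slides): the
replica-3 penalty applies only to writes, since reads are served from the
primary OSD, so the read/write mix below is an assumption.

    # Sketch of the Mirantis-style estimate with replication factored in.
    # Assumptions (mine, not from the slides): each client write costs one
    # backend IO per replica, reads hit only the primary OSD, and the
    # workload is a guessed 70/30 read/write mix.

    def ceph_iops(disk_iops, hdd_count, random_factor=0.88):
        """Aggregate raw IOPS across all OSD disks (0.88 from the slides)."""
        return disk_iops * hdd_count * random_factor

    def vm_iops(cluster_iops, vm_count, replicas=3, read_ratio=0.7):
        """Per-VM client IOPS once replica writes are accounted for."""
        write_ratio = 1.0 - read_ratio
        # Each client read costs 1 backend IO; each client write costs `replicas`.
        backend_cost = read_ratio + write_ratio * replicas
        return cluster_iops / backend_cost / vm_count

    raw = ceph_iops(disk_iops=150, hdd_count=100)   # e.g. 100 spinning drives
    print(f"per-VM IOPS for 700 VMs: {vm_iops(raw, 700):.1f}")   # ~11.8

So with replica 3 the divisor is not a flat 3 unless the workload is pure
writes; at a 70/30 mix each client IO costs about 1.6 backend IOs under
these assumptions.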


2015-12-03 7:09 GMT+07:00 Sam Huracan <nowitzki.sammy@xxxxxxxxx>:

IO size is 4 KB, and I need a minimum sizing, cost-optimized.
I intend to use SuperMicro devices:
http://www.supermicro.com/solutions/storage_Ceph.cfm


What do you think?


2015-12-02 23:17 GMT+07:00 Srinivasula Maram
<Srinivasula.Maram@xxxxxxxxxxx>:

One more factor we need to consider here is IO size (block size) alongside
the required IOPS; based on this we can calculate the bandwidth and design
the solution.
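(A rough worked example using the requirements quoted below: 700 VMs * 150
IOPS = 105,000 client IOPS, and at a 4 KB block size that is about
105,000 * 4 KiB ≈ 410 MiB/s of aggregate client bandwidth, before
replication multiplies the backend writes.)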

Thanks
Srinivas

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Nick Fisk
Sent: Wednesday, December 02, 2015 9:28 PM
To: 'Sam Huracan'; ceph-users@xxxxxxxx
Subject: Re:  Ceph Sizing

You've left out an important factor... cost. Otherwise I would just say
buy enough SSDs to cover the capacity.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
> Of Sam Huracan
> Sent: 02 December 2015 15:46
> To: ceph-users@xxxxxxxx
> Subject:  Ceph Sizing
>
> Hi,
> I'm building a storage system for an OpenStack cloud. Input:
> - 700 VMs
> - 150 IOPS per VM
> - 20 GB storage per VM (boot volume)
> - Some VMs run databases (SQL or MySQL)
>
> I want to ask for a sizing plan for Ceph to satisfy the IOPS requirement.
> These are the factors I'm considering:
> - Number of OSDs (SAS disks)
> - Number of journals (SSDs)
> - Number of OSD servers
> - Number of MON servers
> - Network
> - Replica count (default is 3)
>
> I will divide into 3 pools with 3 disk types: SSD, SAS 15k, and SAS 10k.
> Should I use all 3 disk types in one server, or build dedicated servers
> for each pool? For example: three 15k servers for Pool-1, three 10k
> servers for Pool-2.
>
> Could you suggest a formula to calculate the minimum number of devices
> needed for the above input? (See the sketch below.)
>
> Thanks and regards.
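As an editorial sketch of such a formula, inverting the Mirantis-style
estimate from earlier in the thread (the per-disk IOPS figures and the
70/30 read/write mix are assumptions, not measurements):

    import math

    # Minimum spinning drives for 700 VMs at 150 IOPS each, assuming the
    # 0.88 random-I/O factor, replica-3 writes, and a 70/30 read/write mix.
    def min_hdds(required_client_iops, disk_iops, replicas=3,
                 read_ratio=0.7, random_factor=0.88):
        write_ratio = 1.0 - read_ratio
        # Client writes cost `replicas` backend IOs; reads cost one.
        backend_iops = required_client_iops * (read_ratio + write_ratio * replicas)
        return math.ceil(backend_iops / (disk_iops * random_factor))

    required = 700 * 150  # 105,000 client IOPS from the mail above
    for label, per_disk_iops in [("SAS 10k", 140), ("SAS 15k", 180)]:
        print(f"{label}: >= {min_hdds(required, per_disk_iops)} drives")

With guessed per-disk figures of 140 and 180 IOPS, this lands above a
thousand drives either way, which illustrates the point made earlier in the
thread: for this IOPS target, SSDs (or at least an SSD/NVMe cache tier) are
the realistic option.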