Re: Guidelines for Calculating IOPS?

On Fri, Oct 19, 2012 at 12:07 PM, Mike Dawson <mdawson@xxxxxxxxxxxxx> wrote:
> I don't know if this use case falls into buffered sequential IOPS or
> random IOPS territory. I probably need to learn more about the NVR
> software.

Video cameras are almost certainly not doing synchronous writes, which
means the data would be buffered and coalesced. You need to handle
~43MB/s of writes.
(At least, if that's not the case, then either something very strange
is happening or it's shockingly poorly written software.)
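
As a rough sanity check of what 43MB/s means in spindle terms, here's a
small Python sketch; the ~100MB/s sequential figure for a 7200rpm drive
is my assumption, not something from this thread:

    import math

    write_mb_s = 43          # aggregate NVR write load from the thread
    drive_seq_mb_s = 100     # assumed 7200rpm sequential throughput
    for repl in (2, 3):
        drives = math.ceil(write_mb_s * repl / drive_seq_mb_s)
        print("%dx replication: ~%d drive(s) of sequential bandwidth"
              % (repl, drives))

If the writes really do get coalesced, bandwidth is trivial; the sizing
question is whether the I/O pattern stays sequential.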

> What role would putting the journal on an SSD play here?

Not a ton: journals can absorb spikes in workload, but everything
still needs to get written out to the main data store as individual
ops, so your long-term IOPS and throughput are capped at what your
main storage can provide (modulo replication, of course).
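
To make "absorb spikes" concrete, here's a toy model (all three numbers
are hypothetical, picked only to illustrate the shape of the problem):

    # An SSD journal soaks up a burst only until it fills; after that
    # you fall back to the speed of the backing data drives.
    journal_gb = 10       # hypothetical journal partition size
    burst_mb_s = 400      # hypothetical incoming burst rate
    drain_mb_s = 100      # hypothetical sustained rate of the data disk
    seconds = journal_gb * 1024 / (burst_mb_s - drain_mb_s)
    print("journal absorbs this burst for ~%.0f seconds" % seconds)
    # ~34s with these numbers; sustained throughput is still 100MB/s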


>> My guess is that the IOPS may drive the number of
>> servers, and the drives per server will be the
>> capacity divided by the number of required servers.
>>
>> So how many IOPS can you get per server?
>
>
> I don't know the answer, but would love input here! Assuming I use 7200rpm
> drives and conservatively estimate 75 IOPS/drive, I think I need:
>
> 5250 / 75 * 3 = 210 drives for 3x replication
> 5250 / 75 * 2 = 140 drives for 2x replication
>
> Assuming that's a good starting point, then it's all about how to bring
> those drives online. Ignoring extra servers for spare capacity / failures,
> you could do:
>
> 18 12-drive servers
>  9 24-drive servers
>  6 36-drive servers
>
> Is there any reason to assume other bounding factors, like CPU, RAM, etc.,
> that would make more servers a better architecture?
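
Spelling out that arithmetic (a sketch; the 5250 IOPS target and 75
IOPS/drive estimate come from the quote above, and the chassis counts
just round up to whole servers):

    import math

    required_iops = 5250      # write IOPS target from earlier in the thread
    iops_per_drive = 75       # conservative 7200rpm estimate
    base = math.ceil(required_iops / iops_per_drive)   # 70 drives of raw IOPS
    for repl in (2, 3):
        print("%dx replication: %d drives" % (repl, base * repl))
    for per_server in (12, 24, 36):
        servers = math.ceil(base * 3 / per_server)     # 3x case: 210 drives
        print("%2d servers of %2d drives" % (servers, per_server))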

Well, each OSD takes up some memory and CPU for its own processes. We
generally recommend ~1GHz of modern CPU and 1GB of RAM per process;
these are ballpark estimates, but they seem to hold up in practice. The
memory mostly goes to page cache (the daemon itself generally sits at
~100MB, last I checked), but it can balloon under certain
failure-handling scenarios.
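
Applying that ~1GHz/1GB-per-OSD rule of thumb to the three layouts above
(rough planning numbers only; real needs vary with recovery load):

    # Per-host resource budget if every drive runs one OSD daemon.
    for servers, drives in ((18, 12), (9, 24), (6, 36)):
        print("%2d x %2d-drive hosts: ~%dGHz CPU and ~%dGB RAM per host"
              % (servers, drives, drives, drives))

Denser chassis need proportionally beefier hosts, which is worth weighing
against the per-server overhead of more, smaller boxes.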
-Greg