On 07/02/15 17:53, Steffen Tilsch wrote:
> Hello Cephers,
>
> Whenever I read about HDDs for OSDs it is said that "they will deliver
> around 130 IOPS".
> Where does this number come from and how was it measured (random/seq,
> how big were the IOs, which queue depth, at what latency)? Or is it more
> a general number depending on disk seek times?

7200 rpm drives do 120 rotations per second. In theory, if the drive could
move its head at infinite speed and an IO were limited to one track length
(which should be around 1 MB on recent hardware), then even in the worst
case the drive could do 120 IOPS.

In practice, off the top of my head:
- the drive can't move its head at infinite speed, so it may have to wait
  for an additional rotation to reach the zone it wants to read or write,
- some drives (probably most of them now, as it's mandatory to get real
  benefits from NCQ/TCQ) can make very fast short seeks, which allows
  multiple reads/writes from tracks close to one another during a single
  rotation,
- the OS or controller can coalesce multiple sequential IOs into one,
- NCQ/TCQ can reorder accesses to minimize the total time wasted seeking
  and waiting for the platters to rotate.

So the actual reachable number of IOPS is difficult to estimate, but it
should be close to 120 IOPS or even higher, because there are more
optimizations that can push this number up than conditions that pull it
down.

So it's a rule of thumb for computing the capacity for random accesses. In
practice accesses are rarely completely random, so this should be used as
a first step when designing a storage system and you lack concrete
experience of the expected workload and the behavior of the whole storage
stack. Once you have running hardware you can get more realistic numbers,
for example with dstat (dstat --disk-util shows an estimate of per-disk IO
utilization) or by carefully studying IO waits.

Lionel
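
P.S. As a rough illustration of the arithmetic above, here is a minimal
Python sketch. Only the 7200 rpm figure comes from the reasoning above;
the 8.5 ms average seek time is an assumed typical value, purely for
illustration:

    # Back-of-the-envelope HDD IOPS estimates for a 7200 rpm drive.

    rpm = 7200
    rotations_per_second = rpm / 60              # 120

    # Upper bound from the reasoning above: with an infinitely fast head,
    # the worst case is waiting one full rotation per IO.
    rotation_ms = 1000 / rotations_per_second    # ~8.33 ms
    iops_upper_bound = 1000 / rotation_ms        # 120

    # A more pessimistic single-request estimate adds an average seek
    # plus half a rotation of latency (avg_seek_ms is an assumed value).
    avg_seek_ms = 8.5
    service_time_ms = avg_seek_ms + rotation_ms / 2
    iops_pessimistic = 1000 / service_time_ms    # ~79

    print(f"rotations/s:      {rotations_per_second:.0f}")
    print(f"upper-bound IOPS: {iops_upper_bound:.0f}")
    print(f"pessimistic IOPS: {iops_pessimistic:.0f}")

With NCQ/TCQ, short seeks and some coalescing, real drives tend to land
between those two numbers, which is roughly where the usual 100-130 IOPS
rule of thumb comes from.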