Experience with 5k RPM/archive HDDs

Hi,

don't go there. We tried this with SMR drives, which slow down to
somewhere around 2-3 IOPS during backfilling/recovery, and that renders
the cluster useless for client I/O. Things might change in the future,
but for now I would strongly recommend against SMR.
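
To put that 2-3 IOPS figure in perspective, here is a rough back-of-the-envelope estimate of how long backfilling a failed OSD could take. Only the IOPS number comes from this thread; the object size, drive capacity, and fill ratio are illustrative assumptions:

```python
# Rough backfill-time estimate for a failed SMR OSD.
# Only the 2-3 IOPS figure comes from the thread; everything else
# (object size, drive size, fill ratio) is an illustrative assumption.
object_size_mb = 4        # typical RADOS object size for bulk data
smr_write_iops = 2.5      # midpoint of the 2-3 IOPS seen during backfill
drive_capacity_tb = 8     # assumed archive-class drive
fill_ratio = 0.7          # assumed fullness at failure time

backfill_mb_s = object_size_mb * smr_write_iops           # ~10 MB/s
data_mb = drive_capacity_tb * 1_000_000 * fill_ratio      # 5.6e6 MB
days = data_mb / backfill_mb_s / 86_400
print(f"~{days:.1f} days to backfill one failed OSD")     # ~6.5 days
```

Even if the recovery load is spread across several surviving SMR OSDs, you are still looking at days of degraded PGs rather than hours.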

Go for normal SATA drives, which have only slightly higher price/capacity ratios.

- mike

On 2/3/17 2:46 PM, Stillwell, Bryan J wrote:
> On 2/3/17, 3:23 AM, "ceph-users on behalf of Wido den Hollander"
> <ceph-users-bounces at lists.ceph.com on behalf of wido at 42on.com> wrote:
> 
>>
>>> Op 3 februari 2017 om 11:03 schreef Maxime Guyot
>>> <Maxime.Guyot at elits.com>:
>>>
>>>
>>> Hi,
>>>
>>> Interesting feedback!
>>>
>>>   > In my opinion the SMR can be used exclusively for the RGW.
>>>   > Unless it's something like a backup/archive cluster or pool with
>>> little to none concurrent R/W access, you're likely to run out of IOPS
>>> (again) long before filling these monsters up.
>>>
>>> That's exactly the use case I am considering those archive HDDs for:
>>> something like AWS Glacier, a form of offsite backup, probably via
>>> radosgw. The classic Seagate enterprise-class HDDs provide "too much"
>>> performance for this use case; I could live with 1/4 of the performance
>>> at that price point.
>>>
>>
>> If you go down that route, I suggest making a mixed cluster for RGW.
>>
>> A (small) set of OSDs running on top of proper SSDs, e.g. Samsung SM863,
>> PM863, or an Intel DC series.
>>
>> All pools by default should go to those OSDs.
>>
>> Only the RGW bucket data pool should go to the big SMR drives. However,
>> again, expect very, very low performance from those disks.
> 
> One of the other concerns you should think about is recovery time when one
> of these drives fails.  The more OSDs you have, the less of an issue this
> becomes, but on a small cluster it might take over a day to fully recover
> from an OSD failure, which is a long time to have degraded PGs.
> 
> Bryan
> 
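
For reference, the split Wido describes (all pools on SSD-backed OSDs, only the RGW bucket data pool on the big drives) can be sketched with CRUSH device classes. Device classes landed in Luminous, after this thread; on Jewel-era clusters the same split required hand-editing the CRUSH map. Rule names here are illustrative, and the pool names assume the default RGW zone layout:

```shell
# One CRUSH rule per device class; OSDs report their class automatically.
ceph osd crush rule create-replicated fast-ssd default host ssd
ceph osd crush rule create-replicated slow-smr default host hdd

# Keep the metadata-heavy pools on the SSD OSDs...
ceph osd pool set default.rgw.control       crush_rule fast-ssd
ceph osd pool set default.rgw.meta          crush_rule fast-ssd
ceph osd pool set default.rgw.log           crush_rule fast-ssd
ceph osd pool set default.rgw.buckets.index crush_rule fast-ssd

# ...while only the bulk object data lands on the SMR drives.
ceph osd pool set default.rgw.buckets.data  crush_rule slow-smr
```

Keeping the bucket index on SSDs matters as much as the data placement, since index updates are small random writes, exactly the workload SMR handles worst.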

