On 2/3/17, 3:23 AM, "ceph-users on behalf of Wido den Hollander" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of wido@xxxxxxxx> wrote:

>
>> On 3 February 2017 at 11:03, Maxime Guyot <Maxime.Guyot@xxxxxxxxx> wrote:
>>
>>
>> Hi,
>>
>> Interesting feedback!
>>
>> > In my opinion the SMR can be used exclusively for the RGW.
>> > Unless it's something like a backup/archive cluster or pool with
>> > little to no concurrent R/W access, you're likely to run out of IOPS
>> > (again) long before filling these monsters up.
>>
>> That's exactly the use case I am considering those archive HDDs for:
>> something like AWS Glacier, a form of offsite backup, probably via
>> radosgw. The classic Seagate enterprise-class HDDs provide "too much"
>> performance for this use case; I could live with 1/4 of the performance
>> at that price point.
>>
>
> If you go down that route I suggest that you make a mixed cluster for RGW.
>
> A (small) set of OSDs running on top of proper SSDs, e.g. Samsung SM863
> or PM863, or an Intel DC series.
>
> All pools by default should go to those OSDs.
>
> Only the RGW bucket data pool should go to the big SMR drives. However,
> again, expect very, very low performance from those disks.

One of the other concerns you should think about is recovery time when one
of these drives fails. The more OSDs you have, the less of an issue this
becomes, but on a small cluster it might take over a day to fully recover
from an OSD failure, which is a decent amount of time to have degraded PGs.

Bryan
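
For anyone wanting to implement the split Wido describes (all pools on the
SSD OSDs, only bucket data on the SMR drives), a rough sketch using CRUSH
device classes could look like the following. This assumes a
Luminous-or-later cluster and the default RGW zone's pool names; the rule
names here are made up, and on older releases you would instead carve out a
separate CRUSH root for the SMR OSDs by editing the CRUSH map.

  # Two replicated CRUSH rules keyed on device class: one that picks
  # SSD OSDs, one that picks the (SMR) HDD OSDs.
  ceph osd crush rule create-replicated rgw-ssd default host ssd
  ceph osd crush rule create-replicated rgw-smr default host hdd

  # Keep the index and log pools on the SSD OSDs...
  ceph osd pool set default.rgw.buckets.index crush_rule rgw-ssd
  ceph osd pool set default.rgw.log crush_rule rgw-ssd

  # ...and send only the bucket data pool to the SMR drives.
  ceph osd pool set default.rgw.buckets.data crush_rule rgw-smr

The remaining RGW pools (.rgw.root, default.rgw.control, and so on) can be
pointed at the SSD rule the same way. If the cluster mixes SMR and
conventional HDDs, a custom device class (set via
"ceph osd crush set-device-class") would be needed to keep the two apart.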