Re: Experience with 5k RPM/archive HDDs

> On 3 February 2017 at 11:03, Maxime Guyot <Maxime.Guyot@xxxxxxxxx> wrote:
> 
> 
> Hi,
> 
> Interesting feedback!
> 
>  > In my opinion, SMR can be used exclusively for RGW.
>  > Unless it's something like a backup/archive cluster or pool with little to no concurrent R/W access, you're likely to run out of IOPS (again) long before filling these monsters up.
> 
> That’s exactly the use case I am considering those archive HDDs for: something like AWS Glacier, a form of offsite backup, probably via radosgw. The classic Seagate enterprise-class HDDs provide “too much” performance for this use case; I could live with ¼ of the performance at that price point.
> 

If you go down that route, I suggest building a mixed cluster for RGW.

A (small) set of OSDs running on top of proper SSDs, e.g. Samsung SM863 or PM863, or an Intel DC series.

By default, all pools should go to those SSD-backed OSDs.

Only the RGW bucket data pool should go to the big SMR drives. However, again, expect very, very low performance from those disks.
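
A minimal sketch of how that split could look, assuming a dedicated CRUSH root for the SMR-backed OSDs and the default Jewel RGW pool names (the bucket, host and rule names here are illustrative, and the rule id has to be looked up on your own cluster):

  # create a separate CRUSH root and move the SMR hosts under it
  ceph osd crush add-bucket archive root
  ceph osd crush move smr-node1 root=archive

  # a rule that only selects OSDs under the "archive" root
  ceph osd crush rule create-simple archive_rule archive host

  # point only the RGW bucket data pool at the SMR OSDs;
  # all other pools keep the default (SSD) rule
  ceph osd pool set default.rgw.buckets.data crush_ruleset <rule-id>

That way the RGW index and log pools, which see lots of small synchronous writes, stay on the SSD-backed OSDs.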

Wido

> Cheers,
> Maxime
> 
> On 03/02/17 09:40, "ceph-users on behalf of Wido den Hollander" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of wido@xxxxxxxx> wrote:
> 
>     
>     > On 3 February 2017 at 8:39, Christian Balzer <chibi@xxxxxxx> wrote:
>     > 
>     > 
>     > 
>     > Hello,
>     > 
>     > On Fri, 3 Feb 2017 10:30:28 +0300 Irek Fasikhov wrote:
>     > 
>     > > Hi, Maxime.
>     > > 
>     > > Linux SMR support only starts with kernel version 4.9.
>     > >
>     > What Irek said.
>     > 
>     > Also, SMR in general is probably a bad match for Ceph.
>     > Drives like that really want to be treated more like tape than anything
>     > else.
>     >  
>     
>     Yes, they are damn slow.
>     
>     > 
>     > In general, do you really need all this space, what's your use case?
>     > 
>     > Unless it's something like a backup/archive cluster or pool with little to
>     > no concurrent R/W access, you're likely to run out of IOPS (again) long
>     > before filling these monsters up.
>     > 
>     
>     I fully agree. These large disks have very low IOPS specs and will probably work very, very badly with Ceph.
>     
>     Wido
>     
>     > Christian
>     > > 
>     > > Best regards, Irek Nurgayazovich Fasikhov
>     > > Mobile: +79229045757
>     > > 
>     > > 2017-02-03 10:26 GMT+03:00 Maxime Guyot <Maxime.Guyot@xxxxxxxxx>:
>     > > 
>     > > > Hi everyone,
>     > > >
>     > > >
>     > > >
>     > > > I’m wondering if anyone on the ML is running a cluster with archive-type
>     > > > HDDs, like the HGST Ultrastar Archive (10TB@7.2k RPM) or the Seagate
>     > > > Enterprise Archive (8TB@5.9k RPM)?
>     > > >
>     > > > As far as I have read, they both fall into the enterprise-class HDD
>     > > > segment, so they **might** be suitable for a low-performance, low-cost cluster?
>     > > >
>     > > >
>     > > >
>     > > > Cheers,
>     > > >
>     > > > Maxime
>     > > >
>     > 
>     > 
>     > -- 
>     > Christian Balzer        Network/Systems Engineer                
>     > chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
>     > http://www.gol.com/
>     
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



