Re: Ceph with SSD and HDD mixed

Hello Mario,

in the end your workload defines which option(s) can be considered. The options offer different trade-offs between read/write performance and price, depending on your workload, e.g.:
 - distribution of reads vs. writes
 - size of the IO requests (4k operations or 4MB...)
 - "locality" of the IO operations (is there a small set of data that is heavily used while the rest sits more or less idle, or is all stored data used more or less equally?)
 - required bandwidth and latency
...
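
If you don't know these numbers yet, a short fio run against a test file (or an RBD image) gives a rough picture. This is only a minimal sketch; the file path, the 4G size and the 70/30 read/write mix are placeholders to replace with something representative of your own workload:

    # probe a mixed random read/write workload with 4k requests
    fio --name=workload-probe --filename=/mnt/test/fio.img --size=4G \
        --ioengine=libaio --direct=1 --rw=randrw --rwmixread=70 \
        --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting

Repeating the run with --bs=4M and --rw=write approximates large sequential transfers such as an image upload.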

Usually SSDs for the OSD journals in a replicated pool with size=3 is a setup that works reasonably well for most applications. But for an optimized setup you’ll have to analyze your requirements and then fit the setup to your needs (or hire someone who can help you).
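
A minimal sketch of that baseline, assuming ceph-deploy with FileStore OSDs (host and device names are placeholders; sdb is the data HDD, /dev/sdc1 an SSD partition for the journal):

    # create an OSD with its data on the HDD and its journal on the SSD partition
    ceph-deploy osd create node1:sdb:/dev/sdc1

    # keep three copies of every object, stay writable with two
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2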

greetings

Johannes

> On 22.07.2015 at 02:58, Mario Codeniera <mario.codeniera@xxxxxxxxx> wrote:
> 
> Hi Johannes,
> 
> Thanks for your reply.
> 
> I am new to this and have no idea how to set up the configuration or where to start, based on the 4 options mentioned.
> Hope you can expound on it further if possible.
> 
> Best regards,
> Mario
> 
> 
> 
> 
> 
> On Tue, Jul 21, 2015 at 2:44 PM, Johannes Formann <mlmail@xxxxxxxxxx> wrote:
> Hi,
> 
> > Can someone give some insights on whether it is possible to mix SSDs with HDDs on the OSDs?
> 
> you’ll have more or less four options (rough command sketches follow the list):
> 
> - SSDs for the journals of the OSD processes (the SSD must perform well on synchronous writes)
> - an SSD-only pool for "high performance" data
> - using SSDs for the primary copy (fast reads); this can be combined with the first option
> - using a cache pool, i.e. an SSD-only pool in front of the main disk pool
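> 
> Rough sketches of the last three, to give an idea where to look (pool names, rule names, the ruleset id and the OSD ids are placeholders; check the syntax against your Ceph release before using any of it):
> 
>     # 2) SSD-only pool: rule that selects only the ssd root (the SSD hosts/OSDs
>     #    still have to be moved under that root in the CRUSH map)
>     ceph osd crush add-bucket ssd root
>     ceph osd crush rule create-simple ssd-rule ssd host
>     ceph osd pool create ssd-pool 128 128
>     ceph osd pool set ssd-pool crush_ruleset 1      # 1 = ruleset id of ssd-rule
> 
>     # 3) primary copy on SSD: lower the primary affinity of the HDD OSDs so
>     #    reads prefer the SSD OSDs (needs mon osd allow primary affinity = true)
>     ceph osd primary-affinity osd.3 0.25
> 
>     # 4) writeback cache tier: SSD pool in front of the main disk pool
>     ceph osd tier add disk-pool ssd-pool
>     ceph osd tier cache-mode ssd-pool writeback
>     ceph osd tier set-overlay disk-pool ssd-pool
>     ceph osd pool set ssd-pool hit_set_type bloom
>     ceph osd pool set ssd-pool target_max_bytes 100000000000   # ~100GB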
> 
> > How can we speed up file uploads, for example? In our experience it took around 18 minutes to load a 20GB image (via Glance) on a 1Gb network. Or is that just normal?
> 
> That’s about 20MB/s (20GB in roughly 18 minutes works out to ~19MB/s); for (I guess) sequential writes on a disk-only cluster that’s OK. You can improve it with SSDs, but you have to choose the best option for your setup, depending on the expected workload.
> 
> greetings
> 
> Johannes
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



