Re: Is it possible to have One Ceph-OSD-Daemon managing more than one OSD


 



Hi Vikrant,

 

In the CRUSH map, you can assign certain pools to certain OSDs. For example, you could put the images pool on the OSDs backed by the HDDs and the volumes pool on the OSDs backed by the SSDs.

 

You can find a guide here: http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
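
As a rough illustration of that approach (it is what the guide above walks through in detail), the steps boil down to something like this; the rule numbers and pool names below are only examples:

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt: add an "ssd" root and an "hdd" root containing the
    # matching OSDs, plus one replication rule per root
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
    # point each pool at the rule for the disk type you want it on
    ceph osd pool set volumes crush_ruleset 3
    ceph osd pool set images crush_ruleset 4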

 

I also use Ceph as the backend storage for OpenStack, and I considered that solution when I built our cloud. However, I ended up using the SSDs as journals for the OSDs. With that setup, dd can write to CephFS at 460 MB/s with conv=fdatasync (I used a 12 GB file for the tests).
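
For reference, that figure comes from a plain dd run along these lines (the mount point is just an example path):

    # sequential write to a CephFS mount; conv=fdatasync forces the data to
    # disk before dd reports the rate, so the page cache doesn't skew it
    dd if=/dev/zero of=/mnt/cephfs/ddtest bs=1M count=12288 conv=fdatasync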

 

A little context might help: we received 10 servers (4 compute and 6 storage nodes). The storage nodes have 12 slots for SAS/SATA drives, and the SSDs are on PCIe (384 GB of SSD per node). The PCIe SSDs are special because udev sees them as 16 drives of 24 GB each. We use 2 of them in RAID 1 for the OS and the rest in RAID 0 for the journals. Why RAID 0? Because on their own they weren't that fast.
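
For what it's worth, pointing the OSD journals at that array is just a ceph.conf setting; a minimal sketch, assuming one journal partition per OSD labelled by OSD id (the partition-label path is hypothetical, not our exact layout):

    [osd]
    # $id expands to the OSD number, so each OSD finds its own journal
    # partition on the PCIe SSD RAID 0
    osd journal = /dev/disk/by-partlabel/journal-$id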

 

I hope that might help you choose the right solution for your needs.

 

Kind regards,

Alexandre Bécholey

 

From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Vikrant Verma
Sent: Friday, 14 February 2014 10:08
To: Kurt Bauer
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Is it possible to have One Ceph-OSD-Daemon managing more than one OSD

 

Hi All,

 

I was trying to define QoS on volumes in the OpenStack setup. The Ceph cluster is configured as the storage back-end for images and volumes.

 

As part of my experimentation I thought of clubbing a few disks (say HDDs) under one type of QoS and a few other disks (say SSDs) under another type of QoS.

But that configuration/design does not seem to be efficient, as you suggested, so I am now trying to put QoS on the volumes themselves.
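
For anyone following the thread, the volume-level QoS I am now looking at can be sketched with Cinder QoS specs roughly as follows (the names, limits and placeholder IDs are only examples):

    # QoS spec enforced on the hypervisor side (front-end), capping IOPS
    cinder qos-create fast-tier consumer=front-end read_iops_sec=2000 write_iops_sec=2000
    # attach it to a volume type; new volumes of that type get the limits
    cinder type-create fast
    cinder qos-associate <qos-spec-id> <volume-type-id>
    cinder create --volume-type fast --display-name test-vol 10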

 

Thanks for your suggestions.

 

Regards,

Vikrant

 

On Thu, Feb 13, 2014 at 7:28 PM, Kurt Bauer <kurt.bauer@xxxxxxxxxxxx> wrote:

Hi,


12 February 2014, 19:03

yes, I want to use multiple hard drives with a single OSD.

Is it possible to have it?

It's perfectly possible, but at the expense of redundancy, resilience and/or speed. You can use some RAID; then losing one hard drive (or more, depending on the RAID level) is not a big deal regarding redundancy, but it slows down the whole system while the RAID rebuilds after the faulty drive is replaced. If you do something like LVM to "pool" multiple physical disks, losing even one disk brings down the whole OSD.
On the contrary, using one OSD per physical disk gives you all the flexibility, redundancy and speed you want, so I wonder why you don't want to do that?
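
For completeness, the usual one-OSD-per-disk layout (optionally with a journal partition on an SSD) looks roughly like this with ceph-deploy; the host, disk and journal names are examples only:

    # one OSD per physical disk, each with its own SSD journal partition
    ceph-deploy osd create storage1:sdb:/dev/ssd1
    ceph-deploy osd create storage1:sdc:/dev/ssd2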

Best regards,
Kurt

 

Regards,

Vikrant




12 February 2014, 17:44

On 12/02/2014 12:28, Vikrant Verma wrote:
Hi All,
 
I have one quick question - 
 
Is it possible to have one Ceph OSD daemon managing more than one Object Storage Device in a Ceph cluster?
Hi,
 
Do you want to use multiple hard drives with a single OSD ?
 
Cheers
 
Regards,
Vikrant
 
 

12 February 2014, 12:28

Hi All,

 

I have one quick question - 

 

Is it possible to have one Ceph OSD daemon managing more than one Object Storage Device in a Ceph cluster?

 

Regards,

Vikrant


 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
