Re: Dell R515 performance and specification question

Barry, I have a similar setup and found that the 600GB 15K SAS drives work well. The 2TB 7200 RPM disks did not work as well because I'm not using SSDs for the journals; running the journal and the data on big, slow drives results in slow writes. All the big boys I've encountered are running SSDs.

Currently, I'm using two R515s with five 600GB drives in a RAID0x1 configuration on each host. I do not currently have a cluster network set up, but each OSD host has 8 bonded 1GbE NICs.
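
The bonds themselves are just plain Linux bonding; on a RHEL-style box the config looks roughly like the sketch below. Interface names, addresses, and the bond mode here are placeholders only, not my exact setup, and the mode you can use depends on what your switch supports:

# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative only)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-em1  (one of these per slave NIC)
DEVICE=em1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes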

Even with the fastest drives you can get for the R515, I am considering adding an SSD in the near future because of performance issues I've run into trying to run an Oracle VM on OpenStack Folsom. Not to make this sound all doom and gloom: I'm also running a website consisting of six VMs that doesn't have such heavy random read/write I/O, and it runs fine.

Here's a quick performance run with various block sizes on a host with one public 1GbE link and one 1GbE link on the same VLAN as the Ceph cluster. I'm using RBD writeback caching on this VM. To get that working, I had to hack the libvirt volume.py file in OpenStack and enable caching in the /etc/ceph/ceph.conf file on the host it runs on. I know nothing of OpenNebula, so I can't speak to what it can or cannot do, or how to enable writeback caching there.

The rbd caching settings for ceph.conf are documented here:
http://ceph.com/docs/master/rbd/rbd-config-ref/
Ex.
;Global Client Setting
[client]
rbd cache = true
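
The same page documents a few sizing knobs as well, if you want more control than just turning the cache on. The values below are examples only, not tuned recommendations:

;Example values only
[client]
rbd cache = true
; total cache size in bytes (32 MB here)
rbd cache size = 33554432
; dirty bytes allowed before writeback kicks in
rbd cache max dirty = 25165824
; maximum age in seconds of dirty data before it is flushed
rbd cache max dirty age = 1.0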


4K blocks:
[root@optog3 temp]# dd if=/dev/zero of=here bs=4k count=50k oflag=direct
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 7.09731 seconds, 29.5 MB/s
[root@optog3 temp]#

8K blocks:
[root@optog3 temp]# dd if=/dev/zero of=here bs=8192 count=50k oflag=direct
51200+0 records in
51200+0 records out
419430400 bytes (419 MB) copied, 7.36243 seconds, 57.0 MB/s
[root@optog3 temp]#


4MB blocks:
[root@optog3 temp]# dd if=/dev/zero of=here bs=4M count=500 oflag=direct
500+0 records in
500+0 records out
2097152000 bytes (2.1 GB) copied, 23.5803 seconds, 88.9 MB/s
[root@optog3 temp]#

1GB blocks:
[root@optog3 temp]# dd if=/dev/zero of=here bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 12.0053 seconds, 89.4 MB/s
[root@optog3 temp]#
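
Keep in mind that dd with oflag=direct is basically a sequential write test; the random I/O that hurts the Oracle VM won't show up in those numbers. If you want to see that side of it, an fio run along these lines works (parameters here are just a starting point, nothing tuned):

# 4K random writes, direct I/O, run inside the VM (example invocation)
fio --name=randwrite --rw=randwrite --bs=4k --direct=1 \
    --ioengine=libaio --size=1G --runtime=60 --group_reporting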



This article by Hastexo, which I wish I had seen before going to production, may help you greatly with this decision:

http://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals
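
If you do go the SSD route, pointing the journals at the SSD is just a ceph.conf setting; something along these lines (the device path is only an example, use whatever partitions you actually carve out on the SSD):

[osd]
; journal size in MB, mainly relevant for file-based journals (example value)
osd journal size = 10240

[osd.0]
; example path only -- a partition on the SSD
osd journal = /dev/sdg1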

Dave Spano
Optogenics
Systems Administrator



From: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, May 7, 2013 9:17:24 AM
Subject: Re: Dell R515 performance and specification question

On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
> Hi,
>
> I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
> which I intend to run in 3 x replication. I've opted for the following
> configuration;
>
> 2 x 6 core processors
> 32Gb RAM
> H700 controller (1Gb cache)
> 2 x SAS OS disks (in RAID1)
> 2 x 1Gb ethernet (bonded for cluster network)
> 2 x 1Gb ethernet (bonded for client network)
>
> and either 4 x 2Tb nearline SAS OSDs or 8 x 1Tb nearline SAS OSDs.

Hi Barry,

With so few disks and the inability to do 10GbE, you may want to
consider doing something like 5-6 R410s or R415s and just using the
on-board controller with a couple of SATA disks and 1 SSD for the
journal.  That should give you better aggregate performance since in
your case you can't use 10GbE.  It will also spread your OSDs across
more hosts for better redundancy and may not cost that much more per GB
since you won't need to use the H700 card if you are using an SSD for
journals.  It's not as dense as R515s or R720XDs can be when fully
loaded, but for small clusters with few disks I think it's a good
trade-off to get the added redundancy and avoid expander/controller
complications.

>
> At the moment I'm undecided on the OSDs, although I'm swaying towards
> the second option at the moment as it would give me more flexibility and
> the option of using some of the disks as journals.
>
> I'm intending to use this cluster to host the images for ~100 virtual
> machines, which will run on different hardware most likely be managed by
> OpenNebula.
>
> I'd be interested to hear from anyone running a similar configuration
> with a similar use case, especially people who have spent some time
> benchmarking a similar configuration and still have a copy of the results.
>
> I'd also welcome any comments or critique on the above specification.
> Purchases have to be made via Dell and 10Gb ethernet is out of the
> question at the moment.
>
> Cheers,
>
> Barry
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

