Re: Performance test on Ceph cluster

On Wed, Feb 22, 2012 at 01:39, madhusudhana
<madhusudhana.u.acharya@xxxxxxxxx> wrote:
> Hi
> I have finally configured a Ceph cluster with 8 nodes: 2 MDS
> servers, 3 monitors, and the remaining 3 nodes are OSDs. Each system
> has 2T SATA drives with 3 partitions: one for the root file
> system, one for the Ceph journal, and the rest of the space for the
> OSD. I was able to get 5.6T of space from the three OSD nodes.
>
> All the machines are of the same type (HP DL160 G7) with 48 of RAM
> and dual quad-core CPUs.
>
> I am using iozone to test performance against a NetApp filer.
> Below is the command I am using for the iozone test:
>
> /opt/iozone/bin/iozone -R -e -l 1 -u 1 -r 4096k -s 1024m -F /mnt/ceph-test/ceph.iozone

1. Make sure you have only 1 active MDS; multi-MDS is an extra
complication you're better off skipping right now.
2. What underlying filesystem are you using for the OSDs?
3. What Linux kernel version is running on the OSDs?
4. What machine is the client running iozone? How is it connected to
the others? Is it using the kernel client or FUSE?
5. What Linux kernel version is running on the client?
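For reference, the answers to questions 2-5 can be gathered with a few
standard commands on each node. This is just a sketch; the OSD data
path below is an assumption, so substitute your actual OSD data
directory:

```shell
# Q3/Q5: kernel version on this node
uname -r

# Q2: filesystem type backing the OSD data directory
# (/var/lib/ceph/osd is an assumed path -- adjust to your layout;
#  falls back to the root filesystem if the path doesn't exist)
df -T /var/lib/ceph/osd 2>/dev/null || df -T /

# Q4: on the client, check whether the mount is the kernel client
# (type "ceph") or FUSE (type "fuse.ceph")
mount | grep -E 'type (ceph|fuse\.ceph)' || echo "no ceph mount found"
```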

Going forward, in a setup like that you could put OSDs on the machines
that run ceph-mon; ceph-mon is very lightweight and doesn't need
dedicated machines in a small setup.
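As a rough sketch, colocating a monitor and an OSD on one host could
look like the following ceph.conf fragment; the hostname, address, and
device paths here are placeholders, not values from your cluster:

```ini
[mon.a]
    host = node1
    mon addr = 192.168.0.1:6789

[osd.0]
    host = node1
    ; journal on its own partition, as in the layout described above
    osd journal = /dev/sda2
    osd data = /var/lib/ceph/osd/osd.0
```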
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

