Re: low performance of ceph, why ?


On 07/26/2012 12:15 PM, 马四 wrote:
Hi, Mark
  Thanks for your help. May I know some details about your cluster that
could support 250 000 metadata operations per second as described in the
paper abstract: 
"Performance measurements under a variety of workloads show that Ceph has
excellent I/O performance and scalable metadata management, supporting
more than 250,000 metadata operations per second."
How many mdses and osds are there in the cluster?

It looks like the 250,000 number likely came from the openssh+include and openssh+lib operations/MDS/second figures shown on page 10 with 128 active MDSes. Multi-MDS setups are still considered experimental at this time, so it isn't something you want to run in production.

Is the OSD supported by btrfs ?

I believe these tests were done on a filesystem that was originally developed for Ceph (EBOFS) and was later replaced by btrfs and other local filesystems.

Now I know that separating the OSD's journal from its data and replacing
ext3 with btrfs will speed up the system. Could you give me more tips on
optimizing performance, and may I have your config file ceph.conf?

Putting the journal and data on separate drives will certainly help. You will also want to mount the underlying filesystem with noatime. Turning off the filestore flusher may help or hurt performance depending on the workload.

A simple example conf file can be found here:
http://ceph.com/wiki/Cluster_configuration
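As a rough illustration of the tuning points above, a minimal [osd] section might look like the sketch below. The device paths are placeholders, and option names such as "filestore flusher" should be checked against the documentation for your Ceph version before use.

```ini
[osd]
    ; keep the journal on a separate (ideally fast) device
    osd journal = /dev/sdb1          ; placeholder device path
    osd journal size = 1000          ; journal size in MB

    ; mount the btrfs data filesystem with noatime
    osd mount options btrfs = rw,noatime

    ; disabling the filestore flusher can help or hurt
    ; depending on the workload -- benchmark both settings
    filestore flusher = false
```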

BTW, on page 10 (Section 6.2) the paper talks about diskless MDSes and
MDSes with a local disk, but looking through the MDS configuration items
I could not find an option for configuring a local disk for the MDS.
Could you tell me how to configure this? Thanks.

Honestly I'm not sure.  Sage may have more input here.

Sincerely,
Hosfore

Thanks,
Mark
--
Mark Nelson
Performance Engineer
Inktank
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

