Re: migrating cephfs metadata pool from spinning disk to SSD.

On Tue, Aug 4, 2015 at 10:36 PM, Bob Ababurko <bob@xxxxxxxxxxxx> wrote:
> My writes are not going as I would expect with respect to IOPS (50-1000 IOPS)
> and write throughput (~25MB/s max).  I'm interested in understanding what it
> takes to create an SSD pool that I can then migrate the current
> Cephfs_metadata pool to.  I suspect that the spinning-disk metadata pool is a
> bottleneck, and I want to get the maximum performance out of this cluster to
> prove the case for building out a larger version.  One caveat is that I have
> copied about 4 TB of data to the cluster via CephFS and don't want to lose it,
> so I obviously need to keep the metadata intact.

I'm a bit suspicious of this: your throughput and IOPS expectations
sort of imply a large-file workload, but you're suggesting that
metadata is the bottleneck (i.e. a small-file workload).

There are lots of statistics that come out of the MDS; you may be
particularly interested in mds_server.handle_client_request and
objecter.op_active, to work out whether there really are lots of RADOS
operations getting backed up on the MDS (which would be the symptom of
a too-slow metadata pool).  "ceph daemonperf mds.<id>" may be of some
help if you don't already have graphite or similar set up.
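
For a quick look without any graphing infrastructure, something like
this (an untested sketch; "mds.a" is a placeholder for your own MDS id)
pulls those two counters out of the admin socket:

  # One-off: dump all MDS perf counters and pick out the two above
  ceph daemon mds.a perf dump | python -c 'import json,sys; d=json.load(sys.stdin); print("handle_client_request: %s  op_active: %s" % (d["mds_server"]["handle_client_request"], d["objecter"]["op_active"]))'

  # Or watch the counters tick over live, one row per second
  ceph daemonperf mds.a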

> If anyone has done this OR understands how this can be done, I would
> appreciate the advice.

You could potentially do this in a two-phase process where you
initially set a CRUSH rule that includes both SSDs and spinners, and
then finally set a rule that points only at SSDs.  Obviously that'll
do lots of data movement, but your metadata is probably a fair bit
smaller than your data, so that might be acceptable.
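
For the final phase, a rough sketch (assuming your SSD OSDs already sit
under a CRUSH root named "ssd"; the rule name, pool name and root are
illustrative, not anything your cluster necessarily has):

  # Create a rule that only chooses OSDs under the "ssd" root
  ceph osd crush rule create-simple ssd_rule ssd host

  # Find the new rule's id...
  ceph osd crush rule dump ssd_rule

  # ...and point the metadata pool at it (the property is
  # "crush_ruleset" on current releases); this kicks off the data
  # movement, so expect some recovery traffic
  ceph osd pool set cephfs_metadata crush_ruleset <rule id>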

John
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


