On Wed, Jul 10, 2019 at 7:22 PM Jongyul Kim <yulistic@xxxxxxxxx> wrote:
>
> On Wed, Jul 10, 2019 at 5:16 PM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> On Wed, Jul 10, 2019 at 2:35 PM Jongyul Kim <yulistic@xxxxxxxxx> wrote:
>> >
>> > Hi, I'm Jongyul Kim, and I'm interested in the performance of Ceph.
>> > I tried to figure out the advantage of using two MDS daemons instead of
>> > a single MDS under massive metadata operations (renames). However, the
>> > two MDS daemons performed worse than a single MDS daemon. I'd like to
>> > ask your advice on why this happens.
>> >
>> > Here is what I did.
>> >
>> > I wrote a micro-benchmark that 1) creates a file, 2) writes 4 KB to the
>> > file, and 3) renames it into another directory. Each process performs
>> > these three steps, and I measured the throughput (operations/sec) of
>> > Ceph while increasing the number of processes in the benchmark. The
>> > experimental setup is described below.
>> >
>>
>> Are the auth MDSes of the rename source directory and the rename
>> destination directory different? Renaming a file across auth MDSes is
>> very slow.
>
> They are the same. The source directory and the target directory of each
> process are authorized by the same MDS. That is, if a process P1 is
> running on node A, then its source directory (src_dir_p1) and target
> directory (tar_dir_p1) are pinned to the MDS running on node A (mds_a).
> Likewise, if a process P2 is running on node B, then its source directory
> (src_dir_p2) and target directory (tar_dir_p2) are pinned to the MDS
> running on node B (mds_b). And so on.
>

This may be caused by the process that gathers directory sizes. Try not to
do the renames at a subtree bound. For example, if you pin /a to mds.1, do
the renames inside /a/testdir.

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
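
For readers trying to reproduce this, a minimal sketch of the per-process benchmark loop described above (create a file, write 4 KB, rename it into another directory, measure operations/sec). Directory names and operation counts here are illustrative, not from the original setup. On CephFS, the two directories would sit under a directory pinned to one MDS rank via the `ceph.dir.pin` extended attribute (e.g. `setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/a`), and per Zheng's advice both should live inside the pinned subtree (e.g. under /a/testdir) rather than at the subtree bound:

```python
import os
import tempfile
import time


def bench_renames(src_dir, dst_dir, n_ops, payload=b"\0" * 4096):
    """Create a file in src_dir, write a 4 KiB payload, then rename it
    into dst_dir; repeat n_ops times and return throughput in ops/sec.
    Each create+write+rename triple counts as one operation."""
    start = time.monotonic()
    for i in range(n_ops):
        path = os.path.join(src_dir, f"f{i}")
        with open(path, "wb") as f:
            f.write(payload)
        os.rename(path, os.path.join(dst_dir, f"f{i}"))
    return n_ops / (time.monotonic() - start)


if __name__ == "__main__":
    # Illustrative run against a local temp directory; point src/dst at
    # directories inside a pinned CephFS subtree for a real measurement.
    with tempfile.TemporaryDirectory() as root:
        src = os.path.join(root, "src")
        dst = os.path.join(root, "dst")
        os.mkdir(src)
        os.mkdir(dst)
        print(f"{bench_renames(src, dst, 200):.0f} ops/sec")
```

In the multi-process version, each process would get its own src/dst pair pinned to a different MDS rank, as described in the thread.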