Re: cephfs vs rbd

I was testing with appending to mailbox files. But I would assume that the principle of getting data from an MDS that has almost everything in cache, instead of reading it from different OSDs, is always going to be faster.
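
If anyone wants to reproduce the comparison from the quoted test below, something along these lines should do it. This is only a rough sketch -- the monitor address, pool, image and mount point names are made up, and the rbd image obviously has to be large enough for the unpacked archive:

    # CephFS: kernel mount (assumes a cephx secret file is in place)
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # RBD: create an image, map it, put a filesystem on it and mount it
    rbd create testpool/testimg --size 2T
    rbd map testpool/testimg             # prints the device, e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/rbd

    # run the identical workload on both mounts and time it
    time tar xzf /path/to/huge.tgz -C /mnt/cephfs
    time tar xzf /path/to/huge.tgz -C /mnt/rbd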

> 
> That is odd - I am running some game servers (ARK Survival) and the RBD
> mount starts up in less than a minute, but the CephFS mount takes 20
> minutes or more. It probably depends on the workload.
> 
> > I was wondering about performance differences between cephfs and rbd,
> > so I devised this quick test. The results were pretty surprising to
> > me.
> >
> > The test: on a very idle machine, make 2 mounts. One is a cephfs
> > mount, the other an rbd mount. In each directory, copy a humongous
> > .tgz file (1.5 TB) and try to untar the file into the directory.
> > The untar on the cephfs directory took slightly over 2 hours, but on
> > the rbd directory it took almost a whole day. I repeated the test 3
> > times and the results were similar each time. Is there something I'm
> > missing? Is RBD that much slower than cephfs (or is cephfs that much
> > faster than RBD)? Are there any tuning options I can try to improve
> > RBD performance?
> >
> 
> When I was testing cephfs versus rbd in a VM, I noticed that cephfs
> was around 25% faster; this was on Luminous.
> 
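
On the tuning question from the original mail: for a kernel rbd mount the librbd cache settings don't apply, so one of the few cheap knobs is block device readahead (plus the usual filesystem mount options); for librbd clients, such as a VM disk, the rbd cache options in ceph.conf are the first thing to look at. A rough sketch, with the values picked only as an illustration:

    # kernel rbd: raise readahead on the mapped device (value is in 512-byte sectors)
    blockdev --setra 4096 /dev/rbd0

    # librbd only, in the [client] section of ceph.conf
    rbd cache = true
    rbd cache size = 67108864
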
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


