Hi All,
What I saw after enabling the RBD cache is that it works as expected: sequential write gets better MBps than random write. Can somebody explain this behaviour? Is the RBD cache setting a must for a Ceph cluster to behave normally?
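For reference, the cache was toggled via the [client] section of ceph.conf; the values below are just the documented defaults, shown for illustration rather than our exact tuning:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true   # stay in writethrough until the guest issues a flush
    rbd cache size = 33554432                   # 32 MB cache (documented default)
    rbd cache max dirty = 25165824              # 24 MB dirty limit (documented default)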
Thanks
sumit
On Mon, Feb 2, 2015 at 9:59 AM, Sumit Gaur <sumitkgaur@xxxxxxxxx> wrote:
Hi Florent,

Cache tiering, no.

Our architecture:
vdbench/FIO inside VM <--> RBD without cache <--> Ceph Cluster (6 OSD nodes + 3 mons)

Thanks
sumit

[root@ceph-mon01 ~]# ceph -s
    cluster 47b3b559-f93c-4259-a6fb-97b00d87c55a
     health HEALTH_WARN clock skew detected on mon.ceph-mon02, mon.ceph-mon03
     monmap e1: 3 mons at {ceph-mon01=192.168.10.19:6789/0,ceph-mon02=192.168.10.20:6789/0,ceph-mon03=192.168.10.21:6789/0}, election epoch 14, quorum 0,1,2 ceph-mon01,ceph-mon02,ceph-mon03
     osdmap e603: 36 osds: 36 up, 36 in
      pgmap v40812: 5120 pgs, 2 pools, 179 GB data, 569 kobjects
            522 GB used, 9349 GB / 9872 GB avail
                5120 active+clean
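For reference, the sequential vs random comparison inside the VM can be reproduced with fio along these lines; the target device, block sizes and run size here are illustrative, not our exact job definitions:

    # assumed: /dev/vdb is the RBD-backed disk inside the guest
    fio --name=seqwrite  --filename=/dev/vdb --rw=write     --bs=4m --size=4g --ioengine=libaio --direct=1 --iodepth=16
    fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k --size=4g --ioengine=libaio --direct=1 --iodepth=16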
On Mon, Feb 2, 2015 at 12:21 AM, Florent MONTHEL <fmonthel@xxxxxxxxxxxxx> wrote:

Hi Sumit

Do you have cache pool tiering activated?
Some feedback regarding your architecture?
Thanks
Sent from my iPad
> On 1 févr. 2015, at 15:50, Sumit Gaur <sumitkgaur@xxxxxxxxx> wrote:
>
> Hi
> I have installed a 6 node Ceph cluster and, to my surprise, when I ran rados bench I saw that random write gets higher performance numbers than sequential write. This is the opposite of a normal disk write. Can somebody let me know if I am missing a Ceph architecture point here?
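> For reference, a run along these lines exercises the write and read paths (pool name, duration, block size and concurrency below are illustrative, not the exact runs):
>
>     rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
>     rados bench -p testpool 60 seq     # sequential reads of the objects left behind
>     rados bench -p testpool 60 rand    # random reads of the same objects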
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com