ocfs2 with RBD cache

Hi all,

   I noticed this sentence on http://docs.ceph.com/docs/master/rbd/rbd-config-ref/: "Running GFS or OCFS on top of RBD will not work with caching enabled." Why is that? And is there any way to enable the RBD cache when OCFS2 sits on top of it? I ask because a fio test against a qemu-kvm guest configured with cache=none gives a terrible result of fewer than 100 IOPS:

   fio --numjobs=16 --iodepth=16 --ioengine=libaio --runtime=300 --direct=1 --group_reporting --filename=/dev/sdd --name=mytest --rw=randwrite --bs=8k --size=8G

while the same test on other, non-Ceph clusters reaches 1000+ IOPS. Could disabling the RBD cache cause this problem?
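
For reference, here is a minimal sketch of the cache knobs in question, assuming a standard librbd client (the pool/image name rbd/myimage and the qemu command line are placeholders, not my actual setup):

    # /etc/ceph/ceph.conf, [client] section: enables the librbd writeback cache
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true

    # qemu-kvm: since QEMU 1.2 the drive cache mode overrides ceph.conf,
    # so cache=none turns the RBD cache off and cache=writeback turns it on
    qemu-system-x86_64 ... \
        -drive format=raw,file=rbd:rbd/myimage,cache=writeback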
The topology is roughly:

    +---------------------------------------------+
    |  RedHat VMs (qcow2 images, stored on ocfs2) |
    |---------------------------------------------|
    |  Linux hosts running qemu-kvm               |
    +---------------------------------------------+
                       ||
                       ||  tgt (iSCSI)
                       ||
    +---------------------------------------------+
    |  ocfs2 shared filesystem on one RBD image   |
    |  Ceph cluster (3 nodes)                     |
    +---------------------------------------------+
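
For what it's worth, one way to confirm whether the cache is really active is to query the admin socket of the process that opens the image with librbd (tgt in the topology above). This assumes an admin socket is configured in the [client] section; the socket path below is only an example:

    # /etc/ceph/ceph.conf, [client] section (assumption: not enabled by default for clients)
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

    # then, against the socket the librbd client actually created:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.1234.5.asok config show | grep rbd_cache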
---------------------------------------------
wukongming ID: 12019
Tel:0571-86760239
Dept:2014 UIS2 ONEStor
