Re: rbd ioengine for fio

Hi Somnath,

Thank you for your reply!!
The fio script is:

[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
ioengine=rbd
clientname=client.admin
pool=ecssdcache
rbdname=imagecacherbd
invalidate=0
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32


The pool and rbd image names are correct. 

`ceph -s` from both the rbd client and the monitor server shows:

cluster e414604c-29d7-4adb-a889-7f70fc252dfa
     health HEALTH_WARN clock skew detected on mon.h02, mon.h05
     monmap e3: 3 mons at {h02=130.4.240.102:6789/0,h05=130.4.240.105:6789/0,h08=130.4.240.78:6789/0}, election epoch 3212, quorum 0,1,2 h08,h02,h05
     osdmap e23689: 39 osds: 35 up, 35 in
      pgmap v3174229: 16126 pgs, 8 pools, 132 GB data, 198 kobjects
            545 GB used, 29224 GB / 29769 GB avail
               16126 active+clean

I've also checked that the connection from the rbd client to the monitor hosts looks good.


I'm really not sure what's going on...


Thanks in advance all!


Best,

Mavis



On Thu, Jun 16, 2016 at 4:52 PM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:

What is your fio script ?

 

Make sure you do this..

 

1. Run, say, 'ceph -s' from the server you are trying to connect from and see if it connects properly or not. If it does, you don't have any keyring issues.
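As a sketch of step 1, the check can be run as the same client the fio job uses (clientname=client.admin in the job file). The keyring path below is the stock default location and is an assumption, not something stated in this thread; `--id` and `--keyring` are standard ceph CLI options.

```shell
# Build the auth check as the same client fio authenticates as.
CLIENT_ID=admin                                          # matches clientname=client.admin
KEYRING="/etc/ceph/ceph.client.${CLIENT_ID}.keyring"     # assumption: default keyring path
CMD="ceph -s --id ${CLIENT_ID} --keyring ${KEYRING}"
echo "${CMD}"   # run this on the client host; success rules out cephx/keyring problems
```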

 

2. Now, make sure you have set the following param values properly based on your setup.

 

pool=<ceph-pool-name>

rbdname=<rbd-image-name>
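Both values can be sanity-checked from the client with the stock rbd CLI. A small sketch; the pool and image names are the ones from the fio job earlier in this thread and stand in for your own:

```shell
# Names taken from the job file in this thread; substitute yours.
POOL=ecssdcache
IMAGE=imagecacherbd
# `rbd info` resolves the pool and the image in a single call.
CHECK="rbd info ${POOL}/${IMAGE}"
echo "${CHECK}"   # run this on the client host; an error means a name (or auth) problem
```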

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Mavis Xiang
Sent: Thursday, June 16, 2016 1:47 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: rbd ioengine for fio

 

Hi all,

I am new to the rbd engine for fio, and ran into the following problem when I tried to run a 4k write against my rbd image:

 

 

rbd_iodepth32: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.11-17-ga275
Starting 1 process
rbd engine: RBD version: 0.1.8
rados_connect failed.
fio_rbd_connect failed.

 

It seems that the rbd client cannot connect to the ceph cluster. 

Ceph health output:

cluster e414604c-29d7-4adb-a889-7f70fc252dfa
     health HEALTH_WARN clock skew detected on mon.h02, mon.h05

But that should not affect the connection to the cluster.

Ceph.conf: 

[global]
fsid = e414604c-29d7-4adb-a889-7f70fc252dfa
mon_initial_members = h02
mon_host = XXX.X.XXX.XXX
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_pg_num = 2400
osd_pool_default_pgp_num = 2400
public_network = XXX.X.XXX.X/21

[osd]
osd_crush_update_on_start = false

 

 

 

Could this be a keyring issue? I did not find any keyring options that can be set in the fio job file.
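One hedged note on the keyring question: fio's rbd engine authenticates through librados, which reads the client's /etc/ceph/ceph.conf, so the keyring path is normally configured there rather than in the fio job file. A minimal sketch, assuming the stock default keyring location (an assumption, not something shown in this thread):

```ini
# Addition to /etc/ceph/ceph.conf on the client running fio.
# The keyring path below is the packaged default and is an assumption.
[client.admin]
keyring = /etc/ceph/ceph.client.admin.keyring
```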

Can anyone please give some insights about this problem?

Any help would be appreciated!

 

Thanks!

 

Yu

 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
