Re: FW: Reg:.[CBT] Unable to do a CBT Clean Run.

Hi Vish,

Thanks for the report!

I seem to remember seeing something very similar a while back; it turned out to be an issue on the monitor, where crushtool wasn't in the path.

The way I had to solve it was to set the "crushtool" path via ceph.conf:

https://github.com/ceph/ceph/blob/master/src/common/config_opts.h#L29
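
A minimal sketch of what that looks like in ceph.conf on the monitor node (the path below is only an example; point it at wherever crushtool is actually installed on your system):

[mon]
        crushtool = /usr/bin/crushtool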

Please let me know if that helps.

Thanks,
Mark

On 02/05/2016 10:26 AM, Vish (Vishwanath) Maram-SSI wrote:
Hi,

Sorry, I am forwarding this again as the mails are not going through to the list.

Thanks,

-Vish

*From:*Vish (Vishwanath) Maram-SSI
*Sent:* Thursday, February 04, 2016 3:45 PM
*To:* 'ceph-devel-owner@xxxxxxxxxxxxxxx'; 'ceph-devel@xxxxxxxxxxxxxxx'
*Cc:* 'Brent Compton'; 'Kyle Bader'
*Subject:* Reg:.[CBT] Unable to do a CBT Clean Run.

Hi All,

We are trying to do a clean run with CBT; please find all the details below:

1. OS – CentOS 7.2

2. CBT – commit ID “1203c2b9d25344d1e15b28236ea2f19aa3103e0b” (from “git log”)

3. Ceph code – Hammer, version 0.94.5

4. Test.yaml – please find below

5. Ceph.conf – please find below

*Issue:*

Command: ./cbt.py --archive="Archive" --conf=./ceph.conf ./test.yaml > cmd_log.txt

After running the above command we could see that the cluster was being created on the server, and the status/OSD tree looked correct, but we didn't see any IO going to the disk (iostat -t 5). So we started debugging the CBT code ceph.py under the cluster directory and observed that pool creation was not happening. We inserted a break in ceph.py just before pool creation, ran the pool creation command manually on the server, and got the error below:

*Error EINVAL: error running crushmap through crushtool: (125) Operation
canceled*

Because of this issue, we weren't able to see any FIO run.
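
For reference, a rough sketch of how the failing step can be reproduced by hand on the server/monitor host (the pool name below is only illustrative; the pg/pgp counts match the 'rbd' pool profile in the Test.yaml below):

        # check whether crushtool is installed and reachable on the monitor host
        # (note the mon daemon's environment may differ from your shell's PATH)
        which crushtool

        # manual pool creation, roughly what CBT's ceph.py does at the point of the break
        ceph osd pool create testpool 256 256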

Appreciate any inputs.

Thanks,

-Vish

Test.yaml –

cluster:
  user: 'root'
  head: "Server"
  clients: ["client"]
  osds: ["Server"]
  mons:
    Mon1:
      a: "10.10.10.150:6789"
  osds_per_node: 1
  fs: 'xfs'
  mkfs_opts: '-f -i size=2048 -n size=64k'
  mount_opts: '-o inode64,noatime,logbsize=256k'
  conf_file: '/usr/local/ceph-cbt/ceph.conf.1osd'
  iterations: 1
  use_existing: False
#  clusterid: "8eda02e2-04b7-4eed-a85a-8471ea51528c"
  clusterid: "cbttest"
  tmp_dir: "/tmp/cbt"
  pool_profiles:
    rbd:
      pg_size: 256
      pgp_size: 256
      replication: 1
benchmarks:
  librbdfio:
    time: 300
    vol_size: 16384
    mode: [write]
    op_size: [1048576]
    concurrent_procs: [1]
    iodepth: [64]
    osd_ra: [4096]
    cmd_path: '/usr/local/bin/fio'
    pool_profile: 'rbd'

Ceph.conf –

[global]
        osd pool default size = 1
        auth cluster required = none
        auth service required = none
        auth client required = none
        keyring = /tmp/cbt/ceph/keyring
        osd pg bits = 8
        osd pgp bits = 8
        log to syslog = false
        log file = /tmp/cbt/ceph/log/$name.log
        public network = 10.10.10.0/24
        cluster network = 10.10.10.0/24
        rbd cache = true
        osd scrub load threshold = 0.01
        osd scrub min interval = 137438953472
        osd scrub max interval = 137438953472
        osd deep scrub interval = 137438953472
        osd max scrubs = 16
        filestore merge threshold = 40
        filestore split multiple = 8
        osd op threads = 8
        mon pg warn max object skew = 100000
        mon pg warn min per osd = 0
        mon pg warn max per osd = 32768

[mon]
        mon data = /tmp/cbt/ceph/mon.$id

[mon.a]
        host = Mon1
        mon addr = 10.10.10.150:6789

[osd.0]
        host = Server
        osd data = /tmp/cbt/mnt/osd-device-0-data
        osd journal = /dev/disk/by-partlabel/osd-device-0-journal



