Re: Interpreting rados / print order in teuthology output

Hi,

Thanks Andrew, Sage & Dmick for the help! The answer is that the rados task runs in parallel by default. It is enough to nest it in a sequential task to make it sequential. Conversely, you can wrap a sequential task in a parallel task to run it in the background. The trick is to figure out which tasks run in the background by default and which do not.
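For the record, here is a sketch of the idea in teuthology YAML. The rados parameters are copied from the config quoted below; the sequential/parallel nesting is my understanding of the fix, so check it against the teuthology task documentation before relying on it:

```yaml
tasks:
# Nesting rados under a sequential task makes it blocking:
# the print task below will not run until rados completes.
- sequential:
  - rados:
      clients: [client.0]
      ops: 4000
      objects: 500
      op_weights:
        read: 100
        write: 100
        delete: 50
        copy_from: 50
      pools: [ecbase]
- print: '**** done rados ec-cache-agent (part 1)'
# Conversely, wrapping a sequential task in a parallel task
# runs that whole sequence in the background while the
# following tasks proceed.
```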

Cheers

On 26/03/2015 23:44, Loic Dachary wrote:
> Hi Zack,
> 
> I'm trying to figure out why
> 
>   http://pulpito.ceph.com/loic-2015-03-26_21:18:14-upgrade:firefly-x:stress-split-erasure-code-hammer---basic-multi/822784/
> 
> failed when the same run succeeded at
> 
>   http://pulpito.ceph.com/loic-2015-03-26_15:22:43-upgrade:firefly-x:stress-split-erasure-code-hammer---basic-multi/822782/
> 
> I'm kind of blocked trying to figure out why the print statement is run before the first rados run completes at:
> 
> 2015-03-26T13:33:38.178 INFO:teuthology.orchestra.run.burnupi31.stderr:osds 0,1,2,3,4,5,6,7,8,9,10,11,12,13 instructed to deep-scrub
> 2015-03-26T13:33:38.189 INFO:teuthology.run_tasks:Running task rados...
> 2015-03-26T13:33:38.209 INFO:tasks.rados:Beginning rados...
> 2015-03-26T13:33:38.210 INFO:teuthology.run_tasks:Running task print...
> 2015-03-26T13:33:38.210 INFO:teuthology.task.print:**** done rados ec-cache-agent (part 1)
> 2015-03-26T13:33:38.210 INFO:teuthology.run_tasks:Running task install.upgrade...
> 2015-03-26T13:33:38.211 INFO:teuthology.task.install:project ceph config {'osd.0': None} overrides {'sha1': '6994648bc443429dc2edfbb38fbaaa9a19e2bdd1'}
> 2015-03-26T13:33:38.211 INFO:teuthology.task.install:extra packages: []
> 2015-03-26T13:33:38.211 INFO:teuthology.task.install:remote ubuntu@xxxxxxxxxxxxxxxxxxxxxxxxxxxx config {'sha1': '6994648bc443429dc2edfbb38fbaaa9a19e2bdd1'}
> 2015-03-26T13:33:38.212 INFO:teuthology.orchestra.run.mira119:Running: 'sudo lsb_release -is'
> 2015-03-26T13:33:38.213 INFO:tasks.rados:clients are ['client.0']
> 2015-03-26T13:33:38.213 INFO:tasks.rados:starting run 0 out of 1
> 2015-03-26T13:33:38.214 INFO:teuthology.orchestra.run.burnupi31:Running: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 500 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op copy_from 50 --pool ecbase'
> 2015-03-26T13:33:38.229 INFO:tasks.rados.rados.0.burnupi31.stdout:adding op weight read -> 100
> 2015-03-26T13:33:38.230 INFO:tasks.rados.rados.0.burnupi31.stdout:adding op weight write -> 100
> 2015-03-26T13:33:38.230 INFO:tasks.rados.rados.0.burnupi31.stdout:adding op weight delete -> 50
> 
> although the config reads:
> 
> ...
>     - ceph osd pool set eccache target_max_objects 250
> - exec:
>     client.0:
>     - ceph osd deep-scrub '*'
> - rados:
>     clients:
>     - client.0
>     objects: 500
>     op_weights:
>       copy_from: 50
>       delete: 50
>       read: 100
>       write: 100
>     ops: 4000
>     pools:
>     - ecbase
> - print: '**** done rados ec-cache-agent (part 1)'
> - install.upgrade:
>     osd.0: null
> - ceph.restart:
> 
> ...
> 
> I'm kind of worried I'll make a fool of myself once more. This time it spells "task" and not "tasks" ;-)
> 
> Cheers
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


