Re: teuthology task waiting for machines (> 8h)


 



Technically yes.

If the queue is busy, patience is needed.

That assumes there are no hung runs sitting in the queue.  Zack is
diligently looking into this and fixing things to prevent hung tests.
If we see runs older than, say, one day, we kill them (although
'teuthology-kill' is not working for me today :( )
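
For reference, killing a hung run usually looks something like the
rough sketch below -- the run name and job id are placeholders, and the
exact options may differ on your install, so check
'teuthology-kill --help' first:

    # kill every job in a hung run, by run name (placeholder):
    teuthology-kill -r <run-name>
    # or kill a single job within that run (job id is a placeholder):
    teuthology-kill -r <run-name> -j <job-id>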

Another option to speed up a run is to use PRIO (priority) when
scheduling it, and/or to schedule on machines other than plana, since
the plana machines are in high demand.
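
Roughly like this when scheduling a suite (a sketch only -- the suite
name, branch, machine type and priority value below are just example
placeholders, and the flags depend on your teuthology version):

    teuthology-suite --suite upgrade/firefly-x/stress-split \
        --ceph <branch> \
        --machine-type mira \
        --priority 50

Here 'mira' just stands in for any non-plana machine type; a lower
--priority number is scheduled ahead of higher ones.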

Thx
YuriW

On Sat, Jun 28, 2014 at 3:27 AM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
> Hi Zack,
>
> http://pulpito.ceph.com/loic-2014-06-27_18:45:37-upgrade:firefly-x:stress-split-wip-8475-testing-basic-plana/329515/
>
> seems to indicate that the task cannot obtain the machines it needs:
>
> 2014-06-27T17:55:19.072 INFO:teuthology.task.internal:Locking machines...
> 2014-06-27T17:55:19.110 INFO:teuthology.task.internal:waiting for more machines to be free (need 3 see 5)...
> 2014-06-27T17:55:29.175 INFO:teuthology.task.internal:waiting for more machines to be free (need 3 see 5)...
> ...
> 2014-06-28T03:22:13.745 INFO:teuthology.task.internal:waiting for more machines to be free (need 3 see 0)...
> 2014-06-28T03:22:23.787 INFO:teuthology.task.internal:waiting for more machines to be free (need 3 see 0)...
>
> Is this expected (for instance, when tasks with a higher priority take precedence)? If it is, then all that's needed is patience, right?
>
> Cheers
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
--




