Also, what caching policy is qemu using on the affected VMs?
Is it cache=none, or something else? You can find this in the command line of the qemu-kvm process corresponding to your VM in the ps output.
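For example (a sketch; the exact drive options will differ per VM):

  ps aux | grep qemu-kvm
  # look for cache=... among the -drive options, e.g.
  # -drive file=gluster://<server>/<volname>/vm.img,format=raw,cache=none,...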
-Krutika
On Mon, May 13, 2019 at 12:49 PM Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
What version of gluster are you using?
Also, can you capture and share volume-profile output for a run where you manage to recreate this issue?
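If it helps, capturing that is roughly (a sketch; <volname> is a placeholder for your volume):

  gluster volume profile <volname> start
  # ...recreate the freeze...
  gluster volume profile <volname> info > /tmp/profile-during-freeze.txt
  gluster volume profile <volname> stop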
Let me know if you have any questions.
-Krutika

On Mon, May 13, 2019 at 12:34 PM Martin Toth <snowmailer@xxxxxxxxx> wrote:
Hi,

there is no healing operation, no peer disconnects, no read-only filesystem. Yes, the storage is slow and unavailable for 120 seconds, but why? It's SSD with 10G networking; performance is otherwise good.
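(For reference, the checks above were presumably along these lines; <volname> is a placeholder:)

  gluster volume heal <volname> info
  gluster peer status
  gluster volume status <volname>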
> you'd have its log on qemu's standard output,
If you mean /var/log/libvirt/qemu/vm.log, there is nothing there. I have been looking into this problem for more than a month and have tried everything, but I can't find anything. Any more clues or leads?
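(For completeness, a rough sketch of the kind of scan that should surface such errors; the grep patterns are guesses:)

  # inside the guest, confirm the hung-task messages
  dmesg | grep 'blocked for more than 120 seconds'
  # on the gluster nodes, look for errors, disconnects or read-only events
  grep -E ' E \[' /var/log/glusterfs/*.log
  grep -iE 'read-only|disconnect' /var/log/glusterfs/*.log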
BR,
Martin
> On 13 May 2019, at 08:55, lemonnierk@xxxxxxxxx wrote:
>
> On Mon, May 13, 2019 at 08:47:45AM +0200, Martin Toth wrote:
>> Hi all,
>
> Hi
>
>>
>> I am running replica 3 on SSDs with 10G networking. Everything works OK, but VMs stored in the Gluster volume occasionally freeze with “Task XY blocked for more than 120 seconds”.
>> The only solution is to power off the VM (hard) and then boot it up again. I am unable to SSH in or log in on the console; it is probably stuck on some disk operation. No error/warning logs or messages are stored in the VM's logs.
>>
>
> As far as I know this should be unrelated; I get this during heals
> without any freezes. It just means the storage is slow, I think.
>
>> KVM/libvirt (qemu) uses libgfapi and a fuse mount to access VM disks on the replica volume. Can someone advise how to debug this problem or what can cause these issues?
>> It's really annoying; I've tried to google everything but nothing came up. I've tried changing the disk driver from virtio-scsi-pci to virtio-blk-pci, but it's not related.
>>
>
> Any chance your gluster goes read-only? Have you checked your gluster
> logs to see if maybe they lose each other sometimes?
> /var/log/glusterfs
>
> For libgfapi accesses you'd have its log on qemu's standard output;
> that might contain the actual error at the time of the freeze.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users