Re: reproducible rbd-nbd crashes

Hello Mike and Jason,

As described in my last mail, I converted the filesystem to ext4, set "sysctl vm.dirty_background_ratio=0", and put the regular workload on the filesystem (used as an NFS mount).
That seems to have prevented crashes for an entire week now (before this, the nbd device crashed after a few hours to roughly one day).
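
For reference, this is roughly how I applied the workaround (a sketch; the /etc/sysctl.d path and file name are assumptions, adjust to your distribution):

# start background writeback immediately instead of letting dirty pages accumulate
sysctl vm.dirty_background_ratio=0
# assumed way to persist the setting across reboots
echo "vm.dirty_background_ratio = 0" > /etc/sysctl.d/90-rbd-nbd-workaround.conf
sysctl --system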

XFS on top of nbd devices really seems to introduce additional instability.

The current workaround causes very high CPU load (40-50 on a 4-CPU virtual system) and up to ~95% iowait if a single client puts a 20 GB file on that volume.
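
If it is useful for comparison, this is roughly how I observe the load pattern (a sketch; the nbd0 device and the /srv/nfs path are assumptions from my setup):

# watch per-device utilization and iowait on the nbd device, refreshed every second
iostat -x nbd0 1
# in a second shell, generate the write load with a 20 GB test file
dd if=/dev/zero of=/srv/nfs/testfile bs=1M count=20480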

What is the current state of fixing this problem?
Can we support you by running tests with custom kernel or rbd-nbd builds?
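
For such test runs we could, for example, map the image with verbose client-side logging enabled (a sketch; the pool/image names are placeholders, and debug rbd = 20 is my assumption for a useful level):

# /etc/ceph/ceph.conf on the client
[client]
log file = /var/log/ceph/rbd-nbd.$pid.log
debug rbd = 20

# then map the image with the custom rbd-nbd build
rbd-nbd map rbd_pool/ec_image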

Regards
Marc

On 13.09.19 at 14:15, Marc Schöchlin wrote:
>>> Nevertheless I will try EXT4 on another system...
> I converted the filesystem to an ext4 filesystem.
>
> I completely deleted the entire RBD EC image and its snapshots (3) and recreated it.
> After mapping and mounting, I executed the following command:
>
> sysctl vm.dirty_background_ratio=0
>
> Let's see what we get now...
>
> Regards
> Marc

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



