Freezes on VMs after upgrade from Giant to Hammer, apps not responding

Hi Cephers,

On Sunday evening we upgraded Ceph from 0.87 (Giant) to 0.94 (Hammer). Since the upgrade, VMs running on Proxmox freeze for 3-4 s roughly every 10 minutes (applications stop responding on the Windows guests). Before the upgrade everything worked fine. In /proc/diskstats, field 7 (time spent reading, in ms) and field 11 (time spent writing, in ms) show peaks from 90 ms up to 2000 ms on the OSD disks.
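
For reference, this is roughly how we sample those two fields; a minimal Python sketch, where "sdb" is just a placeholder for an OSD data disk:

#!/usr/bin/env python3
# Minimal sketch: once per second, print how much field 7 (time spent
# reading, ms) and field 11 (time spent writing, ms) of /proc/diskstats
# grew for one disk. "sdb" is a placeholder for an OSD data disk.
import time

DEVICE = "sdb"  # placeholder: the OSD disk to watch

def io_time_ms(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # fields[6]  = field 7:  time spent reading (ms)
                # fields[10] = field 11: time spent writing (ms)
                return int(fields[6]), int(fields[10])
    raise ValueError("device %s not found in /proc/diskstats" % device)

prev_read, prev_write = io_time_ms(DEVICE)
while True:
    time.sleep(1)
    cur_read, cur_write = io_time_ms(DEVICE)
    print("read +%d ms, write +%d ms"
          % (cur_read - prev_read, cur_write - prev_write))
    prev_read, prev_write = cur_read, cur_write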

Were any default settings changed between Giant and Hammer? Each server has one SSD for the journals and 4-6 OSDs. The BBUs on the controllers are OK.
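
One way we thought of to check for changed defaults is to diff the running config against a dump saved under Giant. A minimal sketch, assuming the default admin socket location on the OSD host and that a pre-upgrade dump exists (e.g. made with "ceph daemon osd.0 config show > giant.json"); "giant.json" is a placeholder filename, not Ceph tooling:

#!/usr/bin/env python3
# Minimal sketch: diff the live OSD config against a dump saved on Giant.
# Usage: ./diff_config.py <osd-id> <pre-upgrade-dump.json>
import json
import subprocess
import sys

def running_config(osd_id):
    # "ceph daemon osd.N config show" prints the live configuration as JSON.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "config", "show"])
    return json.loads(out)

osd_id, old_dump = int(sys.argv[1]), sys.argv[2]
with open(old_dump) as f:
    old = json.load(f)
new = running_config(osd_id)

for key in sorted(set(old) | set(new)):
    if old.get(key) != new.get(key):
        print("%s: %r -> %r" % (key, old.get(key), new.get(key)))

Our ceph.conf is below.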

 

[global]

fsid={UUID}

 

mon initial members = ceph35, ceph30, ceph20, ceph15, ceph10

mon host = 10.20.8.35, 10.20.8.30, 10.20.8.20, 10.20.8.15, 10.20.8.10

 

public network = 10.20.8.0/22

cluster network = 10.20.4.0/22

 

filestore xattr use omap = true

filestore max sync interval = 30

 

osd journal size = 10240

osd mount options xfs = "rw,noatime,inode64,allocsize=4M"

osd pool default size = 3

osd pool default min size = 1

osd pool default pg num = 2048

osd pool default pgp num = 2048

osd disk thread ioprio class = idle

osd disk thread ioprio priority = 7

 

osd crush chooseleaf type = 1

osd recovery max active = 1

osd recovery op priority = 1

osd max backfills = 1

 

auth cluster required = cephx

auth service required = cephx

auth client required = cephx

 

rbd default format = 2

 

##ceph35 osds

[osd.0]

cluster addr = 10.20.4.35

[osd.1]

cluster addr = 10.20.4.35

[osd.2]

cluster addr = 10.20.4.35

[osd.3]

cluster addr = 10.20.4.35

[osd.4]

cluster addr = 10.20.4.35

[osd.5]

cluster addr = 10.20.4.35

 

##ceph25 osds

[osd.6]

cluster addr = 10.20.4.25

[osd.7]

cluster addr = 10.20.4.25

[osd.8]

cluster addr = 10.20.4.25

[osd.9]

cluster addr = 10.20.4.25

[osd.10]

cluster addr = 10.20.4.25

[osd.11]

cluster addr = 10.20.4.25

 

##ceph15 osds

[osd.12]

cluster addr = 10.20.4.15

[osd.13]

cluster addr = 10.20.4.15

[osd.14]

cluster addr = 10.20.4.15

[osd.15]

cluster addr = 10.20.4.15

 

##ceph30 osds

[osd.16]

cluster addr = 10.20.4.30

[osd.17]

cluster addr = 10.20.4.30

[osd.18]

cluster addr = 10.20.4.30

[osd.19]

cluster addr = 10.20.4.30

[osd.20]

cluster addr = 10.20.4.30

[osd.21]

cluster addr = 10.20.4.30

 

##ceph20 osds

[osd.22]

cluster addr = 10.20.4.20

[osd.23]

cluster addr = 10.20.4.20

[osd.24]

cluster addr = 10.20.4.20

[osd.25]

cluster addr = 10.20.4.20

[osd.26]

cluster addr = 10.20.4.20

[osd.27]

cluster addr = 10.20.4.20

 

##ceph10 osd

[osd.28]

cluster addr = 10.20.4.10

[osd.29]

cluster addr = 10.20.4.10

[osd.30]

cluster addr = 10.20.4.10

[osd.31]

cluster addr = 10.20.4.10

 

 

#monitor addresses

[mon.ceph35]

host = ceph35

mon addr = 10.20.8.35:6789

[mon.ceph30]

host = ceph30

mon addr = 10.20.8.30:6789

[mon.ceph20]

host = ceph20

mon addr = 10.20.8.20:6789

[mon.ceph15]

host = ceph15

mon addr = 10.20.8.15:6789

[mon.ceph10]

host = ceph10

mon addr = 10.20.8.10:6789
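
As far as I know, the "osd disk thread ioprio class/priority" settings above only take effect when the OSD disks use the CFQ I/O scheduler, so that is one thing we still want to rule out. A minimal sketch to print the active scheduler per disk (the one shown in brackets):

#!/usr/bin/env python3
# Minimal sketch: print the active I/O scheduler (shown in [brackets]) for
# every sd* block device. The ioprio options only take effect under CFQ.
import glob

for path in sorted(glob.glob("/sys/block/sd*/queue/scheduler")):
    device = path.split("/")[3]
    with open(path) as f:
        print("%s: %s" % (device, f.read().strip()))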

 

Thanks for any help.

Regards

Mateusz

