performance degradation every 30 seconds




I have a new 3-node Octopus cluster, set up on SSDs.

I'm running fio to benchmark the setup with:

fio --filename=/dev/rbd0 --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --numjobs=1 --time_based --group_reporting --name=iops-test-job --runtime=120 --eta-newline=1



However, I notice that, approximately every 30 seconds, performance tanks for a bit.

Any ideas on why, and better yet, how to get rid of the problem?


Sample debug output below. Notice the transitions at [eta 01m:27s] and [eta 00m:49s].
It happens again at [eta 00m:09s], but I figured I didn't need to post that redundantly.



Jobs: 1 (f=1): [m(1)][2.5%][r=43.4MiB/s,w=43.3MiB/s][r=11.1k,w=11.1k IOPS][eta 01m:58s]
Jobs: 1 (f=1): [m(1)][4.1%][r=47.3MiB/s,w=47.8MiB/s][r=12.1k,w=12.2k IOPS][eta 01m:56s]
Jobs: 1 (f=1): [m(1)][5.8%][r=48.6MiB/s,w=49.3MiB/s][r=12.5k,w=12.6k IOPS][eta 01m:54s]
Jobs: 1 (f=1): [m(1)][7.4%][r=52.4MiB/s,w=53.1MiB/s][r=13.4k,w=13.6k IOPS][eta 01m:52s]
Jobs: 1 (f=1): [m(1)][9.1%][r=54.7MiB/s,w=54.1MiB/s][r=13.0k,w=13.8k IOPS][eta 01m:50s]
Jobs: 1 (f=1): [m(1)][10.7%][r=41.5MiB/s,w=42.6MiB/s][r=10.6k,w=10.9k IOPS][eta 01m:48s]
Jobs: 1 (f=1): [m(1)][12.4%][r=51.5MiB/s,w=50.6MiB/s][r=13.2k,w=12.0k IOPS][eta 01m:46s]
Jobs: 1 (f=1): [m(1)][14.0%][r=16.6MiB/s,w=16.0MiB/s][r=4248,w=4098 IOPS][eta 01m:44s]
Jobs: 1 (f=1): [m(1)][14.9%][r=33.3MiB/s,w=33.5MiB/s][r=8526,w=8579 IOPS][eta 01m:43s]
Jobs: 1 (f=1): [m(1)][16.5%][r=47.1MiB/s,w=47.4MiB/s][r=12.1k,w=12.1k IOPS][eta 01m:41s]
Jobs: 1 (f=1): [m(1)][18.2%][r=49.6MiB/s,w=49.0MiB/s][r=12.7k,w=12.8k IOPS][eta 01m:39s]
Jobs: 1 (f=1): [m(1)][19.8%][r=50.3MiB/s,w=51.4MiB/s][r=12.9k,w=13.1k IOPS][eta 01m:37s]
Jobs: 1 (f=1): [m(1)][21.5%][r=53.5MiB/s,w=52.9MiB/s][r=13.7k,w=13.5k IOPS][eta 01m:35s]
Jobs: 1 (f=1): [m(1)][23.1%][r=52.7MiB/s,w=52.1MiB/s][r=13.5k,w=13.3k IOPS][eta 01m:33s]
Jobs: 1 (f=1): [m(1)][24.8%][r=55.3MiB/s,w=54.9MiB/s][r=14.1k,w=14.1k IOPS][eta 01m:31s]
Jobs: 1 (f=1): [m(1)][26.4%][r=44.0MiB/s,w=45.2MiB/s][r=11.5k,w=11.6k IOPS][eta 01m:29s]
Jobs: 1 (f=1): [m(1)][28.1%][r=12.1MiB/s,w=11.8MiB/s][r=3105,w=3011 IOPS][eta 01m:27s]
Jobs: 1 (f=1): [m(1)][29.8%][r=16.6MiB/s,w=17.3MiB/s][r=4238,w=4422 IOPS][eta 01m:25s]
Jobs: 1 (f=1): [m(1)][31.4%][r=9820KiB/s,w=9516KiB/s][r=2455,w=2379 IOPS][eta 01m:23s]
Jobs: 1 (f=1): [m(1)][33.1%][r=6974KiB/s,w=7099KiB/s][r=1743,w=1774 IOPS][eta 01m:21s]
Jobs: 1 (f=1): [m(1)][34.7%][r=49.5MiB/s,w=49.2MiB/s][r=12.7k,w=12.6k IOPS][eta 01m:19s]
Jobs: 1 (f=1): [m(1)][36.4%][r=49.3MiB/s,w=49.8MiB/s][r=12.6k,w=12.8k IOPS][eta 01m:17s]
Jobs: 1 (f=1): [m(1)][38.0%][r=36.4MiB/s,w=35.9MiB/s][r=9326,w=9200 IOPS][eta 01m:15s]
Jobs: 1 (f=1): [m(1)][39.7%][r=43.4MiB/s,w=43.3MiB/s][r=11.1k,w=11.1k IOPS][eta 01m:13s]
Jobs: 1 (f=1): [m(1)][41.3%][r=47.1MiB/s,w=47.1MiB/s][r=12.1k,w=12.1k IOPS][eta 01m:11s]
Jobs: 1 (f=1): [m(1)][43.0%][r=47.9MiB/s,w=48.0MiB/s][r=12.3k,w=12.5k IOPS][eta 01m:09s]
Jobs: 1 (f=1): [m(1)][44.6%][r=49.9MiB/s,w=48.8MiB/s][r=12.8k,w=12.5k IOPS][eta 01m:07s]
Jobs: 1 (f=1): [m(1)][46.3%][r=46.4MiB/s,w=46.9MiB/s][r=11.9k,w=11.0k IOPS][eta 01m:05s]
Jobs: 1 (f=1): [m(1)][47.9%][r=46.7MiB/s,w=46.4MiB/s][r=11.0k,w=11.9k IOPS][eta 01m:03s]
Jobs: 1 (f=1): [m(1)][49.6%][r=55.3MiB/s,w=55.3MiB/s][r=14.1k,w=14.2k IOPS][eta 01m:01s]
Jobs: 1 (f=1): [m(1)][51.2%][r=54.1MiB/s,w=53.2MiB/s][r=13.8k,w=13.6k IOPS][eta 00m:59s]
Jobs: 1 (f=1): [m(1)][52.9%][r=53.4MiB/s,w=52.9MiB/s][r=13.7k,w=13.6k IOPS][eta 00m:57s]
Jobs: 1 (f=1): [m(1)][54.5%][r=58.8MiB/s,w=58.0MiB/s][r=15.1k,w=15.1k IOPS][eta 00m:55s]
Jobs: 1 (f=1): [m(1)][56.2%][r=60.0MiB/s,w=58.6MiB/s][r=15.4k,w=15.0k IOPS][eta 00m:53s]
Jobs: 1 (f=1): [m(1)][57.9%][r=57.7MiB/s,w=58.1MiB/s][r=14.8k,w=14.9k IOPS][eta 00m:51s]
Jobs: 1 (f=1): [m(1)][59.5%][r=14.0MiB/s,w=14.3MiB/s][r=3592,w=3651 IOPS][eta 00m:49s]
Jobs: 1 (f=1): [m(1)][61.2%][r=17.4MiB/s,w=17.4MiB/s][r=4443,w=4457 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [m(1)][62.8%][r=18.1MiB/s,w=18.7MiB/s][r=4640,w=4783 IOPS][eta 00m:45s]
Jobs: 1 (f=1): [m(1)][64.5%][r=7896KiB/s,w=8300KiB/s][r=1974,w=2075 IOPS][eta 00m:43s]
Jobs: 1 (f=1): [m(1)][66.1%][r=47.8MiB/s,w=47.3MiB/s][r=12.2k,w=12.1k IOPS][eta 00m:41s]
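For what it's worth, the periodicity can be confirmed mechanically by pulling the read IOPS out of each status line above and flagging the intervals where they collapse. A rough Python sketch (the regex and the 50%-of-peak dip threshold are my own assumptions, based only on the `[r=...,w=... IOPS][eta ...]` field format shown in the output):

```python
import re

# Matches the IOPS field followed by the eta field in a fio --eta-newline
# status line, e.g. "[r=11.1k,w=11.1k IOPS][eta 01m:58s]". Handles both the
# plain-integer form ("r=3105") and the "k" suffix form ("r=11.1k").
LINE_RE = re.compile(r"\[r=([\d.]+)(k?),w=[\d.]+k? IOPS\]\[eta (\d+m:\d+s)\]")

def parse_iops(line):
    """Return (read_iops, eta) from a fio status line, or None if no match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    value, suffix, eta = m.groups()
    iops = float(value) * (1000 if suffix == "k" else 1)
    return iops, eta

def find_dips(lines, ratio=0.5):
    """Return eta stamps where read IOPS fall below ratio * peak-so-far."""
    peak, dips = 0.0, []
    for line in lines:
        parsed = parse_iops(line)
        if parsed is None:
            continue
        iops, eta = parsed
        peak = max(peak, iops)
        if iops < ratio * peak:
            dips.append(eta)
    return dips
```

Feeding the pasted output through `find_dips` should pick out the same transitions called out above, and the spacing of the flagged eta stamps makes the ~30-second interval easy to see.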



--
Philip Brown| Sr. Linux System Administrator | Medata, Inc. 
5 Peters Canyon Rd Suite 250 
Irvine CA 92606 
Office 714.918.1310| Fax 714.918.1325 
pbrown@xxxxxxxxxx| www.medata.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


