Hi Sean,
Thanks for your willingness to help.
I used RAID 0 because HBA mode is not available on the PERC H710.
Did I misunderstand you?
How can you set the RAID level to NONE?
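For anyone wanting to reproduce my layout: each physical disk here is exposed as its own single-drive RAID 0 virtual disk, which can be created with megacli along these lines (the enclosure:slot IDs below are only examples - check megacli -PDList -a0 for the real ones):

megacli -CfgLdAdd -r0 [32:0] -a0
megacli -CfgLdAdd -r0 [32:1] -a0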
Running fio with more jobs provides results closer to the expected throughput (~450 MB/s) for the SSD drive:
fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=20 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 20 processes
Jobs: 20 (f=20): [W(20)] [100.0% done] [0KB/400.9MB/0KB /s] [0/103K/0 iops] [eta 00m:00s]
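For comparison, the equivalent single-job run (roughly the sync write pattern a journal/WAL device sees) would be the same command with --numjobs=1:

fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test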
Steven
On 31 January 2018 at 11:25, Sean Redmond <sean.redmond1@xxxxxxxxx> wrote:
Thanks

Did you test the SSD in another HBA mode server / desktop to show this is only the case when using the PERC?

I have not used the SSD you are using. Did you manage to hunt out anyone else using the same one to compare the fio tests?

Hi Steven,

That's interesting, I use the same card, but I do use NONE RAID mode. This is a historical decision that was made, so not much to share with you on that. Maybe worth doing a fio test of RAID 0 vs NONE RAID mode to see what the difference is, if any.

On Wed, Jan 31, 2018 at 3:57 PM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:

RAID 0

Hardware
Controller
ProductName        : PERC H710 Mini (Bus 0, Dev 0)
SAS Address        : 544a84203afa4a00
FW Package Version : 21.3.5-0002
Status             : Optimal
BBU

On 31 January 2018 at 10:48, Sean Redmond <sean.redmond1@xxxxxxxxx> wrote:

Thanks

Are you exposing the disks as individual RAID 0 or in NONE RAID mode?

Hi,

I have seen the Dell R730XD being used with a PERC controller extensively with ceph and have not had any real performance issues to speak of. Can you share the exact model of PERC controller?

On Wed, Jan 31, 2018 at 3:39 PM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:

Hi,

Is there anyone using DELL servers with PERC controllers willing to provide advice on configuring them for good throughput performance?

I have 3 servers with 1 SSD and 3 HDDs each.
All drives are enterprise grade.

Connector   : 00<Internal><Encl Pos 1 >: Slot 0
Vendor Id   : TOSHIBA
Product Id  : PX04SHB040
State       : Online
Disk Type   : SAS, Solid State Device
Capacity    : 372.0 GB
Power State : Active

Connector   : 00<Internal><Encl Pos 1 >: Slot 1
Vendor Id   : TOSHIBA
Product Id  : AL13SEB600
State       : Online
Disk Type   : SAS, Hard Disk Device
Capacity    : 558.375 GB
Power State : Active

Created OSDs with separate WAL (1 GB) and DB (15 GB) partitions on the SSD.

rados bench is abysmal.

The interesting part is that testing the drives with fio is also pretty bad - that is why I am thinking that my controller config might be the culprit.

See below the results using various configs.

Commands used:
megacli -LDInfo -LALL -a0
fio --filename=/dev/sd[a-b] --direct=1 --sync=1 --rw=write --bs=4k --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

SSD drive
Current Cache Policy: WriteThrough, ReadAheadNone, Cached, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/125.2MB/0KB /s] [0/32.5K/0 iops] [eta 00m:00s]
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/224.8MB/0KB /s] [0/57.6K/0 iops] [eta 00m:00s]

HDD drive
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/77684KB/0KB /s] [0/19.5K/0 iops] [eta 00m:00s]
Current Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/89036KB/0KB /s] [0/22.3K/0 iops] [eta 00m:00s]

rados bench -p rbd 120 write -t 64 -b 4096 --no-cleanup && rados bench -p rbd 120 -t 64 seq

Total time run:         120.009091
Total writes made:      630542
Write size:             4096
Object size:            4096
Bandwidth (MB/sec):     20.5239
Stddev Bandwidth:       2.43418
Max bandwidth (MB/sec): 37.0391
Min bandwidth (MB/sec): 15.9336
Average IOPS:           5254
Stddev IOPS:            623
Max IOPS:               9482
Min IOPS:               4079
Average Latency(s):     0.0121797
Stddev Latency(s):      0.0208528
Max latency(s):         0.428262
Min latency(s):         0.000859286

Total time run:       88.954502
Total reads made:     630542
Read size:            4096
Object size:          4096
Bandwidth (MB/sec):   27.6889
Average IOPS:         7088
Stddev IOPS:          1701
Max IOPS:             8923
Min IOPS:             1413
Average Latency(s):   0.00901481
Max latency(s):       0.946848
Min latency(s):       0.000286236

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com