Just an update - maybe it will help others.
I downloaded and ran the Intel firmware update ISO.
Although the tool confirmed the firmware was current (no update necessary), after I rebooted, performance was similar to the other 3 servers.
Odd, but nothing is/was making sense.
Steven
On Thu, 25 Oct 2018 at 12:32, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
Thanks for the suggestion.
However, the same "bad" server was working fine until I updated the firmware.
Now all 4 servers have the same firmware, but one has lower performance.
I will try what you suggested, though, as I said, the same server with the same NVMe had good performance before the server firmware update.
Thanks

On Thu, 25 Oct 2018 at 12:20, Martin Verges <martin.verges@xxxxxxxx> wrote:

Hello Steven,
You could swap the SSDs between the hosts to see if the problem migrates.
If it migrates, I would suspect that the affected SSD simply offers
less performance than the others - possibly an RMA reason, depending on
what the manufacturer guarantees.
If not, the system must be searched further for possible sources of error.
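To make that swap test conclusive, it helps to track the drives by serial number rather than by device path. A minimal sketch, assuming nvme-cli is installed (the osd hostnames come from the fio output later in the thread): record each drive's serial before and after the swap, so you can tell whether the low result follows the physical SSD or stays with the chassis.

```shell
# Extract the serial number ("sn" field) from `nvme id-ctrl` output on stdin.
get_serial() {
  awk -F: '/^sn/ {gsub(/ /, "", $2); print $2}'
}

# e.g., before and after swapping:
#   ssh osd02 nvme id-ctrl /dev/nvme0 | get_serial
#   ssh osd04 nvme id-ctrl /dev/nvme0 | get_serial
```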
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
2018-10-25 18:06 GMT+02:00 Steven Vacaroaia <stef97@xxxxxxxxx>:
> Hi Martin,
>
> Yes, they are in the same slot - I also checked
> the BIOS: the PCIe speed and type are properly negotiated, and the
> system profile is set to Performance
>
> Note:
> this happened after I upgraded the firmware on the servers - however, they
> all have the same firmware
>
> BAD server
> lspci | grep -i Optane
> 04:00.0 Non-Volatile memory controller: Intel Corporation Optane DC P4800X
> Series SSD
>
> GOOD server
> lspci | grep -i Optane
> 04:00.0 Non-Volatile memory controller: Intel Corporation Optane DC P4800X
> Series SSD
>
>
>
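[Plain `lspci` only proves the card is present; the negotiated link is visible in verbose mode. A minimal sketch: the P4800X is a PCIe 3.0 x4 device, so LnkSta should read "Speed 8GT/s, Width x4" on every host. The 04:00.0 address is taken from the lspci output above; run as root for full capability output.]

```shell
# Filter the link capability (LnkCap) and current link status (LnkSta)
# lines out of `lspci -vv` output read from stdin.
check_link() {
  grep -E 'Lnk(Cap|Sta):'
}

# e.g.: lspci -vv -s 04:00.0 | check_link
```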
> On Thu, 25 Oct 2018 at 11:59, Martin Verges <martin.verges@xxxxxxxx> wrote:
>>
>> Hello Steven,
>>
>> are you sure that the systems are exactly the same? Sometimes vendors
>> place extension cards into different PCIe slots.
>>
>> --
>> Martin Verges
>> Managing director
>>
>> Mobile: +49 174 9335695
>> E-Mail: martin.verges@xxxxxxxx
>> Chat: https://t.me/MartinVerges
>>
>> croit GmbH, Freseniusstr. 31h, 81247 Munich
>> CEO: Martin Verges - VAT-ID: DE310638492
>> Com. register: Amtsgericht Munich HRB 231263
>>
>> Web: https://croit.io
>> YouTube: https://goo.gl/PGE1Bx
>>
>>
>> 2018-10-25 17:46 GMT+02:00 Steven Vacaroaia <stef97@xxxxxxxxx>:
>> > Hi,
>> > I have 4 x DELL R630 servers with the exact same specs.
>> > I installed an Intel Optane SSDPED1K375GA in each.
>> >
>> > When comparing fio performance (both read and write), one is lower than
>> > the other 3
>> > (see below - read results only)
>> >
>> > Any suggestions as to what to check/fix ?
>> >
>> > BAD server
>> > [root@osd04 ~]# fio --filename=/dev/nvme0n1 --direct=1 --sync=1
>> > --rw=read
>> > --bs=4k --numjobs=100 --iodepth=1 --runtime=60 --time_based
>> > --group_reporting --name=journal-test
>> > journal-test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
>> > 4096B-4096B, ioengine=psync, iodepth=1
>> > ...
>> > fio-3.1
>> > Starting 100 processes
>> > Jobs: 100 (f=100): [R(100)][100.0%][r=2166MiB/s,w=0KiB/s][r=554k,w=0
>> > IOPS][eta 00m:00s]
>> >
>> >
>> > GOOD server
>> > [root@osd02 ~]# fio --filename=/dev/nvme0n1 --direct=1 --sync=1
>> > --rw=read
>> > --bs=4k --numjobs=100 --iodepth=1 --runtime=60 --time_based
>> > --group_reporting --name=journal-test
>> > journal-test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
>> > 4096B-4096B, ioengine=psync, iodepth=1
>> > ...
>> > fio-3.1
>> > Starting 100 processes
>> > Jobs: 100 (f=100): [R(100)][100.0%][r=2278MiB/s,w=0KiB/s][r=583k,w=0
>> > IOPS][eta 00m:00s]
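[The gap above is roughly 5% (2166 vs 2278 MiB/s). A minimal sketch for comparing runs across the four hosts numerically instead of by eye: pull the steady-state read bandwidth figure out of fio's status line.]

```shell
# Extract the read bandwidth in MiB/s (e.g. "2166" from "r=2166MiB/s")
# from fio terminal output read on stdin.
extract_bw() {
  grep -o 'r=[0-9]*MiB/s' | head -1 | tr -d 'r=MiBs/'
}

# e.g.: fio --filename=/dev/nvme0n1 ... | extract_bw
```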
>> >
>> >
>> > many thanks
>> > steven
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >