All,
I am testing an all-SSD configuration with NVMe journals for a customer's first endeavor investigating Ceph for performance-oriented workloads.
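For concreteness, each OSD data device is an SSD with its journal on a partition of a shared NVMe device, set up roughly along these lines (device names are illustrative, not our exact layout):

    # one OSD per SSD, journal carved from the shared NVMe device
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    ceph-disk activate /dev/sdb1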
Can someone recommend a combination that performs well and stays reliable under high load?
Terribly high-level question, I know, but we have hit a number of issues while stress testing.
CentOS 7.1 / Infernalis / EXT4 appeared to be stable.
CentOS 7.1 (229.20 kernel) / Infernalis / XFS suffered from intermittent XFS issues, which I believe showed up as long-running / hung kernel tasks (see the snippet below for what I mean by that).
CentOS 7.2 (327.3 kernel, as I recall) / Infernalis and Jewel / XFS and BTRFS suffered the highest frequency of hung kernel tasks and false ENOSPC OSD errors.
CentOS 7.1 (229.20 kernel) / Jewel / BTRFS shows some nice performance characteristics but hangs a kernel task every few stress runs.
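By "hung kernel task" I mean the kernel's hung-task watchdog firing in the log; something like the following (the task name and PID here are illustrative):

    # typical symptom in the kernel log during a stress run
    dmesg | grep "blocked for more than"
    #   INFO: task xfsaild/sdb1:1234 blocked for more than 120 seconds.
    # SysRq 'w' dumps the stacks of all blocked tasks for a closer look
    echo w > /proc/sysrq-trigger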
Should I just punt on CentOS and go to Ubuntu 14.04 for my stated use case?
I know this is an open-ended question, but maybe someone out there has done something similar and seen the same issues.
Thanks for reading
Wade