On 17/11/24 18:02, Tim wrote:
The benchmark speeds are in megabytes per second, while the quoted disk performance speeds and the port speed are in megabits per second. Looking at the performance, I might try hdparm and see what it says for raw reads and cached reads, assuming my hard disk has a cache (I haven't checked that spec yet).

> Chris Adams:
>> That's a misunderstanding of how things work. The SATA port speed is just an upper bound on transfer, but has nothing to do with how fast a device can actually read data (similar to how having a 1G network card and even 1G Internet service doesn't mean sites will serve data to you at 1G). Traditional spinning hard drives typically top out in the neighborhood of 150 MB/s... and in fact, the official spec from Seagate for that drive is an average read rate of 156 MB/s.

Is that really megabytes or megabits per second?
I have an issue at the moment that seems to have started with an update in F40: SDDM takes a long time to load after the GUI boot screen goes black, KDE takes a long time to load after the SDDM login screen goes black, and KDE then sits there thrashing my disk drives for several minutes after the desktop is displayed, reducing the performance of starting new applications. I'm trying to determine whether it really is KDE or whether my disks are not performing up to scratch.
It also doesn't help when the I/O summary display in htop says the disk reads are at 100%, while its per-process display shows nothing performing any I/O.
Yes, I understand that, but when the device specs say the device can operate at 1Gb/s, 3Gb/s and 6Gb/s, and the device is plugged into a 6Gb/s port, I expect it to operate at the faster end of that range, but a benchmark speed of 156 MB/s says that it is not.

> And the converse applies to Stephen: remember, when you're measuring one thing against another, they use two different units. Convert gigabits per second into megabytes per second and it seems far less impressive. Even more so when they mix up the usage of 1024 or 1000 as bits and bytes multipliers.
>
> Stephen Morris:
>> If that is the case, why do the specs for that device, under performance, say it will support speeds of 1Gb/s, 3Gb/s and 6Gb/s?
>
> Because big numbers are a marketing ploy... Sure, there's *something* that the SATA port can do at that speed, but it's not continuously churning your data through in the way that you'd like.
Yes, a full cache can cause performance issues, but I would expect to be able to play around with the caching algorithms to control what gets cached and what doesn't, particularly when looking at sequential vs random access combined with disk fragmentation. I know EXT4 is a journaling file system, which is supposed to make fragmentation a non-issue, but I don't know whether BTRFS is the same.

> If they put high-speed cache between the SATA port and the internal storage, they can increase data throughput up to a point (the point where the cache is filled). So it's quick for storing one or two very large files, because they're measuring the SATA data speed. But internally, the cache is transferring over to the storage medium at a much slower rate.
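The sequential-vs-random point is worth putting numbers on, since it dwarfs everything else on a spinning disk. A back-of-the-envelope sketch, using assumed figures for illustration: the 156 MB/s sustained rate quoted in the thread, and roughly 12 ms of seek plus rotational latency per random access on a 7200 rpm drive:

```python
# Rough model of sequential vs random reads on a spinning disk.
# Assumptions for illustration: 156 MB/s sustained sequential rate and
# ~12 ms average seek + rotational latency per random 4 KiB read.

FILE_SIZE = 1_000_000_000      # 1 GB of data to read
SEQ_RATE = 156_000_000         # bytes/s, sustained sequential
AVG_ACCESS = 0.012             # seconds per random 4 KiB read
BLOCK = 4096                   # bytes per random read

sequential_s = FILE_SIZE / SEQ_RATE
random_s = (FILE_SIZE / BLOCK) * AVG_ACCESS

print(f"Sequential read of 1 GB: {sequential_s:.1f} s")
print(f"Random 4 KiB reads of 1 GB: {random_s:.0f} s (~{random_s / 60:.0f} minutes)")
```

On those assumptions the same gigabyte takes seconds sequentially and the better part of an hour as scattered small reads, which is why fragmentation and access pattern matter far more than the SATA generation.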
I was doing that for a while, where I was installing Windows 10 in a RAID 10 environment provided by my motherboard. But for the OS to be installed in that environment it required drivers to be installed at OS install time, and the motherboard only provided Windows drivers, so I was running my Linux distributions in a VM. At the time, the research I did said that Fedora Workstation did not support being installed on RAID; only Fedora Server had that support.

> And maybe you could get a RAID device which has SATA ports to the PC, so it can spread the load internally across several drives and keep up with a very high data speed. I've never looked to see if anyone has actually done that.
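A quick estimate of whether an array like that could actually keep a 6 Gb/s port busy. This is only a sketch under stated assumptions: four drives at the 156 MB/s spec figure quoted earlier, RAID 10 reads served from every spindle in the best case, and writes going to both halves of each mirrored pair:

```python
# Back-of-the-envelope RAID 10 throughput estimate, next to the ~600 MB/s
# payload ceiling of a single SATA III link.
# Assumptions: 4 drives, 156 MB/s sustained each; best-case reads use all
# spindles; writes are limited to one copy per mirrored pair.

N_DRIVES = 4
PER_DRIVE = 156                         # MB/s sustained per drive

best_case_read = N_DRIVES * PER_DRIVE   # every spindle serving reads
write = (N_DRIVES // 2) * PER_DRIVE     # each write hits both mirrors

print(f"RAID 10, {N_DRIVES} drives: up to ~{best_case_read} MB/s reads, ~{write} MB/s writes")
print("SATA III payload ceiling per port: ~600 MB/s")
```

So on paper four spinning drives in RAID 10 can roughly saturate one SATA III link for reads, which is presumably the idea behind the external RAID box Tim describes.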
I know that can be an issue, and it can be compounded by deferred writes causing data to remain in the cache for longer.

> I remember when IDE went over to UDMA (same two-inch-wide [approx] fat ribbons, with twice the wires in them). It could achieve much higher data speeds across the cable, but the drive medium was a bottleneck. Then they put cache RAM in the drives, and that allowed more data to be quickly dumped across the cable to the drive, but the same problem existed: once the cache was full, you're down to the slow speed of the storage medium inside the drive.
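That "fast until the cache fills" behaviour is easy to model. A sketch with assumed figures: 256 MB of on-drive cache, data arriving at the ~600 MB/s SATA III payload rate, and the platters draining it at the 156 MB/s sustained rate from the spec quoted earlier:

```python
# How long an on-drive write cache can absorb a port-speed burst.
# Assumed figures for illustration: 256 MB cache, 600 MB/s arriving over
# the SATA link, 156 MB/s draining to the platters.

CACHE_MB = 256
IN_RATE = 600      # MB/s arriving over the SATA link
DRAIN_RATE = 156   # MB/s the medium can actually absorb

fill_rate = IN_RATE - DRAIN_RATE        # net MB/s accumulating in the cache
burst_seconds = CACHE_MB / fill_rate    # how long the full-speed burst lasts

print(f"Cache absorbs a port-speed burst for ~{burst_seconds:.2f} s")
print(f"After that, sustained writes drop to the medium's {DRAIN_RATE} MB/s")
```

On those numbers the burst lasts well under a second, after which throughput is pinned to the medium's rate, exactly the UDMA-era pattern described above.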
regards,
Steve
Attachment:
OpenPGP_0x1EBE7C07B0F7242C.asc
Description: OpenPGP public key
Attachment:
OpenPGP_signature.asc
Description: OpenPGP digital signature
-- 
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue