L. A. Walsh wrote:
Akira TAGOH wrote:
That would also work, but then we would need to support parallel updates
in fc-cache via threading; otherwise it's quite stressful to wait for.
It may also take more time if the directory structure being updated is
too deep, because there is no way to know how many subdirectories it
contains until the cache is updated. So just updating twice seems
realistic at the moment. This may be a todo task for
I just reran my dedup script on the font dir (it hardlinks duplicate
files), so fc-cache could be faster... but I don't think fontconfig
checks the inode numbers to see if there are duplicates it could skip
reading?
Does it?
^^^^----Does fc-cache at least check for duplicate inodes? I didn't
create the duplicates deliberately; they are different names I've
downloaded fonts under or had fonts renamed to. Rather than live with 7G
of dups, I wrote my dedup script, with the fonts (a copy of them!) as a
prime test ground. Having collected them over the past 15-20 years, I
ended up with a few...
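The duplicate-inode check asked about above can be sketched as follows. This is not fontconfig's code (the original Perl script isn't shown either); it is a minimal Python illustration of the idea: track each file's `(st_dev, st_ino)` pair while walking the font directory, and skip any path whose inode has already been visited, so hardlinked duplicates are read only once.

```python
import os

def unique_files(root):
    """Yield one path per distinct file under root, skipping hardlinked dups."""
    seen = set()  # (device, inode) pairs already visited
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            key = (st.st_dev, st.st_ino)
            if key in seen:
                continue  # hardlink to a file we already scanned
            seen.add(key)
            yield path
```

Since hardlinks to the same inode are byte-identical by definition, this skip is safe with no checksum needed.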
In comparison, fontconfig rebuilding that cache takes over
***14 minutes*** of CPU time (I didn't have a wall clock running
on it, so I just read the CPU time), but it was at least 14 minutes,
guaranteed (while dedup was only 40% CPU bound, it is mostly CPU bound).
----
Correction ^^ it is mostly *disk* bound...
I found that in most cases computing any sort of per-file checksum
slowed things down, except in some corner cases. I also ended up finding
at least two different file-I/O bugs in Perl having to do with large
files (2-4G).
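The observation that checksumming everything is usually a loss matches the common dedup trick: group candidates by size first, and only checksum files whose sizes collide. This is a hedged sketch of that general technique in Python (the original script is Perl and its actual logic isn't shown here); `duplicate_groups` is an illustrative name, not from the thread.

```python
import hashlib
import os
from collections import defaultdict

def duplicate_groups(paths):
    """Return lists of paths with identical content, hashing as little as possible."""
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)

    dups = []
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue  # unique size: cannot be a duplicate, never hashed
        by_hash = defaultdict(list)
        for p in same_size:
            with open(p, "rb") as f:
                by_hash[hashlib.sha256(f.read()).hexdigest()].append(p)
        dups.extend(group for group in by_hash.values() if len(group) > 1)
    return dups
```

Because most files have a unique size, most are never read at all, which is why adding checksums across the board slows a mostly disk-bound job down.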
So why isn't the cache updated "offline"?
I.e., built in a second copy and then exchanged with the active one
when it is done building?
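The "build offline, then exchange" idea is the standard atomic-replace pattern: write the new cache to a temporary file in the same directory, then rename it over the live one. On POSIX the rename is atomic, so readers always see either the old cache or the complete new one, never a half-written file. A minimal sketch (the function name and cache path are illustrative, not fontconfig's API):

```python
import os
import tempfile

def write_cache_atomically(cache_path, data):
    """Write data to cache_path so readers never observe a partial file."""
    directory = os.path.dirname(cache_path) or "."
    # Temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".cache-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes hit disk first
        os.replace(tmp, cache_path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp)
        raise
```

The slow rebuild then happens entirely on the temporary copy, and the active cache is only touched for the duration of one rename.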
_______________________________________________
Fontconfig mailing list
Fontconfig@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/fontconfig