On 14 January 2018 at 23:32, Neal Gompa <ngompa13@xxxxxxxxx> wrote:
[..]
>> I think that at least someone on the glibc team should start to
>> consider dropping the use of ld.so.cache completely.
>> This "speedup" mechanism was invented more than two decades ago, when
>> there was a problem with VFS-layer caching.
>> The same "loading time issues" drove latex to use kpathsea.
>> Both techniques today are more or less **obsolete**.
>>
>
> I'm not sure you're aware of this, but the GNU C Library serves more
> than Linux. While Linux VFS is much better than it was 20 years ago,
> other OSes are not necessarily so.

1) I'm not sure you are aware that the C preprocessor (cpp) makes it
possible to compile parts of the C code conditionally, depending on the
OS type.
2) Could you please name the operating systems on which glibc is used
NOW which have no VFS or VFS-like caching layer?

Linux has it. We are trying to discuss how to solve some Linux and rpm
(as package manager) issues. Please stick to this context only. We don't
need to solve Earth's famine problems here (at least for now).

[..]
>> BTW, there is yet another small issue with those file triggers.
>> The build process described in glibc.spec builds 32- or 64-bit
>> versions of the binaries.
>> ldconfig is part of the main glibc package, and on x86 it is possible
>> to install both glibc.i386 and glibc.x86_64.
>> When both ABI versions of the package are installed, those file
>> triggers will be executed twice.
>
> This is easy to fix. I could just do it using /%{_lib} and %{_libdir} instead.

Nope. File triggers use base paths. You cannot specify
/%{_lib}/lib*.so.* and %{_libdir}/lib*.so.* as the parameters of those
triggers.

BTW: currently /%{_lib} and %{_libdir} are the same, so only %{_libdir}
needs to be used.

$ ls -ld {,/usr}/lib{,64}
lrwxrwxrwx  1 root root      7 Dec 14 17:14 /lib -> usr/lib
lrwxrwxrwx  1 root root      9 Dec 14 17:14 /lib64 -> usr/lib64
dr-xr-xr-x. 1 root root    804 Jan  9 13:08 /usr/lib
dr-xr-xr-x.
1 root root 109892 Jan 14 03:21 /usr/lib64

In your case the rpm trigger semantics would still need to be changed,
or a new trigger type would need to be introduced. In the solution I've
described, no new code needs to be added to rpm.

Currently /sbin/ldconfig is part of the glibc package. Depending on
which package was installed/updated first and which second, the system
image ends up with either a 32- or a 64-bit /sbin/ldconfig. Depending on
which binary is present, ldconfig by default walks the /lib or /lib64
directories, plus any other directories specified in /etc/ld.so.conf.d/
files. To handle this case correctly it would be necessary to introduce
/sbin/ldconfig-{32,64}, duplicate the configuration files, and duplicate
not only /etc/ld.so.cache but the /var/cache/ldconfig/aux-cache file as
well.

I just ran a small experiment:

[root@domek]# rpm -qf /sbin/ldconfig
glibc-2.26.9000-38.fc28.x86_64
[root@domek]# file /sbin/ldconfig
/sbin/ldconfig: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d6c3aae1d69c4ae9fe307ab58bbd2bc3f892bf38, not stripped
[root@domek]# dnf install -y glibc.i686
Last metadata expiration check: 2:47:52 ago on Mon 15 Jan 2018 01:15:04 GMT.
[..]
Installed:
  glibc.i686 2.26.9000-38.fc28

Complete!
[root@domek]# file /sbin/ldconfig
/sbin/ldconfig: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d6c3aae1d69c4ae9fe307ab58bbd2bc3f892bf38, not stripped
[root@domek]# rpm -qf /sbin/ldconfig
glibc-2.26.9000-38.fc28.x86_64
[root@domek]# rpm -ql glibc.i686 | grep /sbin/ldconfig
/sbin/ldconfig
[root@domek]# rpm -ql glibc.x86_64 | grep /sbin/ldconfig
/sbin/ldconfig

This /sbin/ldconfig looks for libraries to index only in /lib64. With
both glibc packages installed, only 32-bit or only 64-bit libraries will
be indexed, no matter whether both glibcs carry a file trigger for /lib
only, for /lib and /lib64, or for any other combination of paths.
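For readers not familiar with rpm file triggers: they are declared
against literal path prefixes, which is why the /%{_lib} vs %{_libdir}
distinction cannot disambiguate the two ABI variants. A minimal sketch
of what such a trigger looks like in a spec file follows (an
illustration only, not the actual glibc.spec content):

```
# Hypothetical sketch, NOT the real glibc.spec trigger. The path after
# "--" is matched as a literal prefix against installed file paths; rpm
# does not accept glob patterns like /lib64/lib*.so.* here.
%transfiletriggerin -- /lib64
# Runs once at the end of any transaction that installed or updated
# files under /lib64. Whichever arch's /sbin/ldconfig happens to be on
# disk at that moment performs the indexing.
/sbin/ldconfig
```

Because the trigger body simply executes whatever /sbin/ldconfig is
currently on disk, the single shared binary decides which ABI's
libraries actually get indexed, as the experiment above shows.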
In the scenario I've presented, none of those things will happen,
because there would be no ld.so.cache whose contents need to be
maintained. In your case, handling a shared 32/64-bit ABI environment
would require introducing some amount of new C code, while in my case
the complete solution consists almost entirely of *removing existing
code* and, on the packaging layer, removing (only) all ldconfig
executions in rpm scriptlets. (Again: .. Occam's Razor.)

I can only repeat that, more than two decades after ld.so.cache was
introduced, using this file now only slows down run-time linking.

>> All this is the result of another rpm weakness: all global actions
>> (triggers, in rpm semantics) must be defined not in the package
>> manager's own set of triggers, but only in packages.
>
> It's pretty trivial to do that behavior. For example, we could just
> have a posttrans trigger that runs a program that decides all the
> things based on what the filesystem looks like (Solus does this with
> usysconf[1]). But that means the tool needs to know how to decide what
> to run.
>
> [1]: https://github.com/solus-project/usysconf

Sorry, my comment was about a package manager, not a universal
configuration tool.

>> Just for comparison .. IPS has a strict and finite set of analogues
>> of triggers, called actions.
>> ***NONE*** of the package definitions contains any action
>> definitions, and everything works perfectly, so whoever is
>> responsible for building packages has no opportunity to mess in this
>> area BY DEFINITION.
>> BTW .. IPS has no possibility at all to add any post/postun/trans
>> scripts. IMO it is only a matter of time before rpm developers spot
>> that all this stuff in spec files is only a constant source of
>> problems. The people who implemented IPS, moving away from SysV
>> packages, came to this brilliant observation a DECADE ago.
>> It may take the next decade on Linux to "reinvent the wheel", but I'm
>> 100% sure that all those scripts embedded in packages will sooner or
>> later be removed.
>>
>
> If we wanted less scriptlets in packages and singular set of triggers
> across the board, it would technically be possible. But the penalty
> for that is that the package manager must figure out how to evaluate
> what to run. You don't get to get away from that logic.

My comment was not about what "we wanted", or about what is or is not
possible from any person's point of view. I only mentioned that some
people, after analyzing the existing cases, came to the conclusion that
scriptlets and custom pre/post-installation actions are no longer
needed, and that 100% of the needs can be handled using a finite set of
actions HARD-CODED into the package manager.

Nevertheless, this part is off-topic and out of the scope of what we
need to discuss here. I've only covered yet another scenario not covered
by what was discussed up to now. Please do not continue commenting on
this part, because it is not relevant to the scope of ld.so, ldconfig,
ld.so.cache and what needs to be done in the rpm specs area.

>> Conclusion: so far I've been actively supporting adding the glibc
>> file triggers by trying to add my comments to the discussion in
>> bugzilla tickets. However, as I have quite deep knowledge of other
>> operating systems in my head, I now see a WAY better solution which
>> does not increase the current entropy and is Occam's-Razor-compliant
>> (https://en.wikipedia.org/wiki/Occam%27s_razor).
>>
>> So again, this solution should consist of:
>> - remove the use of ld.so.cache by ld.so
>> - remove ldconfig and all /sbin/ldconfig calls in all spec files
>> - add crle or a crle-like command (knowing Linux NIH, it will
>>   probably end up being a crle-like command)
>>
>
> This is not a particularly helpful comment. Solaris' crle is not
> demonstratively better than current ldconfig configuration via drop in
> directories.
> Personally, I prefer the latter because you can preload search paths
> pretty trivially without requiring an execution environment.

Sorry, I don't get it. Could you please show this with some example? Are
you sure you know what crle does and how ld.so works on Solaris or *BSD?

kloczek
--
Tomasz Kłoczko | LinkedIn: http://lnkd.in/FXPWxH
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx