As a contributor to a GNU/Linux distro, I’ve seen three main ways of using libraries from other projects in a program:
- using a shared library from another package
- using a static library from another package
- copying the source of another project and building it in the same package.
It’s obvious why the third way is bad. A significant problem for the Parabola GNU/Linux-libre mips64el port was that WebKit didn’t support MIPS N32. This could easily be fixed by disabling JIT support, disabling assembly support and enabling alignment of allocated memory (already done for O32 and other architectures). However, different subsets of these changes had to be made in at least three packages, one of which builds the whole of WebKit GTK twice (taking about 30 hours on my machine). It’s still possible that the code was copied into other packages, leading to more errors and long rebuilds.
More typical examples of such problems are security fixes (e.g. in libpng or zlib, which were commonly bundled with other packages despite being installed on practically every GNU/Linux system) and removal of nonfree code in FSDG-compliant distros.
Using static libraries wouldn’t solve the above problems – they would require relinking the programs (i.e. rebuilding in distros, for simplicity and reproducibility) and knowing which programs are affected.
This leads to the following advantages of shared libraries:
- a bug can be fixed by changing just one package – reinstalling the library will make the programs use it on next run
- programs specify which libraries they use (tools like readelf or scanelf can show this easily), so it’s known what is affected by a library change
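As an illustration (assuming a glibc-based system with binutils installed; /bin/ls is just a convenient target), readelf lists the shared libraries a program declares:

```shell
# Print the DT_NEEDED entries (required shared libraries) of a binary.
# /bin/ls is only an example; any dynamically linked ELF program works.
readelf -d /bin/ls | grep NEEDED
```

scanelf -n from pax-utils prints the same information in a form that’s easier to use in scripts.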
However, this assumes that the new library version is compatible with programs built for the previous one. This assumption doesn’t hold for many libraries (e.g. libpng or poppler updates require rebuilding many programs in distros that don’t support multiple library versions at once). Despite this problem, such shared libraries are still useful to avoid having multiple copies of them on disk or in memory. This is one of the reasons for GHC to support building shared libraries of Haskell packages (any rebuild of a dependency changes the ABI there). Not needing an "evil Perl script" to avoid linking unused functions and better plugin support are other reasons.
There is a problem with having both shared and static libraries for a single package: they are usually compiled differently. Static libraries and programs don’t use position-independent code (PIC), which could slow them down, while shared libraries need it on many architectures (x86_64 being the most popular of them). Therefore shipping only shared libraries for a package might make its build up to twice as fast. In distros like Parabola, where development-specific files are not split into separate packages, it would also make the packages much smaller (very useful for LiveCDs).
I know of two reasons for using static libraries in typical packages:
- it might be faster (no PIC)
- this doesn’t require having the needed shared library versions installed
I don’t know of any data supporting the first argument. The second one is completely unimportant for users of distro packages.
Therefore I think that no package in a new GNU/Linux distro should include a static library unless it’s gcc or glibc (which have reasons of their own to do so).
Ulrich Drepper wrote an article arguing that static libraries should never be used on systems with glibc. His article mentions some of the above arguments, address space layout randomization, and interesting glibc features that rely on dynamic linking.
The stali project presents the completely opposite view, giving multiple reasons why properly designed static libraries are better. I think it’s a sufficiently different case from typical GNU/Linux distros, which use glibc and carry big existing programs, that its arguments don’t apply to the cases I described.