
It also doesn't help that Rust keeps pushing the idea that "static linking is the only way to go". This is another cargo-cult which I wish didn't end up being engrained so deep in the toolchain, because while it has some merits, it also has significant drawbacks on a typical unix distribution.

Static linking might be good for folks distributing a single server binary over a fleet of machines (pretty much like Go's primary use case), or a company that only cares about a single massive binary containing the OS itself (the browser), but it stops "being cool" very quickly on a typical *nix system if you plan to have hundreds of rust tools lying around.

Size _matters_ once you have a few thousand binaries on your system (mine has just short of 6k): think about the memory requirements of caching vs cold-starting each of them, and about how patching for bugs or vulnerabilities will pan out. There's a reason dynamic linkage was introduced, and it's _still_ worth it today.

And rebuilding isn't cheap in storage requirements either: a typical cargo package will require a few hundred megabytes of storage, likely for _each_ rebuild.

Two days ago I rebuilt the seemingly innocuous weechat-discord: the build tree takes about 850mb on disk, with a resulting binary of 30mb. By comparison, the latest emacs from git rebuilds in 1/10 of the time with all features enabled; the build tree weighs 150mb (most of which is lisp sources+elc) and results in a binary of 9mb, not stripped.




I don't think the position of the language teams is that static linking is better than dynamic linking. I think that static linking is a significantly easier target for a relatively new language whose _novel_ features make it difficult to define a stable ABI.

Under the very specific constraints that Rust is operating in, static linking is currently the best option.

That said, I wouldn't mind seeing some kind of ABI guarantees that make dynamic linking possible. https://gankra.github.io/blah/swift-abi/ is a great, accessible read on some of the challenges of making a stable ABI, so I certainly don't expect it SOON.

I do wonder if there is some scheme that could be adopted in a performant way for making the ABI stable as of the Rust Edition, even if it were only for core and std, statically linking everything else?


Deployability really should be king. I hinted at this in my other post, but I don't think quite enough people think about this.

Static linking makes your deployability worries go away. That's not to say it isn't at least possible with dynamic linking, but the complexity and gymnastics will sink you. Everyone gets bit by it eventually.


Eh, even with Rust's static linking, that's not a guarantee that there will be no worries with deployment. Most SSL libraries in Rust, for instance, currently link to (Open/Libre)SSL dynamically. So you often can't take a Rust binary compiled on a distro with one version of the SSL library to another distro with a different version.
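If you want the static story to hold there too, the openssl crate does have a "vendored" feature that builds and statically links OpenSSL instead of picking up the system one; a minimal Cargo.toml sketch (version number illustrative):

    [dependencies]
    openssl = { version = "0.10", features = ["vendored"] }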


So you're saying that static linking isn't a guarantee, because the build was polluted with dynamic linking.

Isn't that furthering my point?


Much of the ecosystem seems to be trending towards using rustls though, where this is of no concern.
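With reqwest, for example, it's usually just a feature flag (a sketch; version number illustrative):

    [dependencies]
    reqwest = { version = "0.11", default-features = false, features = ["rustls-tls"] }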


Actually openssl is pretty compatible between versions, usually not an issue in the real world.


Just to drive the point home.

If all of your engineers die in a fiery plane crash en-route to the company offsite, or your datacenter is wiped out in a flood, at least you have your statically linked binary that can run on commodity servers somewhere.

You have the peace of mind of knowing that your code as built should be able to run somewhere else in its current state without modification. You don't have to worry about the package availability of something that may have been around when you shipped your servers but may not be when you go to ship them again, or something that only coincidentally worked because your systems were installed via a certain upgrade path that's no longer reproducible.

It's a simple matter of business risk and minimizing surprises.


> Deployability really should be king.

Not everyone values deployability as much as you (or employers, I suspect) do. On my personal computer I care about getting security patches and disk usage.


Lots of people don't measure risk very well and get hit by black swan events.

Over 65% of startups have less than 6 months of cash reserves right now and 74% have been laying off staff. It turns out the majority of people are poor long-term planners.

I personally don't care about how you manage your personal workstation, but you're not who most of us are building for. Most of us aren't building tools to support you. We're writing a big ecosystem for everyone to collaborate.

In a professional setting, problems with how people manage their computers aren't acceptable. Any decently-sized company will get rid of such a problem quickly. In smaller companies I've seen people get fired over poor management of their workstation's environment.


Not everyone who uses software is a software engineer…


Not everyone who uses a hammer is in the trades either, but that's who hammers are designed for because that's who is buying.


I think most of the people who use computers are not software engineers.


The computer isn't the tool here, the context of our conversation is about software.


Deployability is the job of the package manager, not the compiler. You can always link dynamically, ship all the libraries alongside the binary, and set rpath accordingly.
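A rough sketch of that approach, with illustrative paths; $ORIGIN is expanded by the dynamic linker at load time to the binary's own directory:

    myapp/
      bin/myapp        # linked with -Wl,-rpath,$ORIGIN/../lib
      lib/libfoo.so
    # with cargo the flag can be passed through to the linker:
    RUSTFLAGS='-C link-arg=-Wl,-rpath,$ORIGIN/../lib' cargo build --release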

Also: static linking is not deployability if the source code is not shipped to the target - you don't have _my_ version of a library which is patched to support my hardware.


Static linking is more ergonomic for developers.

Dynamic linking is a better use of user resources, which is sort of ergonomics.

To say one metric should be king without context is missing the point.


How much do your programs really share? After libc, libm, and pthreads, the most common thing they link to is probably pcre, and I'm sure you can guess how many of your programs are using that.

A good linker will shave off the parts of a library you're not using, and the parts which are left over are usually not very big. The problem isn't with static linking, it's that some "developers" think that bundling an entire Chromium build with their app is a good idea.

Rust has a problem with big binaries (so does Go), but that's Rust's problem, not static linking's problem.
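For what it's worth, a fair chunk of the size can usually be clawed back with the standard release-profile knobs (a sketch of common settings, not a prescription):

    [profile.release]
    lto = true          # cross-crate link-time optimization
    codegen-units = 1   # better optimization, slower build
    opt-level = "z"     # optimize for size
    strip = true        # drop symbols (needs a reasonably recent cargo)
    panic = "abort"     # skip the unwinding machinery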


Dynamic linking is pretty common in GUI applications (xlib, GTK/QT). Server-side there's also libssl, xml libraries, zlib, curl.


Outside of OS distro packaging, most Qt apps actually copy dynamically linked Qt binaries to the same app folder for re-distribution to end users within some form of installer or disk image. The same probably applies to Gtk apps too, as I think the last time I manually installed Gtk for Windows for a Gtk app was a decade ago.

Dynamic linking only works if you can guarantee ABI stability, and folks haven't had to deal with ABI changes since C++11, to the point where if the C++ folks can't change the ABI by C++23 we will forget it was ever a problem because we'll have made the cost of change too high. And the current C++ ABI makes some parts of C++ executables sub-optimal unless you hunt down your own standard-library alternatives.

Additionally, with dynamic linking, both code authors and code users now need to agree on which versions to support, under the assumption that newer features in newer libraries aren't worth adopting quickly. To that end, some OSes do update dynamic libraries more quickly, but doing so theoretically requires a lot more recompilation, and potentially you're downloading the same binaries more than once. At that point, dynamic linking is worth less than a binary-optimized compression algorithm, no? Especially for distributing changes to said executables.

Which isn't to say that dynamic linking is bad for OS distros, far from it: it tends to be the only valid solution for programming against OS core components. But in our haste to update dynamic libraries independently of the code compiled against them, we tend to forget ABI compatibility and the cost of maintaining API compatibility across a wide variety of dependency versions for packagers and developers (or alternatively, the lack of updates to new library features for end users).

Windows APIs never changing is the reason Windows stagnates more than macOS, where Apple is less afraid to say your older app simply won’t run any longer. Linux suffers less from this, but as pointed out in the email, part of that is because POSIX implementations are relatively stable over decades, whether or not significant improvements are still possible for more modern UX or security, for example.

The details of dynamic linking on OS platforms can be found in this recent series of posts: https://news.ycombinator.com/item?id=23059072 (in terms of stability guarantees besides libc dynamic linking)


It is also one of the reasons why macOS is not welcomed by enterprise IT for large scale deployments.

You can have dynamic linking with ABI stability with stuff like COM and UWP.

It is also the only viable way to do plugins in scenarios where IPC is too costly.


Wouldn't GUI bindings link to the underlying system library anyway, even in Rust? I think those Rust bindings are thin enough where having them as an additional dynamic library would not be helpful. Similar for the server side - in practice, Rust-based solutions are quite rare and ones that don't expose a stable C ABI for ease of using from C/C++ are even rarer.


> bundling an entire Chromium build with their app is a good idea.

Does Chromium offer any other type of linking? I mean, is the problem in the developer including it when another option is possible or the fact that no other option is possible?


The problem is reaching for Chromium to solve any problem in the first place.


But chromium offers a lot of relevant functionality. If I remember correctly one of the alleged reasons Microsoft moved to a chromium browser was to actually sort of include it in the OS and have electron apps link to it dynamically.

But until that happens, the chromium bloat is 100% independent of the static linking bloat.


On my computer, there's the standard libraries and frameworks that provide things like GUI widgets, cryptography routines, and access to various user databases.


> I wish didn't end up being engrained so deep in the toolchain

For what it's worth, rustc does support dynamic linking, both for system dependencies and crates. You can compile a crate with the crate type `dylib` to produce a dynamic rust library, which can then be linked against when producing the final binary by passing `-C prefer-dynamic`.
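A minimal sketch of what that looks like with plain rustc (file and crate names here are hypothetical):

    rustc --crate-type dylib mylib.rs
    # produces libmylib.so; link the final binary against it dynamically:
    rustc -C prefer-dynamic -L . --extern mylib=libmylib.so main.rs
    # note the result also links libstd dynamically, so it only runs
    # against the exact toolchain that built it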

I don't think it's possible to make this work with cargo currently, but there are a couple open issues about it (albeit without much activity).

And of course, rust not having a stable ABI means the exact same rust compiler (and LLVM version, I suppose) would need to be used when compiling the dynamic libraries and final binary.


Generics in Rust make this difficult, not to mention the lack of a stable ABI. I would think it's still possible, but I'm not sure just how much easier static linking is compared to dynamic linking.

There has been work done on using sccache for cross-crate build artifact caching to reduce the footprint when building many binaries.
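For reference, hooking it in is just a compiler-wrapper setting (a sketch):

    RUSTC_WRAPPER=sccache cargo build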


I don't think static linking is a cargo cult at all. It's just a simple and good enough solution, and there hasn't been enough interest in supporting anything else well. If it showed up as an articulated obstacle, you'd see movement to resolve it.


I can see your point, but to give you a little more perspective on the breadth of the problem domain:

I wish they went as far as Go with their ability to produce fully static binaries and allow for cross compilation. Go's ability to have one CI pipeline running on Linux that then produces binaries that run on every conceivable version of Linux, Mac and Windows is a huge productivity boost.

For most of my use cases, they don't go far enough with static linking!


Note that Go implements most of its toolchain itself, while Rust uses parts of the C/C++ toolchain (LLVM, a C++ linker instead of Go's own linker, etc). Go even has its own library loader. Once you have that, it's not hard to compile to any target you want.

Also Rust is still evolving. They still have to add major new features to the compiler instead of being able to implement features like raw dylibs which would remove the need to include .lib files.


The problem is not the toolchain itself, it's the runtime. Go's runtime interfaces directly with OS kernels, while Rust links against glibc, etc. I wish the Rust community would invest in a runtime that interacts with the kernel directly in the same way Go does. Current approaches aim to implement drop-in libc replacements in Rust. There are also the musl targets, which could make building static binaries easier. A completely rusty approach without going through the libc interface would have tons of benefits, though!
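The musl route already works today for the simple cases; a sketch, assuming the target has been added via rustup:

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    # yields a fully static binary for pure-Rust dependency trees;
    # C dependencies still need a musl cross toolchain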


I meant to include the runtime in the "toolchain" term. See how I spoke about the library loader that Go implements.

The libc is only part of the greater issue of community tolerance of C components in the stack, like openssl or host-OS TLS implementations, whereas Go mainly seems to use the TLS implementation written by the language creators. There is rustls, but it's not regarded as good enough and the defaults are native-tls or openssl. And even rustls uses C components, as it builds on ring which has C components of its own...


Cargo cult - what a great pun!


You can find many articles on the origin of the term. I like this one from 1959:

https://www.scientificamerican.com/article/1959-cargo-cults-...


You're being downvoted because you missed the pun. Everyone knows the term "cargo cult". The pun is combining that with the fact that the Rust package manager is named "Cargo".


Okay, thanks for telling me.

Not everyone knows this term, surely.

And I didn't know the Rust package manager is called "Cargo".


Dynamic linking is just not very useful for Rust due to the lack of a stable ABI. So yes, you can build shared libraries and link to them, but you'll be using the C ABI at the interaction boundary, which some devs might find unintuitive. (Or else you have to make sure that everything you link to is compiled by the same version of rustc and LLVM, which is not very realistic.) Static linking sidesteps this potential issue.
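For the curious, that boundary then looks something like this (a tiny sketch):

    // anything crossing the shared-library boundary goes through the C ABI
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }
    // built with crate-type = ["cdylib"]; generics, trait objects, &str etc.
    // can't cross this boundary directly and need C-compatible wrappers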


A systems language should support everything, regardless of how useful it might be.

Unless it wants to leave some scenarios open for the systems programming languages that offer tooling for them.


It starts being cool way more quickly on typical *nix systems as a developer.

Due to subtle binary incompatibilities in shared libraries, a CD pipeline I maintain costs twice as much money to run in order to target macOS and Debian systems. It's not "worth it" to me.

Static linking is a cornerstone of what I'd call "deterministic deployment." The cost savings of deterministic deployment are so immense that after reading your comment and reacting to it, I'm tempted to estimate how much money dynamic linking will cost my business this year and how much money we'd save by reworking our build tree to compile our dependencies from source and maintain static libs on the targets we care about.

>And rebuilding isn't cheap in storage requirements either: a typical cargo package will require a few hundred megabytes of storage, likely for _each_ rebuild.

*shrug* I don't care about build storage requirements until we start talking tens to low hundreds of GB and disk I/O becomes the time bottleneck or caching costs real money in CI.


Suppose a closed-source program you use has a critical vulnerability in rustls and the company that wrote it is out of business. How much effort do you need to hot-patch the binary in a safe manner? I routinely use 15-year-old software which is only usable by replacing libraries with their recent counterparts and renaming them to masquerade as the old ones. And I do prefer to have access to old files.


I'm not much for Rust, but I'm essentially a deploy/tooling engineer supporting a build system for hundreds of engineers and tens of thousands of servers. I've worked in a few dozen languages at this point and have seen 20 years of the problems in this field.

I'm going to tell you that static linking is the only way to go.

The idea that needs to die is the personal workstation. It's just another deploy target. Ideally whatever is doing builds is reproducible and throwaway. Leaving those artifacts around on disk introduces its own host of problems.

Disk is cheap and you shouldn't have tons of binaries on your systems for no good reason.

(Aside from this though, I'm in full agreement with Theo.)


> Disk is cheap

Arguable, but I'll grant you this. But RAM ain't cheap.


Single Responsibility Principle, except applied to infrastructure. We're in a world where both virtualization and containerization are the norm, and with either flavor this is easy.

Each of your virtualized/containerized systems is going to have its own distinct instances of the shared libraries, so you're getting a fraction of the benefits of dynamic linking if you're doing your infrastructure properly today anyway. Stop keeping pets.

How many of these big binaries are you running on your servers at once?


> Each of your virtualized/containerized systems is going to have its own distinct instances of the shared libraries,

Certainly not mine, my memory deduplication is doing just fine.


We've had various mechanisms for this, and some of them have even been deprecated. Transparent Page Sharing was deprecated in VMware as it turned out not to be super useful with large pages. We have Kernel Samepage Merging, but many folks are disabling it in a post-Meltdown/post-Spectre world as the potential vulnerabilities aren't worth it.


It's a lot cheaper than it's been in the past, especially for what we're talking about: people are using more data these days, but the kinds of system shared libraries where common page mapping helps have seemed like diminishing returns for years, both as a fraction of private data size and because applications don't sync their dependencies, so you end up with multiple versions of each library cutting into the savings.


It is certainly possible to use Rust with COM, like Rust/WinRT [1].

I'm currently building Firefox; it takes ages. It is a marvel that such complexity is still manageable on consumer hardware. Emacs is from another age.

[1] https://blogs.windows.com/windowsdeveloper/2020/04/30/rust-w...


The rationale is that rust can only give any guarantees at all if the resulting binary is self-contained. A shared library swap can break everything that the compiler guaranteed during compilation. This fits nicely with the generally restrictive mindset of the rust designers.


This is not "the rationale" whatsoever.

Stable ABIs are hard, and we have a lot of other work that's a higher priority for our users. There is no ideological opposition to dynamic linking, only practical blockers.



