If #git makes #Rust mandatory, it will block future git versions from being ported to our niche platform. While this would not immediately lock us out of repos (the current version will likely continue to work fine for some time), it would eventually complicate access (all git work would need to be routed through some proxy setup or similar).
Needless to say I'm not thrilled by this idea.
I am not against Rust. I am against a breaking change that leaves everyone not embracing Rust behind.
https://lore.kernel.org/git/20250904-b4-pks-rust-breaking-change-v1-0-3af1d25e0be9@pks.im/
@harrysintonen A platform that needs to run dev tools itself (rather than being a compilation target), but isn't supported by LLVM seems to be a niche of niches.
@harrysintonen There is Rust for m68k: https://doc.rust-lang.org/nightly/rustc/platform-support.html
Rust added PowerPC support 9 years ago. You can contribute whatever linker/ABI/libstd back-end is needed for MorphOS.
There is Rust for Dreamcast too: https://dreamcast.rs/setup.html
@kornel I appreciate the advice and the enthusiasm. However, there are problems that are not obvious to an external observer.
Currently, the only realistic way we ever get Rust is via gccrs. In fact, I can compile a working minimal binary with gccrs (while calling the C library runtime). This support is far from ready for prime time. Also, considering the outright hostility towards the gccrs project, I don't have much confidence in it ever reaching a truly usable state.
@harrysintonen What's stopping you from using LLVM's PPC codegen? rustc can emit .o files that GNU linkers usually understand. Are you blocked by ABIs, relocs, pre-main startup code?
I wasn't aware of hostility towards gccrs, but even in an ideal world a new front-end is a huge undertaking, and won't be ready anytime soon. But rustc+llvm and rustc+gcc backend are there already.
(BTW, I've had Pegasos 1 w/April, but I never needed to look into executable formats).
@kornel Our backend isn't vanilla PPC elf, so it requires a fair bit of massaging to get things going (gcc and binutils alone were a huge undertaking). We currently don't have modern llvm, and are stuck on a very old version.
@harrysintonen Do you need big changes on the codegen side? I imagined you would cross-compile from modern vanilla LLVM, using a custom rustc target (which lets you tweak ABIs, reserved registers, mangling, etc.), generate .o files, and then use your platform-specific gcc to make executables from them.
@kornel diff to gcc 15.2 is about 6000 lines and diff to binutils 2.45 is nearly 5000. So not insignificant by any measure.
@harrysintonen @kornel Is the "upstream first" idea not popular with the MorphOS team, or something?
Maintaining such a patchset must take some time...
The usual "how to eat an elephant" applies?
Split it into parts, triage, start with the easier ones?
@harrysintonen there is a GCC backend for Rust (a separate project from gccrs), so if you have codegen working in modern GCC, it could work there too.
For LLVM a custom r13-relative addressing seems like a harder problem. Maybe you could mark r13 as always reserved and live with less efficient code?
@kornel We of course employ cross-compiling. Still, being left out in that manner is unfortunate, as we try to offer as complete and modern a native SDK as possible.
I fully acknowledge that this is something we just have to live with. We are doing our own thing, and we can't possibly expect everyone else to cater for our platform quirks.
However, there are other platforms far from being this obscure that will be hit as well: Various legacy Debian platforms (alpha, hppa, m68k and sh4) and NonStop, AIX and Solaris come to mind at least. There are likely more.
@harrysintonen @kornel A couple of years back, there was even a proposal to change the m68k Linux ABI to cater for the internal limitations of Rust and LLVM, which were ill-equipped to deal with the requirements of this - admittedly historic - architecture.
Being a compiler hacker myself, to me this meant: it is seriously immature, not ready for prime time. But for people pushing Rust, throwing away 4 decades of compatibility was completely justified, because fixing the world was more important.
@harrysintonen @kornel I think the problem was that it was a 32-bit platform with a natural alignment of less than 4 bytes - 2 bytes in the case of m68k - and accommodating this needed very deep changes. But don't quote me on this. Maybe it has been fixed since; no idea, I stopped caring.
We obviously do not speak the same languages nor share most values with these people. I tried to have an open mind, but the way it is pushed just makes me resent the whole thing. Maybe "I'm too old for this shit."
@chainq There's definitely a mismatch of values, and Rust is future-looking rather than backwards-looking.
Rust is 10 years past 1.0 release. The implementation is quite mature in what it targets.
Lack of obscure/legacy things is immaturity only if you assume having them is a goal, but Rust prioritized other things, like safe multithreading, early WASM support, modules, error handling, metaprogramming, UB prevention/detection. From Rust's perspective these are woefully immature hacks in C.
@kornel Spare me the buzzword bingo, please. I'm old enough to have heard a similar list from at least half a dozen "this is the future" tech/language advocacy group, over the past 3 decades.
Every revolution and their leaders in history thought the losses it causes are temporary, and justified for a better brighter future. And because they're building a better and brighter future, they're morally justified to make the choices for everyone. This is no exception, it seems.
@kornel @harrysintonen Sorry, but that's bullshit. For me, this Git change is so catastrophic that I might have to move to something else. I run Git on a lot of old platforms. Because I support them. Which means I develop on them. Which means I use Git to track my changes.
PowerPC macOS, AmigaOS, MS-DOS, HP-UX, Irix, QNX, to name just a few. And that is only a fraction of the list. These work perfectly well for editing source code, compiling natively, committing the changes to Git and pushing them. That all works today. And it will stop working for no reason other than that Rust people a) have to infect every single project and b) seem to get some weird pleasure out of dictating to people what platforms they have to use. The "rewrite it in Rust for no reason" was bad enough (especially when it wasn't even something security-critical and would just introduce new logic bugs) - but apparently that did not force Rust on enough people, so now it needs to be pushed into every single C project with as much force as possible, and screw people on other platforms - how dare they use anything other than modern macOS, Linux or Windows? "It's not what I use, so fuck those people!"
I am sure there will now be mentions of gccrs. Rust people love to point out that this will fix all these portability issues. Meanwhile, Rust people outright bully and harass gccrs people for "how dare you add fragmentation to the ecosystem?".
Sorry if I sound bitter. It's because I am. I liked Rust when it was new. But then Rust people decided to make half of my computers unusable. First by introducing Rust to librsvg, so I can no longer run a desktop. Then by forcing it into python-cryptography, so I could not run server stuff there anymore, either. And now they're trying to force it into Git, making the few remaining computers Rust people had left usable unusable as well. I really wanted Rust to take off in the beginning and liked the language. But this constant and repeated behavior has made me fight it with everything I've got.
If you want an SVG library in Rust, write a new one.
If you want a cryptography library for Python in Rust (for some reason), start a new one.
If you want a DVCS in Rust that is Git-compatible, write a new one. (The OpenBSD people did exactly that, so it's not impossible. They did not try to force their world views on others.)
@js Yeah, I see that it is terrible for your chosen platforms. But they _are_ niche and discontinued (except QNX, which also has Rust). That's a tough spot to be in. Projects weigh costs/benefits. When supporting old platforms and old compilers costs devs' time or productivity, it is an opportunity cost affecting all users, but a benefit for vanishingly few.
You prefer using retro computers, but Git devs prefer having benefits of a modern language.
"Lack of obscure/legacy things is immaturity only if you assume having them is a goal, but Rust prioritized other things"
You illustrate the problem perfectly here: thinking that it's either/or is a broken way of thinking.
Even if it were the case that prioritizing certain things precludes others (which it most certainly isn't, unless you're doing things incorrectly), this is precisely why Rust is inappropriate for universal adoption.
If a certain corporate-led group gets to decide what is "obscure/legacy", then people either have to maintain their own forks of Rust, or we just have to blindly accept that things that aren't popular enough, or that don't get enough attention from corporations, will be deemed "obscure/legacy".
@kornel @chainq This is bad for the planet. We should make good use of our computing resources and not intentionally landfill them. m68k is an example of how we can maintain support for non-mainstream platforms if we want, yet people who know nothing about m68k would love to kill it for emotional reasons. We don't need this at all. We're already seeing this with Debian and i386.
So when people want to lump things into an "obscure/legacy" category and suggest that the problem isn't Rust, that just means you're cheerleading for reducing sustainability.
@AnachronistJohn I agree about making good use of existing hardware in general, but the platforms in question are extreme outliers. They have less compute than a smartwatch, but use relatively far more energy. Eventually the extra energy used exceeds the ecological impact of making new, efficient hardware.
Enjoy retro platforms if you like, but please don't stretch it into a moral obligation.
I've used 68k Amiga as my main computer until ~2005. I know very well what I'm leaving behind.
@chainq @kornel Here's a good example. How many people are even aware of the existence of SuperH? Do you know about SuperH? People who advocate for dropping i386 make claims of "cost" and "developer time" - of course they'd advocate for dropping support for processors they're not even aware exist. Yet if you have a working (non-Rust) toolchain, you can compile thousands of open-source packages written by people the vast majority of whom probably know little or nothing of SuperH:
https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/sh3el/
It's somehow viable in 2025 to have something like this with a modern toolchain, a modern OS, and thousands of compiled packages. So should people like you, who would most certainly consider SuperH "obscure/legacy", be listened to when you advocate for Rust and the exclusion of what you somewhat dismissively consider "obscure/legacy"?
Rust is fine, but the advocacy is a bit exhausting, especially from people who show little awareness of the implications of getting what you claim to want.
@AnachronistJohn it's cool that it's possible, and I don't have anything against it. But if I'm forced to choose between writing software that might run on SuperH (but probably won't due to speed & RAM requirements) vs being able to write more reliable software that runs better on still-manufactured CPUs, I choose the latter. I also gain access to thousands of new libraries, with better interfaces, and painless support for Windows. I can do more things in practice, and find them more valuable.
@phf The goal isn't to stick to one language forever, but to enable writing better software, easier.
There are switching costs, network effects, and many trade-offs, so replacing languages is a complex issue.
IMO it makes sense to switch from C to Rust now (in many, not all cases). It's also possible that in 35 years there will be something new that obsoletes Rust, and it will be a good thing.
@kornel That's the thing - you're not being asked to choose between "writing software that might run on SuperH (but probably won't due to speed & RAM requirements)" and "being able to write more reliable software that runs better on still-manufactured CPUs".
You invented those limitations. If you received a problem report about running on SuperH, it sounds like you'd focus more on how "irrelevant" it is, how support is a waste of time, perhaps, rather than on the idea that fixing something, even an edge case, generally makes the code better.
The Rust ecosystem is nice and all, and nobody is saying that anyone should abandon it. Of course you'll all have your npm moment when crates get backdoored, but that's just how these things work.
The real issue is that people in the Rust community want to replace everything with Rust versions while dismissing the fallout as unimportant because "obscure/legacy".
Sorry, but no.
@kornel To clarify - you're framing the example as though Rust means "being able to write more reliable software that runs better on still-manufactured CPUs", and C means "might run on SuperH (but probably won't due to speed & RAM requirements)".
Nobody should tell you that you shouldn't choose to write Rust. If that makes you happy, great!
But just think about how you shouldn't tell others that they need to adopt Rust or consider themselves a "niche of niches" or "obscure/legacy".
@AnachronistJohn There's a conflict here, but it's not that obvious to me whose wants/needs should be prioritized over the others.
Git devs decided they prefer Rust (maybe it makes them happy). Git has other users who benefit from what Rust offers.
You end up forced to have Rust to continue getting new git versions.
OTOH supporting all old platforms forces Git devs to keep writing C.
Why is it right for you to tell Git devs to use C? Why should your git use-case override other users' needs?
@AnachronistJohn Details vary depending on context and what you measure; e.g. from a CO2-emissions perspective, in data centers using fossil energy sources, the effect is large enough that it may make sense to upgrade immediately, even replacing new amd64 hardware.
There's a whole range of scenarios where the break-even point shifts. '90s hardware has 100×-100000× worse compute-per-watt, but the impact (or savings potential) is also small in absolute terms, because so few units exist.
@AnachronistJohn I know, I've learned web dev on iBrowse and AWeb, and worked professionally on website performance optimization for years.
There's always bloat, and you can always optimize better if you want. It's not necessary to have old hardware to do that, unless you want to *force* it. Do you?
I recently commented on Wirth's "A Plea for Lean Software", which dissed computers with 16MB of RAM as having too much. He'd have written a multi-person chat in 32KB: https://lobste.rs/s/g9z6o4/death_utilitarian_programming#c_6gpyxx
@kornel You bring up the central issue: Who gets to say whether something is written in a certain language?
If you want to write your own software, then you do, obviously :)
The thing is, if a project wants to switch from a universally accessible language to one that's not quite as accessible, it can do that; but if that project has made itself a part of other projects in ways that make it difficult for those projects to change, then that's a bit of a dick move.
That is, if you've made yourself indispensable and then want to make large changes that'll negatively affect other projects that trusted your project enough to depend on it, nobody would be in the wrong to call you out on that.
@kornel So one should ask: why shouldn't git be forked if some of the developers really want to use Rust? I think it should be forked. Unless there's some intractable problem with functionality, scaling or implementation that can only be fixed by switching to Rust (which, let's be honest, there isn't), there should be an accessible version, and at the same time there can be a Rust version; and if the git team don't want to maintain separate C and Rust codebases, the fork can be completely separate.
But does the official git project get to switch to Rust, sunset the C version, and then tell people they can't call a fork that sticks with C "git"? This is the problem with project stewardship, and is another issue altogether.
When people depend on you ("you" being the git project here), and when you've cultivated that dependency, then there really is an obligation to consider the impacts of large changes.
@AnachronistJohn But *the* git developers wanted to adopt Rust themselves, and the project has reached a consensus about it.
Whether this Ship of Theseus should have been called a fork or not doesn't really change the reality of where the development is going.
They have considered the negative impact on niche platforms, but weighed it against benefits of Rust, and burdens of continuing with C, and the other factors won.
There's no rule that everyone has to use C unless absolutely necessary.
@AnachronistJohn You're making inaccurate assumptions about my experience & background. I think it's quite possible that we differ in opinion not due to ignorance, but due to judging situations differently.
For example, I don't think real Amiga hardware can help people in poverty. Even if they happen to have one already (along with the rare accessories it needs), they'd be better off selling it to hobbyists and getting an RPi or an old 64-bit PC instead, which are much faster and more capable.
@libc @harrysintonen @kornel How many times did Git segfault for you? How many times, however, did you have logic bugs? If we have seen anything from all those Rust rewrites, it's that there is no improvement in stability, but a large increase in logic bugs. The mindset of "Rust code can't have bugs" isn't helping either, as it makes people sloppy and hence results in even MORE bugs. The results of all "rewrite it in Rust" projects so far have turned out to be failures.
QNX is not disingenuous, as you can use QNX 6.5.0 as a desktop system. It's a really nice and fast desktop with its Photon UI. I support it in pkgsrc, for example. And I'll let you guess what tool I use to manage the patches I create for it.
@js Git releases are mostly fine, but it's not apparent how much effort it takes to get there, and what needs to be sacrificed (typically multithreading is kept to a minimum due to C risks).
Nobody seriously claims Rust can't have bugs, that's a strawman. It eliminates certain classes of bugs, helps with reliability, but nothing is absolute.
Rewrites are risky indeed, but e.g. librsvg fixed tons of bugs. Dropbox, Android report major successes. Fish, Firecracker, Stylo/Webrender, sudo-rs worked.
@js Nobody came. Project members decided themselves. They've tried Rust, liked what it does, and became the "Rust people".
C upgrades are tricky, not for strictly technical reasons, but because of expectations that any crappy compiler anyone has should work, which causes annoying busywork. I've had these pains in my C99 projects too. Users don't like being told to use a modern C compiler any more than they like being told to use Rust.
@kornel What evidence do you have that there is a significant number of segfaults before Git hits a release? And no, multi-threading is not more risky in C than in other languages. Multi-threading is inherently more challenging to the human brain, no matter the language, so if it's not necessary, avoiding it reduces complexity.
Well, there are plenty of people who claim that just by rewriting software in Rust, it has fewer bugs. So far I've only seen the opposite.
@js There is a colossal difference between ease and reliability of multithreaded code in Rust vs C.
In Rust, the type system enforces shared-XOR-exclusive access precisely, prevents non-thread-safe data from being shared with threads, prevents use-after-frees due to operations finishing in a different order, and enforces the use of mutexes or atomics for global or shared data. This works reliably, at compile time, globally, even across 3rd-party code and dynamic callbacks.
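To make that concrete for readers who haven't used Rust, here is a minimal toy sketch of my own (not code from the thread): four threads increment a shared counter, and the only way past the compiler is to opt into the thread-safe wrappers.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Four threads increment a shared counter. Arc makes the sharing
// itself thread-safe; Mutex enforces exclusive access to the data.
fn parallel_count() -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // Swapping Arc for Rc, or passing a plain &mut i32 to the
    // threads, is rejected at compile time: Rc is !Send, and an
    // exclusive reference cannot be shared across threads.
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(), 4);
}
```

The point being debated is that these checks are compiler-enforced rather than a convention the programmer must remember.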
@kornel Great demonstration of the typical Rust fanboy's attitude of "I don't need to be careful because the compiler will take care of it for me". This is not enough for multi-threading. Proper lock-free algorithms need proper care, no matter the language. How is the atomics situation in Rust, even? Did they even define a proper memory model for this? (Granted, C and C++ took until '11 for that, too.) Just throwing a lock at everything and enforcing that is something C++ can also easily do. But that is not how you get performant multi-threaded code.
@js Even if you assume that a hard-to-avoid risk forces extra diligence:
1. humans are fallible, even the smartest ones, even when maximally careful.
It's especially hard for humans to reliably gather program-wide info and uphold invariants across time and teams. Compilers can help do this reliably.
2. the amount of time/effort spent on software is finite. If everything in the program can be catastrophically dangerous, the effort is spread thin across the whole program, but still has to […]
@js …still has to be infallible. OTOH, when the compiler can reliably check the majority of the code and prevent the most tedious and pervasive issues, the effort can be focused on the parts the compiler can't guarantee. This gives overall higher reliability, because the compiler-checked invariants are perfect, and the dangerous parts get extra eyes on them.
It's like changing "Where's Wally?" game with a large map into a couple of "Is *this* Wally?" mugshots to identify.
@kornel 1.) I agree that tooling to help catch human errors is useful. However, this is not exclusive to Rust, and other languages have static analyzers, too. Rust just runs the static analyzer at compile time.
2.) Exactly. So it makes more sense to keep what is already there and debugged. And yet the Rust community demands - very loudly - that existing, perfectly working software be rewritten in Rust. Because it is a religion, not a language. The outcome, or whether it makes sense, is irrelevant; it's just fundamentalism that everything needs to be Rust.
@js Rust has the ability to provide these guarantees only because of coding patterns it doesn't support (e.g. no reference cycles), and annotations it always requires (Send/Sync and lifetimes on everything).
C doesn't force code to be compatible with this kind of static analysis. It has no machine-readable information about thread safety. This isn't solvable by writing a better static analyzer (Rice's theorem, the halting problem).
@js "Already there and debugged" is code that has already been written and tested.
When adding new code, you keep risking breaking assumptions in the existing code, thus creating bugs. The more assumptions are explicit and enforced automatically, the harder it is to break them by accident when adding new code, which reduces the rate of defects introduced.
@kornel I might agree with you that in C such annotations are a bit clunky (but possible!) as aids to a static analyzer, and that it's much easier with C++'s templates. However, the only way Rust is different here is that the static analyzer is MANDATORY. Everything else can be done with any other language as well. Everything can be annotated so it can be checked by a static analyzer. Heck, mrustc is the perfect example that all of this is not so much part of the language, and that it IS possible to compile Rust without the static analyzer pass.
In fact, I'd argue the static analyzer pass should only be used during development. Static analyzers are slow. In any language. And always running it, even when the developers have already done so, is what leads to those ridiculously atrocious build times with Rust.
@js There are hard limits of static analysis. This is a well-studied topic, with many cases having logic proofs that they are literally impossible to ever solve, and even those that are solvable in theory end up having exponential compute or memory cost.
I know "halting problem" seems abstract and irrelevant, but questions like "is this field of this object ever accessed from another thread?" can easily end up hitting the same problems. This affects most code in most languages.
@js Cases where it's possible to give whole-program guarantees are an exception, not the norm. It requires the program to have a certain simple-to-analyze "shape" of data flow, and/or a guarantee of dynamic checks in all the places that may be missed statically.
mrustc is Rust -> C, but this loses information. Proving properties of C code is more like going in the other direction, C -> Rust, and that's asymmetric: generating a hash is easy, generating a preimage is hard.
@kornel Rust cannot solve the halting problem either, so you are moving the goalposts. Nobody ever claimed static analysis could do this; Rust cannot do it, but now you demand that C do it or bow to Rust. All the static analysis performed during Rust compilation can be done in another language just as well, if you add annotations for the static analyzer.
@js Rust doesn't solve the halting problem, but Rust does forbid code patterns that would lead to halting problem dead-ends.
Rust famously doesn't support making doubly-linked lists from references, because that creates cycles and shared mutable ownership, which makes the problem much, much harder. Rust allows only "easy" code. C is much more flexible, doesn't draw the line at easy-to-analyze code, and you can't expect existing C codebases to happen to fit the "easy" shape of code.
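As a toy illustration of that restriction (my own sketch, not from the thread): a doubly-linked pair of nodes can't be expressed with plain references, so the code has to opt into runtime-checked shared ownership, with `Weak` breaking the cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A doubly-linked node. Written with plain references
// (next: &Node, prev: &Node), this cannot be built: the borrow
// checker rejects cycles of mutually-owning references. The
// compiling version uses runtime-checked sharing instead,
// with Weak breaking the ownership cycle.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn linked_pair_back_value() -> i32 {
    let a = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let b = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    // Link a <-> b: strong ownership forward, weak pointer backward.
    a.borrow_mut().next = Some(Rc::clone(&b));
    b.borrow_mut().prev = Some(Rc::downgrade(&a));

    // Walk backward from b and read a's value.
    let back = b.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    let v = back.borrow().value;
    v
}

fn main() {
    assert_eq!(linked_pair_back_value(), 1);
}
```

This is the trade-off both sides are describing: the pattern is possible, but only in the "shape" the analysis can handle.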
@js Maybe it will be easier to understand this way: we don't have tools that reliably add static types to dynamically-typed languages.
We have types for them, but adding them to a codebase is not always possible without changing the code, if the code actually uses duck typing, monkey patching, reflection, or is undecidable or O(2^n) expensive to evaluate.
You may need to remove/redesign the too-dynamic properties of code before you can give them non-ridiculous static types.
@js and similar problems arise in C, where you could see it as having dynamically-typed ownership and lifetimes. The dynamism may be an easy case, or a hard case, or may be dynamic to the point it's impossible to express in a reasonable way (where type annotation doesn't explode to being just as complex as the code it describes).
@kornel And just as Rust limits what you can do, you could have a static analyzer for C, too, that limits what you can do and only allows it if you add proper annotations to prove to the static analyzer that it's safe.
Again: the only innovation Rust has here is that it made the static analysis, and the annotations for it, mandatory in the default compiler. It doesn't even go into proving code correctness, which can be done in other languages. If you want a language where you can prove everything, Rust is even the wrong language for you; I'd suggest looking at languages such as Haskell and Ada - languages way older than Rust, with way more research into proving that code is ACTUALLY correct.
But that's not the hype, so obviously nobody is pushing for projects to migrate to those languages. Because, again, the entire push to Rust is not about real outcomes. Those could be achieved with other languages just as well. No. It's because it's a religion. A cult.
@js The analysis Rust can provide depends on things like lifetime annotations, traits, generics. C users generally reject that level of complexity and inflexibility, and existing code bases aren't written using these design patterns with such annotations.
Maybe this could be solved in C, but it wasn't. Rust has 200K+ packages written with these annotations and restricted patterns. How many does C have?
Rust doesn't claim to be novel or first at this. See pages 6-7 http://venge.net/graydon/talks/intro-talk-2.pdf
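For anyone unfamiliar with the annotations mentioned above, a small self-contained sketch (illustrative names of my own, not from the thread): one lifetime annotation and one trait bound, both checked at compile time.

```rust
use std::thread;

// A lifetime annotation: the returned &str borrows from `haystack`,
// so the compiler rejects any caller that drops `haystack` too early.
fn first_word<'a>(haystack: &'a str) -> &'a str {
    haystack.split_whitespace().next().unwrap_or("")
}

// A trait bound: T must be Send (+ 'static) before it may cross a
// thread boundary; the compiler verifies this for every concrete T.
fn on_thread<T: Send + 'static>(value: T) -> thread::JoinHandle<T> {
    thread::spawn(move || value)
}

fn main() {
    let s = String::from("hello world");
    assert_eq!(first_word(&s), "hello");
    assert_eq!(on_thread(7).join().unwrap(), 7);
}
```

These are the machine-readable pieces of information (ownership, lifetime, thread-safety) that the thread argues C code does not carry.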
@js In a Modern Art Gallery, you may see a canvas with a lame splatter of paint on it, and think "I could have done that!"
*But you didn't!*
It's not your canvas on the wall. Whatever it took to get there, easy or hard, someone had to do.
Maybe any language could have added the annotations and build tooling that achieve such robust thread safety guarantees, *but they didn't*. Rust already has a 10-year head start on this, and a whole ecosystem written this way.
@js Regarding the speed of static analysis, this is one of C's own problems. C has no good aliasing information, no ownership info, no thread-safety info, and weak const; it ends up requiring expensive analysis, usually whole-program, just to discover this information.
In Rust this is available locally in the source code, no search required.
Rust is known for slow compile times, but the static analysis part of it is surprisingly small. Negligible compared to LLVM optimization times.
@js When you reduce the whole issue to a "cult" or "religion", it's a dead end for understanding. You get to call people names and feel superior, but you gain no insight this way.
*You don't have to agree* with the reasons people have, but terminating your reasoning at "it's a cult" just leaves you baffled, and doesn't help you make a convincing argument.