i don't think comparing website size to the apollo guidance computer program memory size is fair or meaningful
the AGC was doing trajectory calculations (something that is quite easy to build a computer for).
the website is putting raster images on a high-resolution screen (something that is significantly harder to build a computer for, and also not a concern whatsoever for people going to the moon in the 70s).
comparisons of AGC program memory size to USB PD controller program memory size are at least _kind of_ apples to apples in terms of the task performed, but it's still not a fair comparison because the Apollo computer did not have to deal with thousands of other vendors connecting their borderline-adversarial implementations of a protocol to it
it's clickbait
now, comparing website size to DOOM code plus asset size is basically reasonable: both are putting dynamic raster images on a screen in response to user interaction. also i wish i could hit a sequence of pseudo-consent dialogs with a BFG
@whitequark People say they want the *resource usage* of an Apollo Guidance Computer. But when you also give them the *user interface* of the Apollo Guidance Computer, they aren't very happy.
@whitequark I wonder if comparing AGC vs. say, the firmware for a thermostat controller would be a reasonable comparison?
@whitequark Maybe 3270 terminal screens are a bit more appropriate? At least that’s a client-side front-end - while the AGC was more like a microcontroller thing?
@promovicz what kind of tech was the 3270 built with?
@tedmielczarek assuming we're talking about a wireless thermostat (the AGC had a wireless link after all), I think it isn't: the thermostat operates in a hostile environment, not in an "only one place on earth gets to ever transmit messages to the device, and if someone else even tries they get arrested immediately" situation
@tedmielczarek on second thought i'm not sure if it had a wireless link, my memory is hazy on that part. did it?
@whitequark earliest microcontrollers. it’s a serial-controlled “character raster” with readback for form input. form submissions are like a “full-screen submit”. Only function keys go to the host - form entry in itself is local, like in a browser.
@promovicz oh right one of those
i mean that's a type of website, sure, but there are other types of websites, like ones with interactive art
@whitequark we choose to render human scripts accurately and do the other things
Not because they are easy, but because they are hard(er than going to the moon, kinda)
@whitequark I'm actually not certain, but I'm not suggesting a wireless thermostat, but a wired one.
@tedmielczarek yeah that's reasonable then
every aircon i've dug into schematics of uses some kinda ancient 8/16-bit MCU from Renesas or something, which might even be _less_ powerful
(i'm not taking the american thermostat wiring seriously because of how poorly it's able to regulate)
@whitequark What makes me sad is that everybody has a supercomputer now, but nobody is using it for nuclear tests in their back yard.
@postweber it's illegal for me to even consider responding to this message in seriousness, I think
A decent text layout engine is probably bigger than Doom. Rendering the Béziers in the font is pretty easy (though antialiasing adds a bit of complexity), but then throw in hinting, proper kerning, line breaking, hyphenation, Unicode handling, bidirectional text flow, and so on, and you’ve got a big chunk of intrinsic complexity. TeX didn’t do all of them, and it also handed off the rendering to an external program.
So I initially thought this comparison was unfair because Doom didn’t do any of those things. But then I remembered that web pages get all of those things for free from the browser.
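(To illustrate the "easy part" mentioned above: TrueType glyph outlines are built from quadratic Béziers, and flattening one into line segments is a few lines of code - the helper names below are made up for the sketch, and everything hard about text layout starts *after* this step.)

```javascript
// Evaluate a quadratic Bézier (the curve type in TrueType outlines)
// at parameter t, using the standard Bernstein form.
function quadBezier(p0, p1, p2, t) {
  const u = 1 - t;
  return {
    x: u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
    y: u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y,
  };
}

// Flatten the curve into line segments - the naive fixed-step
// approach; real rasterizers subdivide adaptively.
function flatten(p0, p1, p2, steps = 16) {
  const pts = [];
  for (let i = 0; i <= steps; i++) {
    pts.push(quadBezier(p0, p1, p2, i / steps));
  }
  return pts;
}
```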
@whitequark the universe is the adversary here but i think that was mostly protected against in hardware? /j
@whitequark the rasterizing and putting on the screen is performed by the browser, not the website. Optimized SVGs are magnitudes smaller and often also faster to render than the raster images most websites still use today. But the main reason websites are megabytes in size today is that they rely on complex frameworks that do a ton of things on the client (in JavaScript) that are really not needed for getting the content on the screen.
@pixelschubsi @whitequark "Optimized SVGs are magnitudes smaller and often also faster to render than raster images most websites still use today" - this is not true: the only reason svg rendering is "fast" in browser engines is that they cache the results of rasterising the svg in a regular old texture. (you can kinda build an svg-like thing that can run on the gpu and render in real-time, but it's not something that's widely deployed in browsers, same goes for text glyphs & emojis)
@dotstdy @whitequark the same caching of rendered images actually applies to raster images, except their rendering is more of a decoding. Still, the svg instruction to draw a hexagon with a gradient on it is usually significantly smaller and faster from request to render cache than a high resolution jpeg that has the very same effective rendering result.
@pixelschubsi @dotstdy people want to display things other than geometric shapes
@david_chisnall @whitequark cursed idea: SPA that reimplements a browser in JavaScript in order to render the page itself.
@Scmbradley @david_chisnall
- https://github.com/trevorlinton/webkit.js/ exists
- some startup (Medium I think?) used to do their own text rendering with all of the associated downsides
- i think it was Dart that tried to render UIs in the browser the same way?
- any Rust egui app is like this
so... people kind of do that, though not quite literally so
I actually pondered a few variations of this a while ago. Could you write a rendering engine in JavaScript using the canvas / WebGPU tag and JS accessibility hooks? If so, that significantly reduces the number of things that you actually need in a browser, and maybe the browser could just bundle this.
It would have the nice property that you’d have a much more Smalltalk-like world: everything in the browser is visible to the program except for the final rendering stage. The browser becomes a JavaScript + WebGPU implementation, which is a much smaller and more maintainable piece of code.
My starting point was actually the opposite direction. I was pondering X11 replacements and thought that (contrary to the opinions of the Wayland folks) remote display was a critical feature for a new display server. It seemed a good idea to have one implementation that ran in a browser, so that you could do remote display with a simple HTTP + WebSocket wrapper, either tunnelled over SSH or directly with HTTPS and a client TLS cert stored on the server. This would probably let you do rapid iterative prototyping and then also implement a native version that removed the browser. Having the ability to ship WebAssembly view objects to the display server and have them talk an RPC protocol to the app would give you the benefits of NeWS, but built on a more modern technology base.
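(A rough sketch of the display-list-over-the-wire idea described above: the app builds a retained list of draw commands, serializes it, and the display side replays it against whatever backend - canvas, WebGPU, native. The command vocabulary here is entirely made up for illustration; NeWS and real protocols are far richer.)

```javascript
// A minimal retained-mode display list that can be shipped over a
// WebSocket or any RPC channel as JSON.
class DisplayList {
  constructor() {
    this.cmds = [];
  }
  rect(x, y, w, h, color) {
    this.cmds.push({ op: "rect", x, y, w, h, color });
    return this;
  }
  text(x, y, str) {
    this.cmds.push({ op: "text", x, y, str });
    return this;
  }
  serialize() {
    return JSON.stringify(this.cmds);
  }
  static deserialize(wire) {
    const dl = new DisplayList();
    dl.cmds = JSON.parse(wire);
    return dl;
  }
}

// Display-server side: replay the list against a backend object
// exposing one method per op (e.g. a canvas or WebGPU wrapper).
function replay(list, backend) {
  for (const c of list.cmds) backend[c.op](c);
}
```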
@david_chisnall @Scmbradley (to be clear, Wayland works over the network today just as well as `ssh -X` does. I don't know if you knew this, but it's a common misconception that it can't)
@whitequark wasn’t this Sony/Toshiba’s wheeze with the Cell? That languages and programming practices were just not up to exploiting it?
@whitequark that would be psDoom but for cookie consent dialogs: https://psdoom.sourceforge.net/
@whitequark @david_chisnall I feel like it's probably a law of the internet that if you think of something cursed people could do with JavaScript, they've probably already done it.
@Scmbradley @whitequark @david_chisnall
We are still in the "CISC" age of websites.
Once somebody invents a "RISC-style" browser, all that old compatibility cruft can go into a legacy emulation layer... ...which will be maintained and extended for the next hundred years, if the history of x86 is any indication.
@wakame @Scmbradley @david_chisnall people keep saying that without pointing at the "old compatibility cruft in question"
Huh, I thought it required something like PipeWire, or something else that just ships the rendered images, and works for a single window or an entire desktop, but not individual apps. Do you have a pointer to some documentation? The last time I tried to read up on this it looked far too complicated to want to try.
Windows does this for WSL2 in a horrifying way involving two VMs, PipeWire, and RDP, which made me not think it was something I wanted to try to recreate for displaying Wayland apps remotely on a Mac.
I just ran
$ waypipe ssh user@server weston-terminal
and it worked as advertised
@david_chisnall @whitequark @Scmbradley to be even more clear, yes, waypipe or any other wayland pipe necessarily sends images over the network, because wayland wants the client to do *rendering*. in other words, "local rendering".
this isn't something wayland just decided to do. the modern toolkits all do client-side rendering - the design of wayland merely *reflects* this fact.
this does mean a (mere) wayland server's job is so much simpler than an x server, e.g. https://github.com/mmulet/term.everything
@david_chisnall @whitequark @Scmbradley (note that this does not mean you have to send a full rectangular video stream at some fps - wayland has damage tracking as well. but there's no such thing as "write text" "draw triangle" in wayland.)
@dramforever @david_chisnall @Scmbradley yeah, i see no actual point in doing server side font rasterization when (a) gtk and qt and wxWidgets aren't going to use it anyway and (b) i have _negative_ inclination to have a separate server side font database
it's similar for command queues: sure, you could serialize opengl/etc commands, but i've actually used virtualgl and it's not really usable over the network