I was starting to write a really big thing about permacomputing--I'd already made an outline and was drafting--but then I read this article and now I see things very differently: https://applied-langua.ge/posts/terminal-boredom.html
the Alan Kay "metamedium" idea that runs through this article is powerful; it reframes the discussion around "what do I do with it" rather than "how old a computer can I do my work on." More like the @stevelord "heirloom computer," less like an old DOS laptop that is still useful.
Instead, what would a "permacomputer" be if it were a way of working supported by hardware that lasted a very long time?
Plan 9, Emacs/Lisp machines, Smalltalk, hell even something like AS/400s--these kinds of "paradigm" environments seem like the thing we should be figuring out how to make "perma" instead of Linux on ARM or the Commodore 64.
The reality is that the "retro" part overtakes the "perma" and the two become intertwined. After all, you can buy a Pentium M laptop for $25.
Another stray thought: think about something like Open Firmware. Something about "it's a regular PPC Mac laptop or Sun workstation, but you can also just boot it straight into a Forth REPL" feels like a future we shouldn't have left behind, a basis for a different relationship with our computing tools.
@kl Hopefully I won't get in much trouble for sticking my head in; you're right the artistic element is perfectly fine, and that's plainly not something one can argue about. But there's also this association with sustainable computing and permacomputing and such which doesn't appear to make much sense. I edited the article yesterday to be more precise about what we were critiquing, since others had the same conception and we could have avoided it with just that editing, I think. Many thanks!
@nodefunallowed I see what you mean, for sure, I just don’t know that that association is nearly as much a motive behind its existence as the philosophy of owning one’s tools. Not really my place to speak for the creator there though. Thanks for writing the piece! Thought-provoking stuff. Does your site have an RSS feed hiding somewhere?
@kl Sure. But then there's the issue of what one can do with their tools, as you put it. It'd be nice to allow for a decently efficient implementation that is still quite simple, or something that works better with mental models (though that's subjective, all I know is I make a lot of bugs with fixed-size integers).
Someone asked yesterday for an RSS feed, so I guess we better set one up. We don't have one yet since we publish very infrequently, and just use plain org-mode to produce pages.
@nodefunallowed Makes sense, and I think that's a fair critique.
(FWIW Org is how I do all my writing, too, and then use pandoc to export Markdown from there. I def wouldn't have RSS either if Hugo wasn't doing it for me, otherwise it'd be a pain. ox-pandoc is the Emacs package that glues it all together)
@nodefunallowed it does! Only issue I see is you're missing a <pubDate> element for the entries, so they look like they were all published when I added the feed to Feedbin. It's just an RFC 2822 datetime (https://validator.w3.org/feed/docs/warning/ProblematicalRFC822Date.html).
Either way, thanks for doing this for those few of us from the neolithic period still using feed readers.
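For what it's worth, Python's standard library can emit exactly the datetime format `<pubDate>` expects; here's a minimal sketch (the publication date is made up, purely for illustration):

```python
# Sketch: generating the RFC 2822 datetime string that an RSS <pubDate> expects.
from datetime import datetime, timezone
from email.utils import format_datetime

# A made-up publication time, just for illustration.
published = datetime(2022, 7, 1, 12, 0, tzinfo=timezone.utc)

pub_date = format_datetime(published)
print(pub_date)  # → Fri, 01 Jul 2022 12:00:00 +0000
```

The resulting string drops straight into a `<pubDate>…</pubDate>` element inside each feed `<item>`.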
@kl Thanks. I also spotted I forgot to change an example name from the tutorial I was reading; I'll get onto that tomorrow.
@kl Somehow your thread this morning got me to click through and read your https://systemstack.dev/2022/05/waking-up and https://systemstack.dev/2022/06/things-i-might-write. Thank you!
I dislike https://applied-langua.ge/posts/terminal-boredom.html just like everything else the authors have written. But thank you for getting me to focus on the few sentences in between about the meta-medium idea. They don't ooze contempt. But here's the thing: they'll have the same problems if a community coalesces around them. This stuff is hard.
@kl Nobody building minimalist systems is saying the problems are solved. The thing the authors are missing is that minimalist systems are a _reaction_, a backlash to the problems of mainstream systems. Of course they have problems. But when you're sick of the hegemony of one set of problems, it's sometimes a relief to leave behind the bugs you know for new bugs you don't know about for a while. Above all, it's a relief to depend on software created with less inscrutable motivations.
@kl So far I've come up with a few ways to phrase the core problem:
- How to get the nice things we've gotten used to under the current regime of software development, but replace the governance process with something more democratic. Oh, and how to sustainably maintain some minimal level of grassroots citizen engagement with that process, something unsolved even outside software. Because if you don't do that last bit, your democratic machinery will quickly rot away.
- How to make over-engineering more apparent, make it pay a cost in adoption in the marketplace of ideas. The problem we have today is that we have over-engineered systems piled high on other over-engineered systems (making the latter load-bearing and not over-engineered? :grimace:). A minimalist system at least gives us some freedom to try again. Maybe even learn something. People built houses for millennia to get good at it. Imagine if all houses were `git clone`s of some trunk branch.
- How to avoid the complexity ratchet of backwards compatibility. If we can't take out features we don't use, any nice initial design will eventually get swamped.
Ok, three phrasings should be enough. The thing they all share is an acknowledgment of costs, and that is something I rarely see outside the minimalist community. I suppose that's tautological. Anyone who cares about the costs will tend to build what seem like toys. Everyone else will continue infantilizing their users.
A constructive critique of minimalism needs to at least propose how we manage costs. A VM, ok. But who controls the VM, decides how to evolve it? How do programs built on the VM interoperate? Who decides how those interoperation mechanisms evolve?
Minimalism doesn't have answers to these questions either. But at least we're playing in the kiddy pool, far away from the abyss, where we only have to deal with the easy versions of these problems. Walk before you can run and all that.
Unlike you, I'm not foundationally opposed to rent-seeking systems. Taxes are just the cost of doing business. The eternal goal of governance is to spread agency out using sustainable taxes. The real problem, as you point out, is paying rent to absentee and short-lived landlords.
Everything is coopted from its inception, but we can't let that take the fun out of our endeavors. From its inception, the computer has been a creation of large organizations. WWII, machines filling rooms, etc.
But there's also been hope, right from the start. Hints of a "shadow computer" that is possible. Vannevar Bush's memex, Licklider's "man-computer symbiosis", Engelbart's "augmenting human intellect," Ursula Franklin's "real world of technology", Bonnie Nardi's "small matter of programming," etc.
We can't choose the world we're born in, we can only choose what aspects of it we put our energies behind.
Computers are dual-sourced, much like the Internet itself.
The Internet's dual sources are the government-backed ARPAnet and the home-grown BBSes.
Computers' dual sources are machine-room computers and kit computers. The terminals we use to connect to the machine-room computers have changed, but it has always been big business that loves control.
We thought home computers fell on the side of kit computers, but these days they're just fancy terminals to the machine-room systems.
Time-sharing systems are older than kit computers. They're the bread and butter of the machine room computer, and they always have been. How we do time-sharing is different, but the overall concept hasn't changed. We pay to have our software run and are billed for just the time that our software runs. (The AWS model is the original model.)
Minimal computing, to me, is about embracing the kit computing ethos, even if only doing it in software.
@yam655 You've really got me thinking now. It's not enough to just put some software out there and say, "it's easy to hack, hack on it for anything more you need" as I do with http://akkartik.name/lines.html. I think maybe we need to refuse to provide software that just works. Our credo must be, "some assembly required".
Heh, "some assembly (but no Assembly) required."
That article you mentioned really does fit with what I was saying.
Think of software like an early automobile. It does one thing well, but the design is clean enough that anyone can add and extend it.
Some people _might_ want to swap out the engine, but others might just want to upgrade the seats.
You could go for a DIY kit-based system where you have a thorough collection of parts, but that tends to make most sense for game systems. (Some of which totally take that approach.)
@akkartik the Memex and Project Xanadu are two of the things that got me started down this whole path back when I was an undergrad. I think the search for that "shadow computer" is a tragic unrealized goal from a previous epoch, but like you say, it may still be out there somewhere, lurking.
@akkartik my 0.02 of $MYCURRENCY. In what @yam655 said ("smol" computing is about kit computers) there is a hint that you are talking past @hayley's blog ... the blog talks about user-facing software, not hobbyist programming.
@kl Plan9 is an interesting case. It really doesn't use any high-level programming languages and the graphics seem to mostly be about better fonts, but the thinking behind, say, strings as program return values is not "smol" either.
@akkartik thanks for reading. That "Waking up from the Dream" post really sums up my relationship with a lot of this stuff recently. I think my stance there against rent-seeking is probably too strong, as you point out, and mostly has to do with who the current crop of landlords are.
@kl I'm curious what makes you think that permacomputing is NOT what you just described? Like, your 3 examples (Plan 9, Smalltalk, etc.) are all things that are incessantly spoken about in this circle.
You seem to think that retrocomputing is permacomputing; where did you get this idea? The wiki? The wiki has entries for Lisp, Smalltalk, and Plan 9 and their relevance in that sphere.
@neauoire I think these things could be permacomputing! But things like CollapseOS, or like the CP/M stuff mentioned by @stevelord in the past, fall into that grey area between the two for me. But “retrocomputing” isn’t negative to me (I’m the guy who drafted a novel on System 7 last year, after all), just kind of a different end than what I take to be the core of “permacomputing.” This thread is also just me talking out loud to figure out what I think, so my thesis here really needs refinement.
@neauoire I think it boils down to:
I think there's a minimalism inherent in the idea, but something like what is embodied in your (on XXIIVV I mean) notion of "collapse computing" seems to be mostly a security blanket, the same as a doomsday prepper buying buckets of "food product" instead of learning to grow things. I think that's a dead end.
That's the segment where the "retro" and "perma" most overlap. But again, I need to go back and actually try to write this to nail down my thoughts here.
@kl collapse computing is not permacomputing tho, I've tried to split the timeline into frugal/salvage/collapse, as 3 different eras in which one can practice computing degrowth. Maybe my documentation is not clear enough either 🤔
@neauoire That chart looks more like categories under the "permacomputing" header than a timeline, fwiw. But in my mind, "making the best use of the devices that exist" is already sort of gesturing at retrocomputing, in its orientation towards harvesting existing semiconductors (even if they're not necessarily "retro" yet). I'm not sure you can separate that concept from permacomputing as currently construed, and it's that aspect that causes friction for me.
@neauoire That said, it's that friction that I still can't really explain, and don't have a better idea yet. Something about being tied to already-extant devices feels like it necessarily makes the whole endeavor less "permanent" than some new device purpose-built to last as long as, for instance, an Olivetti typewriter.
@kl yeah, I can try to rephrase this, it's a bit tricky, I think people quickly go to "what new device is permacomputing", "how can I buy permacomputing", and leave perfectly functioning devices behind.
It's that sort of way of thinking that I'm trying to protect against. The most sustainable device is the one that already exists.
We see this a lot with sailboats: people building "new green techy plastic sailboats", which is a lot less sustainable than making use of what exists.
@neauoire see, that way of putting it resonates with me… it’s the same reason I keep a 1985 Chevy pickup truck running as a hobby. Carbureted engine, no computerized anything, easy to fix, can still haul a whole lot more than what it’s rated for… no need to buy a new truck when I don’t need one often and I bought this one for $1500. It’s a tool.
@neauoire I guess for me the difference is that in computing you could make a new “truck” that has all of the same capabilities *and* limitations, but if you did that with an actual truck, it would be extremely wasteful. It gets terrible fuel mileage, has almost no safety features, a weak engine for how large it is, etc. I think permacomputing as a theory needs to be careful not to fetishize the lack of seat belts and the steel dashboard. Those aren’t the things that give the machine longevity.
"I appreciate SDF but it's a general-purpose server and the name doesn't make it obvious that it's about art." - Eugen Rochko