Argh, I reread "Breaking The Complexity Barrier (Again)" (1973) and Winograd's still completely right.
It would be nice if:
- we could just run "learning" on a bunch of data, and it would figure everything out without us telling it
- we could just run "crunching" on a bunch of symbols, and it would figure everything out without etc.
but neither of those has any precedent as a way things actually work.
It makes me think I actually have to do the work of using Sandewall's Leonardo System =_=
Terry's point is that rather than being a "dead end" as people would blithely tell him, SHRDLU is actually /exactly/ the kind of computer program that is meaningful to write.
Its behaviour is both deliberate and mindbogglingly complex (barely fitting in Winograd's head). His argument is that each computer program should be as intricate and expansive a contribution to the world as possible at the periphery of the author's ability.
Not get-rich-quick-scheme style learning and crunching.
LLMs are in the first class of failures. Transformers - running a feedforward network with attention over terabytes and terabytes of scraped data - are an attempt to get something without contributing a boundary-pushing program (having been allowed by world governments to steal data without recompense, the vendors got it for free).
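(For concreteness, "a feedforward network with attention" means roughly the two sub-blocks below; this is a toy numpy sketch I'm adding for illustration, not any vendor's actual model, and it leaves out multi-head projections, positional encodings, normalisation, residuals and the training at scale that makes the thing "work".)

```python
# Toy sketch of the two building blocks a transformer alternates between.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each position mixes the values V,
    # weighted by how well its query matches every key.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

def feedforward(x, W1, b1, W2, b2):
    # The per-position two-layer MLP that alternates with attention.
    return np.maximum(0, x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # five token embeddings of width 8
y = attention(x, x, x)        # self-attention of the five tokens over each other
```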
I actually do consider LLMs to be spooky, even beyond making public vast quantities of private historical emails.
But this is a legal innovation, not a human-authored one.
I can't say I'm happy about it, but this implies that FreeCAD and Linux are currently humanity's premier contributions to the universe.
Edit:
I guess we can be happy that emacs is also up there. Compilers and languages generally, maybe. I think McCLIM should be regarded as an important ongoing success. It's just that rather than McCLIM and emacs being exceptional, anyone belonging to the computer-fanciers-association should be part of contributing something like Sandewall's Leonardo System.
@screwtape
If you read Winograd's SHRDLU book, it all makes sense, but of course it only captures a fraction of the programming, which was a tour de force.
Someone resurrected it once, but not in a form that I personally was able to use. Not sure about these days.
@dougmerritt Understanding Natural Language '72, right?
@amszmidt @larsbrinkhoff
You two don't happen to be aware of an intact source for historical SHRDLU? The 90s CLISP port, "SHRDLU after suffering a stroke", sounds kind of unappealing.
@screwtape @amszmidt @larsbrinkhoff
I don't, but the port had problems because the original ran in a heavily hacked Lisp (the contents say "Planner", but I don't think it was even standard Planner), not anything like a standard one, so they had to work quite hard to get anything to work at all. I didn't recall that comment, but I guess they wrapped up the resurrection far short of perfection.
@dougmerritt
Mm, I thought about it, and my hunch is that Sandewall's Leonardo system was intended as something like a spiritual grandchild of knowledge-based works like SHRDLU. I have a reasonable amount of Sandewall's work from 2005-2014 collected, so I will read the book and then think about Sandewall's work (which is only a decade old).
@amszmidt @larsbrinkhoff
@screwtape @amszmidt @larsbrinkhoff
Incidentally, since you're exploring such paths: the new research Anthropic just published shows that LLMs use significantly more sophisticated planning / reasoning / "internal thinking" than most experts thought up until now, which probably makes the whole topic significantly more interesting (especially since many had been writing off the whole subject of LLMs heretofore).
I myself haven't been watching the subject *too* closely for the usual reasons.
" they plan ahead when writing poetry, use the same internal blueprint to interpret ideas regardless of language, and sometimes even work backward from a desired outcome instead of simply building up from the facts."
Similarly with internal circuits that perform simple addition.
Circuit Tracing: Revealing Computational Graphs in Language Models
https://transformer-circuits.pub/2025/attribution-graphs/methods.html
On the Biology of a Large Language Model
https://transformer-circuits.pub/2025/attribution-graphs/biology.html
Tracing the thoughts of a large language model
https://www.anthropic.com/research/tracing-thoughts-language-model
@dougmerritt
I had that general hunch ever since that paper showing a model using a frequency-space approach to ordinary arithmetic questions.
@amszmidt @larsbrinkhoff
@dougmerritt
I mean, this (even just the early academic LLMs-doing-math paper) is the counterexample to Winograd's point that straightforwardly applying transformer learning to a sufficiently giant pile of low-quality data isn't going to magically solve everything: stuff like natural arithmetic approaches turning up in the model.
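(For anyone following along, the "frequency space" trick is roughly: represent a number as a rotation and add numbers by composing rotations. A toy sketch of the idea, assuming the paper in question is the modular-addition / Fourier-features line of interpretability work; the modulus and the single frequency here are made up for illustration, not recovered from any actual model.)

```python
# Hand-written illustration of doing addition "in frequency space":
# encode n as the rotation by 2*pi*n/P, compose rotations to add, and read
# the answer back off the resulting angle. The learned circuits described in
# the papers use several frequencies at once; one is enough to show the idea.
import numpy as np

P = 113  # modulus, chosen arbitrarily for the toy

def encode(n):
    theta = 2 * np.pi * n / P
    return complex(np.cos(theta), np.sin(theta))  # a point on the unit circle

def add_in_frequency_space(a, b):
    # composing the rotations adds their angles, i.e. adds the numbers mod P
    angle = np.angle(encode(a) * encode(b)) % (2 * np.pi)
    return int(round(angle * P / (2 * np.pi))) % P

assert add_in_frequency_space(40, 100) == (40 + 100) % P
```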
@dougmerritt However, I'm under the impression (without having reviewed your links yet) that the fact that they will do stuff like form cognition-like arithmetic approaches is seen by their vendors as basically a bug and not a feature, and that the modern approach is to mix the LLM chatbot with something like problem classification and pass-through to non-LLM math etc. software.
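(A minimal sketch of that classify-and-pass-through pattern, assuming nothing about any particular vendor's stack; the regex "classifier", the answer() router and the call_llm() stub are placeholders I made up, not a real API.)

```python
# Route arithmetic-looking queries to ordinary non-LLM code; everything else
# goes to the chatbot.
import re

ARITHMETIC = re.compile(r"^\s*\d+(\s*[-+*/]\s*\d+)+\s*$")

def call_llm(prompt: str) -> str:
    # stand-in for whatever chat model is wired in
    return f"(LLM answer to: {prompt!r})"

def answer(query: str) -> str:
    if ARITHMETIC.match(query):
        # pass-through: exact arithmetic is done by plain code, not the model
        return str(eval(query, {"__builtins__": {}}, {}))
    return call_llm(query)

print(answer("12 * 7 + 5"))                              # calculator path -> 89
print(answer("put the green block on the red pyramid"))  # chatbot path
```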
@screwtape
They certainly are not the 100% AGI replacement for humans that thousands of CEOs are currently salivating over.
But I agree that they are promising as a component.
@dougmerritt thousands of CEOs / the governments of NZ and the UK
@dougmerritt @screwtape @amszmidt It was a subset called MicroPLANNER. I heard the stories for years that SHRDLU would only run after being binary hacked in DDT. But now that Eric made it run, it didn't seem too bad.
@larsbrinkhoff @screwtape @dougmerritt @amszmidt As I remember the story, Winograd made some changes directly to the machine code shortly before the thesis defense. It was not that it didn't work before, but the exact conversation done as part of the demo could not be reproduced without them.
@screwtape @dougmerritt @amszmidt Yes, @eswenson recently fixed an old set of SHRDLU source files and made them run on the latest Maclisp. It's on GitHub now. Winograd has a collection of old code on his home page.
@screwtape
My copy is in storage, but looking at his publications, that appears to be the one.
Aha, looking at a digital copy: it finally gets around, in section 7.1, to mentioning that the robot is named SHRDLU, so yes, that's the one.
Winograd, Terry. Understanding Natural Language (191 pp.). New York: Academic Press, 1972. Also published in Cognitive Psychology 3:1 (1972), pp. 1-191.
https://hci.stanford.edu/winograd/publications.html
https://hci.stanford.edu/winograd/