A lot of #genai and #llm discourse is incredibly shallow. It boils down to "this will replace us all" or "AI can never replace humans because unlike humans, AI often produces incorrect answers".
I enjoyed this post for a more in-depth look at the types of problems that LLMs aren't just bad at, but are completely incapable of handling today.
On goal drift and lower reliability. Or, why can't…
Strange Loop Canon