Oblique Strategies
Not a doomerist yet
I remember the first time I heard about LLMs. It was the post “GPT-2 As a Step Toward General Intelligence” by Scott Alexander, in February 2019. That model can now be reproduced from scratch in 20 minutes. Like many non-specialists, I didn’t start using these models until November 2022. After three years of use, or nearly seven since my first exposure, my working life has changed. I have multiple open sessions of Codex and Claude Code, plus scattered browser windows of ChatGPT 5.2 Thinking, Sonnet 4.5, and Gemini 3.0. The last time in my life I looked forward to dispensing with the company of humans, going home, and being alone with my computer to play was in 2007, when I was finishing Portal, which, maybe not coincidentally, featured an evil AI. To varying degrees, everyone is going through the same experience. It feels qualitatively different from the first time I used a PC (1980), “the internet” (pre-browser, 1987), or the iPhone. We are now getting used to alien technology.
I was at MIT last October, and a math professor friend was telling me how the answers ChatGPT was giving him had lots of subtle mistakes (he’s a topologist), like “a graduate student who knows everything but understands nothing.” It’s now January 2026, and Gemini has become a very useful graduate student for world-class mathematicians (read p. 3, “The role of AI in the results of this paper”), or an entire framework for other world-class mathematicians.
This reminds me of another small instructive story, which predates AI by 30 years. In the early 90s, a statistician at Stanford (easily one of the top five statisticians alive) was working on a problem in applied harmonic analysis whose solution had been eluding him for years. A student from Europe, a fresh BS graduate, was spending a gap year at Stanford. He chats with the professor, the professor describes the problem, the student goes home and solves it. The professor is enthusiastic. Then, solution in hand, he realizes he had solved the wrong problem. There is a happy ending: the professor takes the student on as a PhD student, and the student becomes an equally good or even better statistician.
Another real-life episode from work. We had to simulate a portfolio of portfolios to make decisions about its design parameters. My report writes the problem formulation, I review and accept it, and the next day the simulation, dashboard included, is ready. I doubt Claude got it right on the first try, but building the whole thing took only a few hours. The conceptualization is the hard part: discussing with business owners, identifying the problem they didn’t know they had, translating it into mathematics.
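To give a flavor of what such a simulation might look like, here is a minimal, hypothetical sketch, not the actual model: assume K equally weighted sub-portfolios with normally distributed returns and a common pairwise correlation, and treat K as the design parameter under study. All numbers are made up.

```python
import numpy as np

def simulate_fund_of_funds(K, n_periods=2520, mu=0.04 / 252,
                           sigma=0.10 / np.sqrt(252), rho=0.3,
                           n_paths=500, seed=0):
    """Average annualized Sharpe of an equal-weight portfolio of K sub-portfolios."""
    rng = np.random.default_rng(seed)
    # Covariance matrix with a common pairwise correlation rho between sub-portfolios
    cov = sigma ** 2 * (rho * np.ones((K, K)) + (1 - rho) * np.eye(K))
    L = np.linalg.cholesky(cov)
    sharpes = []
    for _ in range(n_paths):
        z = rng.standard_normal((n_periods, K))
        sub_returns = mu + z @ L.T        # correlated per-period sub-portfolio returns
        total = sub_returns.mean(axis=1)  # equal-weight aggregation
        sharpes.append(np.sqrt(252) * total.mean() / total.std())
    return float(np.mean(sharpes))

for K in (2, 5, 10, 20):
    print(f"K={K:2d}  avg Sharpe ~ {simulate_fund_of_funds(K):.2f}")
```

The point is not this toy model, of course; it is that the coding part, dashboard and all, is now the cheap part.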
In the process, LLMs still make mistakes. Some LLMs are proactive: instead of just answering, they will propose a next action. I expect the mistakes to become fewer and the proposals to become better. I also expect that, more often than not, the problems they solve will have been posed wrongly by me. And I expect their proposals to often be wrong too, but wrong in interesting directions. In summary, LLMs will do the easy things well and quickly, and will help conquer the hard things.
This leaves open the question: should we still hire junior developers or graduate students? Will we be missing a step in the long ladder that takes a person from adolescence to fully fledged, autonomous adulthood? I am unqualified to even try to answer the question in the large. Right now, I am thinking about the small decisions of whom to hire next. This is what comes to mind. I think I’ll need three qualities, to start with.
First, people with a chance at abstract thinking: posing questions well, writing abstract formulations, and so on. I sometimes ask “what’s the language you had the most fun with?” and would give extra points for a functional language in the ML or LISP families. If the answer is C++, I have an exorcist on retainer.
Second, people who are good at the human side of things: communication, motivation, and, most important of all, listening. It’s not only the Emotional Quotient that matters, although that’s very important as a survival tool. It’s the ability to serve as a conduit between people who have problems and people who can solve them. The superpower of observing and describing the (un)known world is one that silicon has not mastered yet.
Third, and most important, creative people. I can imagine LLMs conquering higher levels of abstraction, because they have already made great strides. I have a harder time seeing LLMs become really good at summoning truly original ideas from the void. AI is bad at art. It writes poor poems. Its visual work is derivative and little more than a parlor trick. You may counter that people like AI poetry better than human poetry, or AI visual art, for that matter. But that is because the vast majority of people don’t care about poetry and don’t visit art museums. A trained eye and ear can tell the difference, and it is immense. In 1975, Brian Eno (an important musician, music producer, and visual artist of the past 50 years), together with the painter Peter Schmidt, created Oblique Strategies, a deck of cards aimed at stimulating creativity in its users. The cards read a bit like Zen koans, short bursts of unexpected advice:
Be extravagant
The most easily forgotten thing is the most important
Use cliches
Do something boring
Ask people to work against their better judgement
Once in a while, I feed these recommendations to an LLM of my choice and see what happens. Usually, not much. We still need humans to come up with new ideas, to make wild analogies, and to make true progress. If you work in research, you need these people.
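The experiment is easy to reproduce. Below is a hypothetical sketch of it; the model name, prompt wording, and the choice of the OpenAI Python SDK are illustrative assumptions, not a record of the actual sessions.

```python
# Hypothetical sketch: draw a random Oblique Strategies-style card and ask an
# LLM to riff on a problem under that constraint. Card list, model name, and
# prompts are illustrative only.
import random
from openai import OpenAI

CARDS = [
    "Be extravagant",
    "The most easily forgotten thing is the most important",
    "Use cliches",
    "Do something boring",
    "Ask people to work against their better judgement",
]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
card = random.choice(CARDS)
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": f"Adopt this creative constraint in everything you propose: {card}"},
        {"role": "user",
         "content": "Suggest three unconventional directions for my current research problem."},
    ],
)
print(card)
print(response.choices[0].message.content)
```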