"If the answer is C++, I have an exorcist on retainer" - cracked me up.
Agree with the sentiment on functional languages; I always get pushback from the infra folks, who insist on their for loops :-p
Thanks, Gappy, you're hitting a key point.
It made me think of the old Thomas Edison quote, "Genius is 1% inspiration and 99% perspiration." People may well be better at the inspiration part, but that leaves a lot the AI can do.
Great note...
My rank order for hiring:
1) Character (integrity, grit)
2) Intelligence (higher-order thinking, EQ)
3) Knowledge (skills)
Knowledge can always be acquired if you have the other traits.
When the tractor was invented, you just needed to learn how to operate it, not 'establish your foundation' plowing fields first.
This all seems about right to me. But I want to understand the *mechanisms*. If these things are Bayesian inference engines that will keep improving at reasoning, then figuring out how to feed them more and better priors to work from (writing questions well, listening carefully to other people for cues about what is needed, bringing in unusual connections) does seem like where we end up in a human-AI intelligence world.
The other set of questions that interests me relates to how their weights reflect "common knowledge" in the technical sense, given the constitutive social role that common knowledge seems to play and how sensitive certain parts of social life are to the partition between private and common knowledge. How can we leverage these machines to help coordinate human behavior, reduce violence, and increase pleasing collaboration? I feel like nobody is asking those questions yet.
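The "better priors" point can be made concrete with a toy Bayes' rule calculation (my own illustration, not anything claimed in the thread): with the same evidence and the same reasoning machinery, a more informative prior produces a sharper posterior, which is one way to read "feeding the model better context is where the human leverage is."

```python
def posterior(prior_h: float, lik_h: float, lik_not_h: float) -> float:
    """P(H | E) via Bayes' rule for a binary hypothesis H.

    prior_h:   P(H), the prior the human supplies
    lik_h:     P(E | H), how likely the evidence is if H holds
    lik_not_h: P(E | not H)
    """
    p_e = prior_h * lik_h + (1 - prior_h) * lik_not_h  # total probability of E
    return prior_h * lik_h / p_e

# Same evidence in both cases: E is 3x more likely under H than under not-H.
vague_prior = 0.5     # an uninformative starting belief
informed_prior = 0.8  # a "better prior," e.g. from a well-framed question

print(posterior(vague_prior, 0.6, 0.2))     # 0.75
print(posterior(informed_prior, 0.6, 0.2))  # ~0.923
```

The numbers are arbitrary; the point is only that the quality of the prior, not just the inference step, determines how good the answer is.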
And the next question is whether this LLM has a ceiling in its capabilities.
A bit like what I expect for strategists: it will become the Best (those who bring thought leadership) vs. the Rest (who may be most at risk).