I’ve been reading the archives of other blogs in preparation for writing this one. As I was digging around Sam Altman’s old posts (which I will admit felt weirdly invasive), I found a record of his thoughts on AI from about two years before he founded OpenAI. In the post, Altman writes:
Yesterday at lunch a friend asked me what tech trend he should pay attention to but was probably ignoring.
Without thinking much I said “artificial intelligence”, but having thought about that a bit more, I think it’s probably right.
To be clear, AI (under the common scientific definition) likely won’t work. You can say that about any new technology, and it’s a generally correct statement. But I think most people are far too pessimistic about its chances – AI has not worked for so long that it’s acquired a bad reputation. CS professors mention it with a smirk. Neural networks failed the first time around, the logic goes, and so they won’t work this time either.
But artificial general intelligence might work, and if it does, it will be the biggest development in technology ever.
I’d argue we’ve gotten closer in lots of specific domains – for example, computers are now better than humans at lots of impressive things like playing chess and flying airplanes. But rather than call these examples of AIs, we just say that they weren’t really that hard in the first place. And to be fair, none of these really feel anything like a computer that can think like a human.
He definitely got the big question right. Altman saw AI’s potential at a time when others were consistently underrating it. He also correctly identified how the bias against AI crept in through the continual moving of the goalposts for what counts as genuine AI. Altman eventually dove in and has now built a generational (if not civilizational) company in the space.
He then explains what’s most interesting to him about AI:
The biggest question for me is not about artificial intelligence, but instead about artificial consciousness, or creativity, or desire, or whatever you want to call it. I am quite confident that we’ll be able to make computer programs that perform specific complex tasks very well. But how do we make a computer program that decides what it wants to do? How do we make a computer decide to care on its own about learning to drive a car? Or write a novel?
It’s possible–probable, even–that this sort of creativity will be an emergent property of learning in some non-intuitive way. Something happened in the course of evolution to make the human brain different from the reptile brain, which is closer to a computer that plays pong. (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct–they just search a gigantic solution space very quickly.)
And maybe we don’t want to build machines that are conscious in this sense. The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.
Altman gives the impression that he thinks a lack of consciousness/qualia/agency/etc. might turn out to be a constraint on computer creativity, contrasting the “specific complex tasks” an artificial intelligence might tackle with creative pursuits better suited to artificial consciousnesses, if not just humans. He worries that “we [might] never figure out how to make computers creative.” Isn’t it incredible, then, that Altman is best known today for creating an AI that isn’t especially good at any specific complex task but rather excels at producing human-like thought, including explicitly creative pursuits like poetry and philosophical speculation? And he accomplished this not by endowing an artificial consciousness with qualia or free will, but by scaling and improving upon existing AI methods.
Altman got the headline right (AI was underrated) but got the details wrong (what would make AI interesting). Sometimes founders work out a complete armchair theory of the world ex ante that usefully serves as their startup playbook, but often it’s enough to end up in the right area and feel your way to the best path forward.