Sam Altman: GPT-4 is embarrassing, the dumbest model you will ever have to use again

Sam Altman gave an interview at Stanford University, which was posted on May 1, 2024. He discussed various aspects of artificial general intelligence (AGI), its implications, and the future of AI technology.

Key points of the interview

Today is the luckiest time in several centuries. The degree to which the world is going to change, and the opportunity to impact that by starting a company or doing AI research, is quite remarkable.

  • It is the best time to start a company since the internet at least, and maybe in the history of technology.
  • The things you can do with AI are going to get more remarkable every year.
  • The greatest companies, the most impactful new products get created at times like this.
  • If Sam were a 19-year-old today, he would go to work on AI research.
  • You learn a lot just by starting a company. There is no pre-startup the way there is pre-med. You learn how to run a startup by running a startup.

You should never take this kind of advice about what startup to start from anyone. By the time something is obvious enough that I or somebody else will say it, it's probably not that great of a startup idea.

  • You should be skeptical about things that a lot of people are going to do anyway.
  • The most important muscle to build is coming up with ideas that are not the obvious ones to say.
  • "How to build very big computers" is the question OpenAI is probably looking at through a lens that no one else is quite imagining yet.
  • AI infrastructure is going to be one of the most important inputs to the future, and that is: energy, data centers, chips, chip design, new kinds of networks, the entire ecosystem.
  • Cost of developing new AI models grows multiplicatively with each subsequent iteration.
  • Giving people capable tools and letting them figure out how they're going to use them to build the future is a super good thing to do and is super valuable.

ChatGPT is not phenomenal, it's mildly embarrassing at best. GPT-4 is the dumbest model any of you will ever have to use again.

  • It's important to ship early and often. Sam believes in iterative deployment.
  • Put the product in people's hands and let society co-evolve with the technology; let society tell you what it collectively, and people individually, want from the technology.
  • Ship imperfect products, but have a very tight feedback loop. Learn and get better. It does kind of suck to ship a product that you're embarrassed about, but it's much better than the alternative.
  • AI and surprise don't go well together. People don't want to be surprised, they want a gradual rollout and the ability to influence these systems.

Whether we burn $500 million a year or $5 billion or $50 billion a year, I don't care, as long as we can stay on a trajectory where eventually we create way more value for society than that, and as long as we figure out how to pay the bills. We are making AGI. It's going to be expensive, and it's totally worth it.

  • Future models will be able to conduct autonomous research.
  • New systems will get dramatically more capable every year.
  • If you are good at a lot of things, you can seek connections across them. You can then come up with ideas that are different from everybody else's.
  • GPT-5 is going to be a lot smarter. GPT-6 will be smarter than GPT-5. We are not near the top of this curve. The gravity of this statement is underrated.

Key points from Q&A session, from Sam's perspective:

  • I don't believe in stifling human innovation. I believe that people will surprise us on the upside with better tools. If you give people more leverage, they do more amazing things.
  • As models get more capable, we'll have to deploy even more iteratively and have even tighter feedback loops.
  • The core of our mission is to make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can't, or for a good reason don't, want to operate.
  • Countries are going to increasingly realize the importance of having their own AI infrastructure.
  • Learning to trust yourself, your own intuition, your own thought process gets much easier over time.

We did not think we were going to have a product when we started. We were just going to be an AI research lab. We had no idea about a language model or an API or ChatGPT. We just wanted to push AI research forward.

  • Life is not a problem set. You don't solve everything nicely all at once.
  • Trust yourself to adapt as you go.
  • You are dramatically smarter and more capable than your great-great-grandparents, because the infrastructure of society is way smarter, and through that it made you. The internet and a huge amount of knowledge are available at your fingertips. You can do things that your predecessors would find absolutely breathtaking.
  • Society is smarter than you are. Society is an AGI.