Unsupported arguments in AI hype articles


For the many articles about AI that I read, this is my attempt to poke holes in their arguments and to save my notes in one place:

The Age of Abundance:

  • He glosses over why self-driving cars are taking so long to be adopted.
  • He waves his hand at how easy it supposedly is to replace your support people with a chatbot.
  • He exaggerates how good the current LLMs are at making products today.
  • He assumes an exponential curve will continue and make Gen AI superintelligent soon, and he treats the jump from GPT-3.5 to GPT-5 over the last 3 years as a huge difference.
  • He makes the mistake of assuming we go to doctors or therapists to gain more ‘knowledge’.
  • He doesn’t talk about how expectations for software products will rise as building them gets cheaper, or how an LLM trained on the ‘average’ is supposed to make better-than-average products.
  • He assumes OpenAI/Anthropic will stay the winners, and doesn’t talk about open-source/on-prem models destroying their moat, or how they will survive the competition given their crazy-high burn rate.
  • He doesn’t talk about how generating code has always been the easy part; maintaining it is harder.
  • He says agents will ‘set the direction of the product’ and ‘write the acceptance tests’, but doesn’t explain how that will work, nor acknowledge that defining the ‘what’ is the core of what a developer has always done; everything else is just syntax.
  • He thinks physical manufacturing is a bigger bottleneck than culture, politics, and codifying human processes.
  • He doesn’t touch the obvious argument: if AI is so smart, why can’t I make my own version of all the software I’ll ever need, including my own AI? In his analogy of ‘devs are farmers and the combine was just invented’, he doesn’t address that AI also means these farmers can essentially build their own combines and don’t need to buy them. So what happens to the combine makers?
  • His attention is on AI replacing core societal functions like doctors, lawyers, banking, finance, and insurance, but he doesn’t emphasize what we see with self-driving cars: even with a better safety record, all it takes is one crash to slow the replacement down. The reality is that the Y Combinator companies he’s talking about, valued at millions of dollars for ideas built within weeks using 100% AI-generated code, are the ones that will be easily replaced. Not your doctor or lawyer.
  • Too much emphasis on benchmarks, and not on actual usefulness. Nor does he mention the drawbacks of beating tests whose answers are contained in the training set.