“…hot air, pumped through a combination of executive bullshitting and a compliant media…”


Ed Zitron takes apart that brutal WSJ interview with the CTO of OpenAI.

I have tried to use AI in many forms — for image generation for this website, for content help (rewriting or looking for ideas), even for things like planning a trip or a day somewhere I’m unfamiliar with. The only applications where it has proven useful in my experiments are coding and solving WordPress issues. For every other task, there’s a human-based service (Unsplash, Reddit, Google) that is more efficient and useful.

This is his thesis:

What if what we’re seeing today isn’t a glimpse of the future, but the new terms of the present? What if artificial intelligence isn’t actually capable of doing much more than what we’re seeing today, and what if there’s no clear timeline when it’ll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?

He also gets into the serious issues that legal challenges from content owners pose for AI companies — specifically, if you can’t articulate what data a model was trained on, and a content owner successfully challenges the use of their material for training, you have to start from scratch.

These models can’t really “forget,” possibly necessitating a costly industry-wide retraining and licensing deals that will centralize power in the larger AI companies that can afford them. And in the event that Sora and other video models are actually trained on copyrighted material from YouTube and Instagram, there is simply no way to square that circle legally without effectively restarting training the model.

There’s a lot more at the post. Zitron is always a good read.