2023-05-21
There seems to be a lot of chatter about how Sam Altman is evil and OpenAI is pulling up the ladder by trying to engage in regulatory capture. This was a topic of conversation on Twitter amongst many AI personalities, and it took up a good chunk of the All-In Podcast this week. I think the reaction is ridiculous. It strikes me as one of those hot-button issues where people don’t read the details, don’t listen to what was actually said, and jump to the first emotional conclusion that triggers their deepest ideological fears.
It seems like there are a few camps people fall into on this topic:

1. Any regulation of AI will stifle innovation and amounts to regulatory capture by incumbents.
2. AI poses an existential risk, so model training needs to be heavily policed.
3. Large-scale AI deserves some reasonable level of oversight, applied without strangling the field.

I think any intellectually honest person will fall into camp 3, while most emotional people will fall into camp 1 or 2.
To those in camp 1: Society can’t function in an “anything goes” environment. Furthermore, the recommendation was to require licensing for large model training. If large model training already costs more than a few million dollars, surely licensing costs are a minor inconvenience. It’s hard to see how this would stifle any innovation at all. And pulling out the pickaxes every time someone suggests that an important technology needs some level of oversight is a surefire way to have your opinion tuned out. It’s entirely unhelpful rhetoric, and it makes it that much easier to paint you as a zealot.
To those in camp 2: I’m hesitant to even touch this topic because I really have no idea what may or may not happen with AGI. But it’s nowhere near settled that LLMs even exhibit emergent behavior; that is still a theoretical debate, with smart people on both sides. I personally find it sensationalist to be so worried about LLM overlords suddenly emerging and taking over the world within days (often called an “AI takeover” scenario). Heavily policing LLM training just because the models produce convincing-sounding text seems a bit dramatic.
To everyone in camp 3: You’re probably not alone; you’re just being shouted over by louder midwits. But you should be aware that there are real risks to your ideal, logical outcome ever materializing.
- daug
--