Everybody Hates Sam

2023-05-21


There seems to be a lot of chatter about how Sam Altman is evil and OpenAI is pulling up the ladder by trying to engage in regulatory capture. This was a topic of conversation on Twitter amongst many AI personalities, and it was a good chunk of the All-In Podcast this week. I think the reaction is ridiculous. It strikes me as one of those hot-button issues where people don’t read the details, don’t listen to what was really said, and jump to the first emotional conclusion that triggers their deepest ideological fears.

It seems like there are a few camps that people fall into with respect to this topic:

  1. This is a travesty. Any mention of regulation will kill open source model development. It’s an Orwellian scheme to require everyone to have a license to run software. We have to fight this tooth and nail.
  2. This is required. All of these tech oligarchs are trying to kill all jobs, control the narrative, and don’t care about creating emergent AI overlords that will enslave us all.
  3. This is worth discussing and finding a landing space that works for the economy and society. It’s more nuanced than the ideologues are willing to acknowledge.

I think any intellectually honest person will fall into camp 3, while most emotional people will fall into camp 1 or 2.

To those in camp 1: Society can’t function in an “anything goes” environment. Furthermore, the recommendation was to require licensing for large model training. If large model training costs more than a few million dollars, surely licensing costs are a minor inconvenience. It’s hard to see how this would really stifle innovation at all. And pulling out the pickaxes every time someone suggests that an important technology needs some level of oversight is a surefire way to have your opinion tuned out. It’s entirely unhelpful rhetoric, and it makes it all the easier to paint you as a zealot.

To those in camp 2: I’m hesitant to even touch this topic because I really have no idea what may or may not happen with AGI. But it’s nowhere near settled that LLMs even exhibit emergent behavior. This is still a theoretical debate, and surely there are smart people on both sides of it. I personally find it sensationalist to be so worried about LLM overlords suddenly emerging and taking over the world within days (often called an “AI takeover” scenario). Heavily policing LLM training just because LLMs produce convincing-sounding text seems a bit dramatic.

To everyone in camp 3: You’re probably not alone, you’re just being shouted over by louder midwits. But you should probably be aware that there are risks to your ideal, logical outcome materializing.

  1. Our politicians are ill-equipped for this task. They are transparently selfish, keying in on issues that are obviously important only to them and not to the broader society they claim to protect. And there seems to be no effort to get real technologists involved in any of this decision making. You’d think that with something so uncertain, where only those who really understand how the tech works can imagine the possibilities and outcomes, the brain trust would include a heavy dose of deep tech minds. Instead, it seems to include some of the most tech-illiterate folks the legislative and executive branches have to offer.
  2. I don’t think Sam Altman pushed back on anything, and that’s a strange approach. You can be conciliatory without giving in on everything. It’s useful to at least correct misperceptions, clarify things that are prone to misunderstanding, and present a forceful stance that others can hear and evaluate. Instead, Sam seemed to agree with the full narrative being peddled by the fear-mongering politicians about the evils of technology. I found that to be too one-sided a view and a missed opportunity for those in the know to drive the conversation.
  3. I am not sure where the line will be drawn on who needs to get a license for training large models, how big the models would have to be, how much money they’d have to cost to train, etc. I’m also not sure if this will be a wedge politicians will use to expand their reach. These are all valid concerns that people in camp 1 will press to get answers to.

- daug
