nlsmdn

Your Redlines Suck

I’m way late to this story, but I’m sure you’ve all eagerly awaited my two cents, so here you go!

Quick recap in case you missed it or forgot all about it after the news cycle went vrooom: Anthropic has some redlines about how their LLMs can be used by the US military. They are:

  1. Thou shalt not use the LLM to do mass surveillance on US citizens.
  2. Thou shalt not let the LLM have final authority on the use of weapons (yet).

More details on both.

The Pentagon did not take it well that some upstart company wanted to tell it what it can and cannot do with its own equipment. Which is to say, it will of course often take action on behalf of companies, but it at least wants to pretend to be in charge. So the Pentagon threatened to declare Anthropic a so-called supply chain risk, a designation usually reserved for those nefarious foreign companies. This would mean not only losing all defense contracts, but also forcing other companies to stop using Anthropic products on their military projects. After some back and forth (Pentagon: we already have laws telling us what's allowed; Anthropic: your laws suck), Anthropic ultimately did not back down, and the Pentagon followed through, replacing Anthropic with OpenAI. Somewhere along the way, OpenAI started claiming that they actually also have those two redlines, which raises the question: what the hell is going on?

Amodei and Altman at the AI Impact Summit, not touching this. Source: Bloomberg Television

There are some claims that this was a sham, that Anthropic was getting the boot one way or another, and that it was all just good old-fashioned corruption politics. OpenAI co-founder Greg Brockman donated $25 million to Trump's MAGA Inc. super PAC. That's plausible, sure. In fact, this is how you do it. You don't try to tell the Pentagon what to do, you bribe lobby a politician to tell the Pentagon what to do. But that's really not what I am here for. Let some actual journalist dig into it. What's that you say? Journalism is crumbling because of content collapse due to AI? Well, anyway. Let's talk about the redlines themselves.

No mass surveillance on US citizens? What about… citizens of other countries? If it's morally questionable to do it to Americans, maybe, and I'm just spitballing here, it's also questionable to do it to other people? Yes, yes, Anthropic are patriots, and of course we've got the NSA all up our unamerican backsides anyway, but it would be nice if Anthropic didn't help them get further up in there. And the one about requiring a human in the loop for all targeting and engagement decisions?

Simpsons bird pressing keyboard button

Arise, ye AI researchers

Why did Anthropic stand firm on this? Is it because of the shiny spine and rock-hard moral principles of Amodei? I have my doubts. Executives are, by and large, psychopaths or at least psychopath-adjacent. They do not get to that level if they care too, too much about anything besides the success of the company. And even if Amodei is a fluke exception, the CEO with the heart of gold, Anthropic's shareholders surely are not. Yes, Anthropic surged in the App Store, but that marketing boost does not make up for losing the monetary firehose of US government contracts. So why not give in?

“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!). It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.” — Dario Amodei, leaked internal memo

Amodei is pretty much spelling it out in this internal memo. He spends most of it talking about what OpenAI and Anthropic employees think.

Anthropic advertises itself as the ethical AI company, and attracts at least some people who care about working for the ethical AI company. Caving to the Pentagon on this would not have looked good. And OpenAI employees, self-selected as they are to work for someone like Sam Altman, who will surely go on his own Elon arc eventually, may be persuaded by this to work for Anthropic instead and sleep at night. You can't win the AI race if you don't have the best AI researchers and developers.

Research Engineer, Virtual Collaborator — New York City, San Francisco, or Seattle. Design RL pipelines for enterprise workflows and train Claude for productivity tasks. Annual Salary: $500,000 – $850,000 USD.

And that means that if you are one of those researchers or developers, you collectively have some real power here. AI is definitely useful for some things, but it's also causing many problems: electricity usage and its implications for global warming, the dead internet, artists having their work stolen and losing their jobs, the financial bubble bursting and taking the rest of the economy with it, no one being able to afford a gaming PC anymore. The list is long and full of terrors.

Executives don't care. Investors don't care. Politicians, even the few who do care, are too slow, too technologically illiterate, and too powerless to do much of anything about many of those problems. AI company employees are one of the few groups with both the power and the know-how to take action. So if you're one of them, don't settle for a lukewarm Don't Be Evil or those weakass redlines.

Get together. Demand more. Not just for Americans, but also for immigrants and the rest of the world.
