- Sam Altman admits OpenAI rushed into a deal with the Department of Defense, causing public backlash.
- The agreement is being revised to explicitly prohibit domestic surveillance of U.S. persons.
- Altman expresses support for Anthropic, urging the Department of Defense to offer them similar terms.
- The controversy highlights ongoing tensions between AI companies and the government regarding ethical AI deployment.
## Great Success, Then Great Embarrassment
Jagshemash, my name is Borat Sagdiyev, and I am here to tell you about very exciting news from America. OpenAI, the maker of ChatGPT – very nice! – has made a deal with the American military. Like catching wife, this seemed like great success, yes? But then, like discovering your neighbor is wearing a ladies' undergarment, things became… awkward.
## Altman's Apology: "I Make a Funny"
Sam Altman, the head of OpenAI, he say, "I shouldn't have rushed." Like when I tried to learn English in one day before visiting America, it did not go to plan. Altman is now saying the deal was a bit hasty. He worries it looked "opportunistic and sloppy." Like when I try to kiss Pamela Anderson, I too am seen as opportunistic, but I only want to show her my culture.
## No Surveillance, I Swear On My Mother
The big problem is that people think OpenAI will use their AI to spy on Americans. This is bad, like finding out your sister is a prostitute. Altman promises this will not happen. He say the AI system will not be used for "domestic surveillance of U.S. persons and nationals." This is good, because nobody wants Big Brother – or Big Borat – watching them.
## Anthropic: The Jilted Lover
There is another AI company called Anthropic. They also wanted to work with the military, but it did not go so well. It’s like when I wanted to marry Pamela Anderson, but she ran away with Kid Rock. Anthropic was concerned about their AI being used for bad things, like autonomous weapons. Altman now says he wants the military to treat Anthropic fairly. Very nice of him, I think.
## Government Loves OpenAI More
It seems the American government prefers OpenAI to Anthropic. Some say Anthropic is too worried about safety. In Kazakhstan, we are not so worried about safety. We just drive our tractors into the nuclear test zone, and everything is fine. But maybe Americans are more sensitive, like when they see me in my Mankini.
## Is This Great Success or Great Failure?
So, what does all this mean? OpenAI made a mistake but is trying to fix it. They are promising no spying on Americans. They are also being nice to Anthropic. Maybe this is a win for everyone. Or maybe it is just another example of crazy American politics. As I always say, "Chenqui!"