AI in 10
The most important AI story—explained in 10 minutes.
Every day, I break down the biggest AI story in just 10 minutes - what it is, why it matters, and how you can actually use it. No tech jargon, just AI made simple.
OpenAI Executive Quits Over Military AI Deal: Industry Splits
Referenced Links:
Future of Life Institute - AI Ethics Petitions
Anthropic's Claude - Alternative to OpenAI
Hugging Face - Open Source AI Models
ChatGPT - OpenAI's Consumer Platform
OpenAI Official Website
Want to go deeper with AI? A community of professionals is learning AI together right now at aihammock.com — show notes, links, tools, and real conversations about how to actually use AI in your life.
Welcome to AI in 10. I'm Chuck Getchell, and every day I break down the biggest AI story in just 10 minutes. What it is, why it matters, and how you can actually use it.

While tech companies are no stranger to executive departures, something happened on March 9th that's got the entire AI world talking, and it wasn't about money or stock options. OpenAI's robotics and hardware leader Caitlin Kalinowski walked away from one of the most coveted jobs in Silicon Valley because her company decided to help the Pentagon build military AI. This isn't your typical corporate reshuffling. This is about the moment when AI stopped being just about chatbots and photo filters and became a tool of war.

Let me walk you through what actually happened here. Kalinowski wasn't just any employee. She was the person responsible for turning OpenAI's software into real physical systems. Think robots, hardware integration, the stuff that makes AI move and act in the real world instead of just sitting on your computer screen. Then OpenAI announced they were partnering with the Pentagon, not just selling them some software licenses, but actively developing AI for military applications. We're talking surveillance systems, autonomous operations, defense infrastructure, the kind of AI that doesn't just write your emails, it potentially decides who gets targeted by a drone.

Kalinowski looked at this deal and said, not for me. She resigned immediately, which is like a head chef walking out of a five-star restaurant because they started serving fast food.

Now, OpenAI's CEO, Sam Altman, defended this as necessary for national security. The company line is that America needs the best AI to stay competitive with countries like China. Fair point. But here's where it gets complicated. OpenAI started as a company with strict limits on military use. They positioned themselves as the good guys, building AI to benefit humanity, not to blow things up.
This Pentagon partnership represents a complete 180-degree turn from those early principles, and Kalinowski wasn't alone in her concerns. Reports suggest there were significant internal tensions at the company. Other employees weren't thrilled about their work potentially ending up in weapons systems either.

Here's what makes this story even more interesting. While OpenAI is jumping into bed with the military, their competitor Anthropic took the opposite approach. Anthropic flat-out refused Pentagon contracts for AI surveillance and autonomous weapons. The result? The Department of Defense blacklisted them. And Anthropic turned around and sued the Pentagon. So now we have this fascinating split in the AI world. Some companies are saying, sure, we'll help you build robot soldiers. Others are saying, absolutely not, and we'll see you in court.

But let's talk about why this matters to you, because this isn't just some Silicon Valley drama playing out in boardrooms.

First, your tax dollars are funding these deals. We don't know the exact numbers, but we're talking about billions in public money going toward AI that might never benefit regular Americans. That's money that could be going toward AI for healthcare, education, or infrastructure instead.

Second, the technology being developed for military use doesn't stay there. It spills over into civilian applications. Those autonomous navigation systems being refined for military drones, they're gonna end up in your delivery apps and self-driving cars. Which sounds great until you realize they were originally designed to track and eliminate targets.

Third, we're entering an era where AI errors in military applications could erode trust in everyday AI. If an autonomous military system makes a catastrophic mistake, how comfortable are you gonna feel letting AI diagnose your medical condition or drive your kids to school?

And here's the privacy angle nobody's talking about. Pentagon AI systems need training data, lots of it.
Where do you think that data comes from? It comes from the same sources that train civilian AI, including data about regular people like you and me. So when the military develops better surveillance and monitoring tools, those capabilities have a way of finding their way into civilian applications too. Your local police department, your kids' school, your workplace: they all love adopting military-grade tech. The line between national security AI and domestic surveillance AI is thinner than a smartphone screen.

Now let me give you something practical you can do about this, because sitting around worrying never solved anything.

First, you can make your voice heard on the ethical side. Organizations like the Future of Life Institute are running petitions against military AI development. You can find them at futureoflife.org and actually participate in this debate instead of just watching it happen.

Second, vote with your digital wallet. If you're uncomfortable with OpenAI's military partnerships, there are alternatives. Anthropic's Claude is actively refusing these deals. You can try it at claude.ai. There are also open-source AI tools on Hugging Face that you can run on your own computer. No corporate partnerships required.

Third, stay informed and engaged. This stuff is moving fast, and public opinion actually matters. Companies like OpenAI are watching social media, tracking sentiment, listening to their users. Join the conversations on Reddit. Search for OpenAI Pentagon and you'll find thousands of people debating this stuff. Your voice adds to that chorus.

Fourth, test these AI systems yourself and report problems. If you're using ChatGPT and you notice it responding to military-style prompts in ways that seem concerning, use those feedback buttons and report biases. These companies need to know when their AI is behaving in ways that make users uncomfortable.

But here's my bigger point. We're at a crossroads moment in AI development.
The next year is going to determine whether AI becomes primarily a tool for human flourishing or primarily a tool for government and military control. And that decision isn't being made in some backroom by politicians or tech executives. It's being made by public reaction, market forces, and individual choices. Companies follow the money and the public sentiment.

If you're just getting started with understanding all this, my AI Explained course walks you through everything in about 30 bite-sized videos. Because the more people understand this technology, the better our collective decisions become.

The Caitlin Kalinowski story matters because it shows us that even people at the highest levels of AI development are wrestling with these ethical questions. She had one of the most prestigious jobs in tech, and she walked away because her conscience told her to. That takes courage. But it also shows us that individuals still have power in this system. Your choices matter, your voice matters, your participation in this conversation matters.

This isn't about being anti-military or anti-security. America does need to stay competitive in AI. But there's a difference between defensive AI capabilities and offensive AI weapons. There's a difference between protecting national infrastructure and building killer robots. The question is whether we can have that conversation as a society or whether these decisions are just going to be made for us behind closed doors.

What happened with Kalinowski suggests that even inside these companies, that conversation is far from over. The fact that she resigned publicly, knowing it would generate headlines, tells us she wanted this debate to happen in the open. And that's exactly where it needs to happen. Because AI is too important to leave entirely to the AI companies. It's too important to leave entirely to the Pentagon. It's too important to leave entirely to politicians who can barely send an email without help.
This technology is going to reshape every aspect of our lives over the next decade. The decisions being made right now about military AI partnerships are going to influence everything from your job security to your privacy to your kids' future. So pay attention, get informed, make your voice heard, and remember that in a free market system, companies ultimately serve customers. If enough customers care about these ethical questions, companies will respond.

The resignation of one hardware executive might seem like a small story in the grand scheme of things, but it's actually a signal flare, showing us that we're at a pivotal moment where individual conscience still matters, where ethical stands can still influence billion-dollar corporations, and where the future of AI isn't predetermined. The question is what we do with that opportunity while we still have it.

That's today's AI in 10. If you want to go deeper and learn AI with a community of people just like you, join us at aihammock.com. I'll see you tomorrow, my friends.