Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Anthropic's stance on autonomous weapons may not survive the future
Much of the AI world is watching closely as Anthropic tangles with the Pentagon over how the government can use the Claude models. Anthropic has a $200 million contract with the Pentagon, but the contract says the military can't use the AI company's models as the brains for autonomous weapons or for mass surveillance of Americans. Defense Secretary Pete Hegseth insists, after the fact, that the military should be able to use the Anthropic models for all lawful purposes.
Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday morning meeting, in which he reportedly gave Anthropic until 5:01 p.m. Friday to comply with the Pentagon's demand. If Anthropic fails to do so, Hegseth threatened to invoke the Defense Production Act to compel the AI company to supply its models with no guardrails. Hegseth also said the government would declare Anthropic models to be a supply chain risk, meaning that all government suppliers would be directed to avoid or discontinue use of Anthropic models.
Amodei said in an interview after the Hegseth meeting that his company has no intention of complying with Hegseth's demands. (He's got a strong case: After all, government officials agreed to the terms.) Amodei explained that the military relies on human judgment to avoid violating people's constitutional rights. If AI is making the decisions, there will be no human being to object.
Amodei is right, and his company's willingness to stand up for its values is laudable. The trouble is, we're rapidly heading for a future where autonomous systems become the norm in warfare.
For years, the defense establishment talked about keeping a "human in the loop" in AI weapons systems. Often that human is a government lawyer who can make calls on rules-of-engagement issues on the battlefield. Today the Pentagon is talking more about fully autonomous weapons that can manage more of the kill chain, or the series of communications and decisions around the destruction of a target. Military leaders often say that whoever can use technology to shorten the kill chain will win wars.
Electronic warfare, cyberattacks, hypersonic missiles, and drone swarms are making war faster and response times shorter. This may eventually preclude the opportunity for human review and decision-making. Increasingly, the U.S. military may be forced to take humans out of the loop in order to stay competitive with its adversaries.
So the result of Anthropic's standoff with the Pentagon may be that a safety-conscious AI lab is forced out, and a generally less scrupulous company like xAI is chosen as the alternative.
Trump rips off Mark Kelly's idea for powering new data centers
In his State of the Union address, Donald Trump spent a few minutes on the subject of new data centers for AI, which has over the past few months become a hot-button issue for voters. While the tech industry says it needs hundreds of new data centers to support all the AI it's building, a growing number of voters now understand that the power grid improvements needed to run those data centers may increase their energy bills. "I have negotiated the new Ratepayer Protection Pledge," Trump crowed. "We're telling the major tech companies that they have the obligation to provide for their own power needs."
Politicos might recognize that message, as it closely echoes what Arizona Senator Mark Kelly, a Democrat, has been saying for months. Kelly's AI for America plan would create an industry-financed AI Horizon Fund to pay for energy-grid upgrades and workforce reskilling.
According to Kelly's plan, Congress could require data center developers to buy or lease enough land to contain both their facilities and the renewable energy infrastructure to power and cool them. The data center operators could also be required to pay to connect the renewable sources to the local grid, so that any surplus power they generate doesn't go unused.
Trump's idea is more of a suggestion. As of now it's nonbinding, just words. And there was no mention of how the tech companies would generate their own power. Elon Musk's xAI, for example, brought its own power to its massive Colossus data center in Memphis. Unfortunately, that power came from dirty methane-powered turbines, and the facility quickly became one of the area's biggest polluters.
High numbers of young tech job seekers used AI to cheat on skills tests
Cheating on technical hiring assessments went through the roof in 2025, with fraud attempts more than doubling, according to new research from CodeSignal, which runs a developer-skills evaluation platform used in hiring software engineers. The research found that 35% of proctored assessments showed signs of cheating or fraud last year, up from just 16% in 2024. The biggest culprits? Plagiarism, having someone else take the test for you, and sneaking in AI tools that aren’t allowed.
The jump was especially noticeable among entry-level candidates. Fraud rates for junior roles nearly tripled year over year, going from 15% to 40%, making early-career hiring a particularly vulnerable spot in the recruiting pipeline. In a press release accompanying the report, CodeSignal CEO and cofounder Tigran Sloyan partly blamed the normalization of AI tools, noting that 80% of Gen Z reportedly uses AI in daily life, which has made the line between acceptable help and outright cheating much blurrier. "Accessibility to AI also makes unauthorized assistance harder to detect and raises the stakes for maintaining fair and reliable skill evaluation," he noted.
CodeSignal's detection systems, which combine AI analysis, human review, and digital monitoring, identified a few common patterns across flagged assessments. About 35% of candidates frequently looked off-screen, suggesting they were consulting outside resources during the test. Another 23% showed unusually linear typing patterns, where complex solutions just appeared with barely any pauses or debugging. And 15% had answers that looked a lot like known solutions or leaked content. (It's worth noting that these numbers reflect attempts that were actually caught, not cases where someone successfully slipped through.)
The data also surfaced some geographic and procedural gaps. Fraud attempt rates hit 48% in the Asia-Pacific region, compared to 27% in North America. Testing conditions made a big difference, too: Candidates in unproctored environments showed score jumps more than four times larger than those being actively monitored, which pretty clearly shows that proctoring works as a deterrent.
As for how CodeSignal catches all this: the company says it’s spent a decade building out its fraud-prevention infrastructure, which it’s now applied across millions of assessments. It uses a proprietary “Suspicion Score” and leak-resistant test design to flag things like plagiarism, proxy test-taking, unauthorized AI use, and identity fraud.
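CodeSignal doesn't disclose how its Suspicion Score is actually computed. But the general idea of collapsing several fraud signals into one number can be sketched in a few lines; the signal names and weights below are invented purely for illustration:

```python
# Illustrative only: CodeSignal's real Suspicion Score is proprietary.
# These signal names and weights are invented for the sketch.
WEIGHTS = {
    "off_screen_gaze": 0.40,       # candidate frequently looks off-screen
    "linear_typing": 0.35,         # complex code appears with no pauses
    "known_solution_match": 0.50,  # answer resembles leaked content
}

def suspicion_score(signals: dict) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    raw = sum(weight for name, weight in WEIGHTS.items() if signals.get(name))
    return min(round(raw, 2), 1.0)

# A candidate flagged for off-screen gaze and unnaturally linear typing:
print(suspicion_score({"off_screen_gaze": True, "linear_typing": True}))  # 0.75
```

A real system would weight and combine signals far more carefully (and feed in human review), but the cap-and-threshold shape is a common way to turn many weak indicators into a single flag for reviewers.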
More AI coverage from Fast Company:
Harvard study shows AI stock trading rivals many picks made by fund managers
He built a hit podcast about the Epstein files. It's entirely AI-generated
What if the SaaSpocalypse is a myth?
This AI note-taking startup thinks it's building the steering wheel for chatbots
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
Artificial intelligence chipmaker Nvidia on Wednesday announced another quarter of astounding growth as investors try to decipher whether technology's latest craze is overblown hyperbole or a springboard into a new era of prosperity and productivity.

The results for the November-January period blew past the analyst projections that shape investors' perceptions, as has been the case since Nvidia's high-end chips emerged as AI's best building blocks three years ago. Nvidia's fiscal fourth-quarter revenue surged 73% from the previous year to $68.1 billion while its profit nearly doubled to roughly $43 billion, or $1.76 per share.

"No quarter has had more riding on it than this one," said Jake Behan, head of capital markets for the investment firm Direxion. "The AI trade needed some positive news and Nvidia's earnings report brought plenty of it."

The Santa Clara, California, company also provided a forecast exceeding analyst projections, while its CEO Jensen Huang reinforced that demand for the company's chips is still "skyrocketing." That description feeds into Huang's thesis that the AI boom is still in the early stages of a buildout that will reshape society. If Nvidia hits its revenue target for the February-April period, it will translate into a 77% increase from last year, a sign that the company's already phenomenal growth rate is still accelerating.

"AI is here, AI is not going to go back," Huang said during a conference call with analysts. "AI is only going to get better from here."

Despite the stellar results and still-rosy outlook, many investors evidently are worried about a jarring comedown after a three-year boom that has seen Nvidia's market value soar from $400 billion at the end of 2022 to nearly $4.8 trillion now.
After initially rising 4% in extended trading after the latest quarterly numbers came out, Nvidia's stock price backtracked and was slightly down following Huang's upbeat conference call.

Nvidia has regularly cleared the bar set by analysts in the past three years, often by a wide margin, but that hasn't always been enough to satisfy investors who have become increasingly skeptical about whether AI will justify the trillions of dollars being spent to develop the technology. After Nvidia delivered a stellar performance that far exceeded analyst forecasts in its last quarterly report, its stock price still fell by 3% during the next day's trading.

The AI fervor has escalated again during the past month as the four companies leading the AI charge (Amazon, Microsoft, Google parent Alphabet, and Facebook parent Meta Platforms) collectively made commitments to spend about $650 billion this year ramping up their AI computing power. A significant amount of the money is expected to be earmarked to buy more Nvidia chips required to power their AI factories, just as has been the case for much of the past three years as Nvidia's annual revenue soared from $27 billion to $216 billion. Analysts expect the chipmaker's revenue to surpass $330 billion during the company's next fiscal year, a more than 50% increase from the past year.

"We want to take the great opportunity that we have as we're in the beginning of this new computing era, this new computing platform shift, to put everybody on Nvidia," Huang said.
Michael Liedtke, AP Technology Writer
There are few things that unite the world like animal videos. There are also few things that are so readily commoditized.
Both have occurred in the case of Punch, a baby monkey at the Ichikawa City Zoo in Japan. Punch captured hearts around the world after a viral post showed him hugging a stuffed orangutan toy after being rejected by other monkeys.
E-commerce sellers act quickly with monkey merch
Now, the young Japanese macaque and his stuffed friend are available as everything from toys on Etsy to a (decide for yourself if it's AI) children's book on Amazon.
There's also an official Punch Monkey store with products like stickers, shirts, and mugs.
Some of the merchandise even contains hopeful sayings, like "Small, but brave," alongside imagery of the pair.
In fact, the original plush orangutan doll is available for $19.99, as it's one of the Djungelskog soft toys from Ikea.
The Swedish retailer has gone so far as to make an advertisement based on Punch and share it to its social channels.
In it, a stuffed monkey holds the orangutan while real monkeys appear in the background. The copy reads, "Sometimes, family is who we find along the way." It then refers to the stuffed toy as Punch's "comfort orangutan."
Fast Company has reached out to Ikea for more information on the retailer’s orangutan soft toy sales. We will update this post if we hear back.
Meanwhile, a new video appears to show Punch having made some progress with his fellow monkeys. But the young creature has already reached the same status as fellow famous animals like Moo Deng, the pygmy hippo.
Yet another powerful person has stepped down after being named in the Epstein files.
Børge Brende, president and CEO of the World Economic Forum (WEF), the organization best known for hosting an annual summit of world leaders in Davos, Switzerland, has stepped down after an internal investigation into his ties to convicted sex offender Jeffrey Epstein.
In a statement released Thursday, Brende announced that after eight years in his role, he'd be resigning in the wake of the latest batch of files released from the federal investigation into Epstein.
"I am grateful for the incredible collaboration with my colleagues, partners, and constituents, and I believe now is the right moment for the Forum to continue its important work without distractions," Brende said.
WEF co-chairs André Hoffmann and Larry Fink also released a statement on behalf of the Board of Trustees, thanking Brende for his years of service and respecting his choice to step down.
"His dedication and leadership have been instrumental during a pivotal period of reforms for the organization, leading to a successful annual meeting in Davos," they said. They also noted that the WEF's investigation into Brende found no additional concerns beyond what had been previously disclosed.
Though Brende had previously claimed he was "completely unaware of [Epstein's] criminal acts and past" in statements to the Norwegian media, the newly released collection of Epstein files tells a different story. Epstein and Brende stayed in contact long after Epstein was convicted of soliciting a minor for prostitution in 2008, with messages between the two continuing through at least mid-2019, just months before Epstein died in jail.
In one text exchange, Epstein appears to have sent Brende a letter by his lawyers that was published in The New York Times, which included the claim, "The number of young women involved in the investigation has been vastly exaggerated." Brende replied to the letter with a thumbs-up emoji.
Brende's resignation comes less than a year after the last shakeup at the WEF. In April 2025, founder Klaus Schwab stepped down as chair of its board, and a month later, in May, the board opened an investigation into Schwab after an anonymous letter accused him of misusing funds and making inappropriate comments toward women. Between the two scandals, the WEF's reputation as a mecca for world leaders has taken a massive hit.
In Brende's absence, the WEF's managing director Alois Zwinggi will serve as interim president and CEO.
Brende is far from the only executive to step down after appearing in the Epstein files. Since the newest batch of files released on January 30, business leaders including Hollywood agent Casey Wasserman and former general counsel for Goldman Sachs Kathryn Ruemmler have resigned from their positions, while political figures including Britain's Andrew Mountbatten-Windsor, formerly known as Prince Andrew, and Peter Mandelson, the country's ambassador to Washington, have been arrested for their ties to Epstein.
Everyone who has tried to code with Anthropic's Claude Code AI agents runs into the same usability problem: If you run two or three concurrent AI sessions (say, one rewriting your server code, another generating tests, a third doing background research) you are forced to manually hunt through separate terminal tabs, each one generating a relentless stream of machine-readable log entries, just to figure out what each program is actually doing at any given moment. Not only is it hard to follow what's really going on, but not checking constantly can also lead to problems, as agents might stop to ask you something and you won't notice it for minutes or hours.

Developer Pablo De Lucca thought there had to be another way: What if you could create a control panel and alert system that bridges the AI coding agents with your brain in an intuitive way, allowing you to see at a glance what's going on? That's how Pixel Agents was born.

Pixel Agents is an extension that runs inside Visual Studio Code, the most popular code editor on the planet. If you have no idea what I'm talking about, that's okay. The important thing to know here is that the UX of agentic coding could someday soon look a lot different.

While it looks like an adorable 8-bit video game, Pixel Agents is not something you can play. Rather, it transforms the user experience of coding with Anthropic's Claude Code agentic AIs by turning them into sprite characters who live, work, and interact in an office doing your bidding.

The extension draws directly from the language of video games because it's something everyone understands. "I envision a future where agent-based user interfaces resemble a video game more than a traditional IDE," he said in the Reddit thread introducing his tool.
"Projects like AI Town have demonstrated the appeal of visualizing agents as characters within a tangible space, which I find much more engaging than just viewing endless lines of terminal text."

[Image: Pixel Agents]

How Pixel Agents works

The extension achieves this transformation by acting as a silent observer. Think of Anthropic's Claude Code as a worker who keeps a detailed, timestamped diary of every action it takes: every file it opens, every command it runs, every moment it waits. These diaries are stored as JSONL transcript files, essentially a structured log that records the machine's activity in real time. Pixel Agents reads these logs continuously, without touching or modifying Claude Code itself, and uses the entries as triggers to update the state of the corresponding character, animating it on screen and making it talk using speech bubbles when needed.

Developers can customize the virtual office where these characters live to better suit their needs. A built-in layout editor lets them design their own workspace on a grid that can be expanded to up to 64 by 64 tiles, with furniture, walls, and floors arranged to taste. Then, each concurrent Claude Code session spawns one of six distinct animated pixel art character designs into that space. The layout persists across VS Code windows, so the office retains its configuration between work sessions. The result is a spatial map of your entire active workload.

"Each character moves around, takes a seat at a desk, and visually represents the actions of the agent," De Lucca describes on Reddit. "For instance, when coding, the character types; when searching for files, it appears to read; and if it's waiting for input, a speech bubble appears."

[Source Image: Pixel Agents]

Love them bubbles

One of the most persistent frustrations in AI-assisted development is the blocked agent.
That's when a program that has paused its work to request human authorization (for example, permission to execute a potentially destructive system command) sits completely idle. It's usually invisible inside a minimized terminal tab until the developer happens to notice it. Pixel Agents converts that invisible pause into a visual and audio event: an amber bubble over the character's head, with an optional sound notification.

The extension also tackles a second, subtler problem: the spawning of sub-agents. Modern AI coding tools routinely break large tasks into smaller pieces, launching temporary child processes to handle discrete sub-problems before terminating. In a text terminal, the birth and death of these ephemeral processes is nearly invisible and cognitively taxing to follow. Inside the Pixel Agents office, each sub-agent physically materializes as a separate character visually linked to its parent, then disappears with a dedicated exit animation the moment its job is complete. De Lucca says that the sub-agents enter and exit with neat animations reminiscent of The Matrix. That way, the workload hierarchy becomes something you can see rather than something you have to infer from logs.

The extension is free, but the furniture and office tile graphics come from a commercial asset pack called Office Interior Tileset (16×16) by an artist named Donarg, which is available on itch.io for $2. De Lucca has publicly called for community contributions of public domain art assets to fully open and extend the visual ecosystem.

Hopefully people will contribute. Pixel Agents is one of those happy ideas that solve a real problem in a fun way, making the invisible visible and turning the annoying into entertainment. Translating the abstract, parallel labor of multiple autonomous machines into a spatial, ambient picture that a human brain can monitor at a glance is definitely something to admire. Whether that constitutes the beginning of a broader shift in how we design interfaces for AI tools remains to be seen, but as a proof of concept, it is hard to argue with.
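For the curious, the silent-observer idea at the heart of the extension is simple to sketch: parse each JSONL transcript line and map it to a character state. The event types and field names below are invented for illustration; the real Claude Code transcript schema differs.

```python
import json

# Hypothetical mapping from transcript events to character animations.
# "type" and "tool" are placeholder field names for this sketch, not
# the actual Claude Code JSONL schema.
EVENT_TO_STATE = {
    "tool_use:Edit": "typing",        # agent is writing code
    "tool_use:Read": "reading",       # agent is reading a file
    "awaiting_input": "speech_bubble",  # agent is blocked on the human
}

def state_for_entry(line: str) -> str:
    """Map one JSONL transcript line to a character state."""
    entry = json.loads(line)
    if entry.get("type") == "tool_use":
        key = "tool_use:" + entry.get("tool", "")
        return EVENT_TO_STATE.get(key, "idle")
    return EVENT_TO_STATE.get(entry.get("type", ""), "idle")

transcript = [
    '{"type": "tool_use", "tool": "Edit"}',
    '{"type": "awaiting_input"}',
]
print([state_for_entry(line) for line in transcript])  # ['typing', 'speech_bubble']
```

A real extension would watch the transcript file for appended lines and drive the sprite animations from these states, but the read-only, log-driven shape is the same.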
OpenAI has emerged as one of the government's leading providers of artificial intelligence. According to the company, 37 federal agencies now have access to its tech, and about 80,000 government employees are now using it regularly.
This makes OpenAI a frontrunner in the race between the top AI companies to get their tech in front of government users. These workers are just a small fraction of these frontier labs' total customer bases, but they're symbolically valuable. Wooing the U.S. government is important enough to these companies that they're offering their technology at a steep discount. And, in another bid to speed up the administration's use of the tech, several of those labs (OpenAI, Perplexity, and Google) have now earned a fast track to offer their AI on a government-approved cloud.
Of course, working with the U.S. government brings a host of logistical challenges. Between arduous cybersecurity requirements and arcane procurement rules, getting technology to federal agencies can be a real chore. Federal agencies also operate on far tighter budgets than the commercial sector, and are slow to adapt to new tech, which is why OpenAI, like other companies, is offering them access to ChatGPT for basically nothing.
Government contracting can also put tech companies under a microscope. Working for government agencies, particularly more polarizing ones (like the Department of Homeland Security), has become politically toxic, not just to the broader public, but also to tech workers. And as Anthropic is learning in real time, the government can be a troublesome customer. The Pentagon, which has grown highly reliant on Claude, is now threatening to deem Anthropic a supply chain risk should the company not accede to its demands for essentially unlimited usage terms.
Felipe Millon, who leads government sales at OpenAI, spoke with Fast Company about why the AI giant wants to work with the U.S. government, and its progress in getting federal employees to use its tech. This interview has been edited for length and clarity.
I can't imagine that government sales are determinative for the success of OpenAI's business model. Why do this? Why work with the government, if it's so hard and there are all these extra complications involved with it?
I joined two years ago as our first government hire, before we had anything here. It is absolutely very hard. It is also (I won't say not material) but we don't ever expect government sales to be a very large percentage of OpenAI's revenue. If you want to think of it purely from a financial perspective, the reason is very mission-aligned, right? OpenAI has a mission, as a public benefit corporation now, that is to ensure this technology called AGI, artificial general intelligence, benefits all of humanity.
And what we have discussed internally with our leadership team is that . . . creating a technology, AGI, that is better than humans at most economically viable tasks and deploying that to the world will not happen without the U.S. government being involved. They can’t understand it unless they’re users of the technology, right?
The best way to understand what's happening in AI is to be a user and to see it for yourself, whether that's a chatbot, coding, or other tools. We're ready to start seeing where it can add value. And so part of our mission is really to ensure that the U.S. government understands what is coming by being able to unlock that for government use cases. If our mission is to ensure AGI benefits all humanity, one of the ways that [humanity] is benefited is by the delivery of citizen services, whether it be someone who is reliant on food stamps or someone who is getting housing support from Housing and Urban Development, or whether they are paying their taxes in an effective way with the IRS.
So you're now able to host your own AI as a cloud service. Why does that matter, and how does it impact government users?
With the advent of cloud computing, a lot of government tools have moved to the cloud and so off government-hosted computers. Previously, government [agencies] would host their own mainframes and their servers and their own personal data in their own data centers. . . . Business models emerged with cloud computing, where large hyperscalers, mainly Amazon, Microsoft, Google, Oracle. [They] said, "Hey, we can run this at scale, and you can just use this capacity from us on demand as a service." So rather than owning your server, you get compute and storage and things like that . . . and you pay for it.
We use cloud-based services to host our tools, whether that be the models we operate and provide in an API service to developers, or as ChatGPT Enterprise. We would like to use that enterprise version of ChatGPT, at, for example, the Treasury or at HHS or at the State Department. But in order to do so, we need to be compliant with these cybersecurity rules. This accreditation means that now the government agencies are allowed to use our tools with real data and are able to really start getting value.
I understand that you don't work on the defense side of OpenAI's government business. Obviously, we've seen in the news that there can be tensions between what AI companies (or any software company selling to the government) might be interested in or comfortable with, and what the government wants to do. Can you talk a little bit about weighing that when you're thinking about selling to the civilian side of the government?
I’m not going to cover a lot of the national security side that is outside of my specific purview. I focus on the civilian and state and local side. On the civilian side, we rarely encounter these things. It’s rare that these things will come up at places like the Treasury, etc. If they do come up, really, I think it’s just a good faith discussion and negotiation with the government.
I’m wondering about the penetration of OpenAI technology in the government right now, particularly after the OneGov deal, which saw you offer ChatGPT to the government at a major discount.
We have a commercial tool that is available . . . and anyone can download it on their phone. We saw that over 100,000 people had a government email address in ChatGPT, before we even launched an enterprise product. We also have a relationship with Microsoft. It’s a very complicated relationship, but they . . . deploy their own products called Azure OpenAI, which is our model hosted and run by Microsoft. But that’s a Microsoft product, and that product has been used in government for some time, because Microsoft has a very large and established government business. We want to work directly with the government as well. There’s two main barriers that have blocked government adoption of AI: authorization, which we’re just getting with FedRAMP, and then the other one is procurement and budgeting.
HHS, for example, is a very large user of ChatGPT Enterprise. They have tens of thousands of users. The U.S. Treasury also has tens of thousands of users through ChatGPT Enterprise. I would say around 50 or so federal agencies have taken advantage of our OneGov deal and have used it. It has been painful because they have to provide agency-level authorization. So their authorizing officials and their security have to do their own cybersecurity review; either that or they don't use the tool. We actually have our only on-premises deployment with Los Alamos, which was kind of a separate work that we had done. The majority of the national labs are enterprise customers.
Iran and the United States were holding indirect negotiations Thursday in Geneva as talks over Tehran's nuclear program hang in the balance following Israel's 12-day war on the country in June and the Islamic Republic's bloody crackdown on nationwide protests.

U.S. President Donald Trump has kept up pressure on Iran, moving an aircraft carrier and other military assets to the Persian Gulf and suggesting the U.S. could attack Iran over the killing of peaceful demonstrators or if Tehran launches mass executions over the protests. A second aircraft carrier now is in the Mediterranean Sea.

Trump has pushed Iran's nuclear program back into the frame as well after the June war disrupted five rounds of talks held in Rome and Muscat, Oman, last year. Two rounds of talks so far have yet to reach a deal, though.

Mideast nations fear a collapse in diplomacy could spark a new regional war. U.S. concerns also have gone beyond Iran's nuclear program to its ballistic missiles, support for proxy networks across the region, and other issues.

Iran has said it wants talks to focus solely on the nuclear program. Iranian President Masoud Pezeshkian has insisted that his nation is "not seeking nuclear weapons" and is "ready for any kind of verification." However, the United Nations' nuclear watchdog, the International Atomic Energy Agency, has been unable for months to inspect and verify Iran's nuclear stockpile.

Trump began the diplomacy initially by writing a letter last year to Iran's 86-year-old Supreme Leader Ayatollah Ali Khamenei to jump-start these talks. Khamenei has warned Iran would respond to any attack with an attack of its own, particularly as the theocracy he commands reels following the protests.

Here's what to know about Iran's nuclear program and the tensions that have stalked relations between Tehran and Washington since the 1979 Islamic Revolution.
Trump writes letter to Khamenei
Trump dispatched the letter to Khamenei on March 5, 2025, then gave a television interview the next day in which he acknowledged sending it. He said: "I've written them a letter saying, 'I hope you're going to negotiate because if we have to go in militarily, it's going to be a terrible thing.'"

Since returning to the White House, the president has been pushing for talks while ratcheting up sanctions and suggesting a military strike by Israel or the U.S. could target Iranian nuclear sites.

A previous letter from Trump during his first term drew an angry retort from the supreme leader. But Trump's letters to North Korean leader Kim Jong Un in his first term led to face-to-face meetings, though no deals to limit Pyongyang's atomic bombs and a missile program capable of reaching the continental U.S.
Oman mediated previous talks
Oman, a sultanate on the eastern edge of the Arabian Peninsula, has mediated talks between Iranian Foreign Minister Abbas Araghchi and U.S. Mideast envoy Steve Witkoff. The two men have met face to face after indirect talks, a rare occurrence given the decades of tensions between the countries.

It hasn't been all smooth, however. Witkoff at one point made a television appearance in which he suggested 3.67% enrichment for Iran could be something the countries could agree on. But those are exactly the terms set by the 2015 nuclear deal struck under former U.S. President Barack Obama, from which Trump unilaterally withdrew America. Witkoff, Trump, and other American officials in the time since have maintained Iran can have no enrichment under any deal, something to which Tehran insists it won't agree.

The first attempt at negotiations ended, however, with Israel launching the war in June on Iran. A new effort has seen two new rounds of talks in Oman and Geneva so far.
The 12-day war and nationwide protests
Israel launched what became a 12-day war on Iran in June that included the U.S. bombing Iranian nuclear sites. Iran later acknowledged in November that the attacks saw it halt all uranium enrichment in the country, though inspectors from the IAEA, the U.N. nuclear watchdog, have been unable to visit the bombed sites.

Half a year later, Iran saw protests that began in late December over the collapse of the country’s rial currency. Those demonstrations soon became nationwide, sparking Tehran to launch a bloody crackdown that killed thousands and saw tens of thousands detained by authorities.
Iran’s nuclear program worries the West
Iran has insisted for decades that its nuclear program is peaceful. However, its officials increasingly threaten to pursue a nuclear weapon. Iran now enriches uranium to near weapons-grade levels of 60%, the only country in the world without a nuclear weapons program to do so.

Under the original 2015 nuclear deal, Iran was allowed to enrich uranium up to 3.67% purity and to maintain a uranium stockpile of 300 kilograms (661 pounds). The last report by the IAEA on Iran’s program put its stockpile at some 9,870 kilograms (21,760 pounds), with a fraction of it enriched to 60%. The agency for months has been unable to assess Iran’s program, raising nonproliferation concerns.

U.S. intelligence agencies assess that Iran has yet to begin a weapons program, but has “undertaken activities that better position it to produce a nuclear device, if it chooses to do so.” Iranian officials have threatened to pursue the bomb.

Israel, a close American ally, believes Iran is pursuing a weapon. It wants to see the nuclear program scrapped, as well as a halt to Iran’s ballistic missile program and its support for anti-Israel militant groups such as Hezbollah in Lebanon and Hamas.
Decades of tense relations between Iran and the US
Iran was once one of the U.S.’s top allies in the Mideast under Shah Mohammad Reza Pahlavi, who purchased American military weapons and allowed CIA technicians to run secret listening posts monitoring the neighboring Soviet Union. The CIA had fomented a 1953 coup that cemented the shah’s rule.

But in January 1979, the shah, fatally ill with cancer, fled Iran as mass demonstrations swelled against his rule. The Islamic Revolution followed, led by Grand Ayatollah Ruhollah Khomeini, and created Iran’s theocratic government.

Later that year, university students overran the U.S. Embassy in Tehran, seeking the shah’s extradition and sparking the 444-day hostage crisis that saw diplomatic relations between Iran and the U.S. severed. The Iran-Iraq war of the 1980s saw the U.S. back Saddam Hussein. The “Tanker War” during that conflict saw the U.S. launch a one-day assault that crippled Iran at sea, while the U.S. later shot down an Iranian commercial airliner that the U.S. military said it mistook for a warplane.

Iran and the U.S. have seesawed between enmity and grudging diplomacy in the years since, with relations peaking when Tehran made the 2015 nuclear deal with world powers. But Trump unilaterally withdrew the U.S. from the accord in 2018, sparking tensions in the Mideast that persist today.
The Associated Press receives support for nuclear security coverage from the Carnegie Corporation of New York and Outrider Foundation. The AP is solely responsible for all content.
Jon Gambrell, Associated Press
If you’ve been paying attention to AI at all lately, you’ve certainly seen the “Something Big Is Happening” essay by Matt Shumer, or at least some of the reaction to it. In it, Shumer describes how coding, for him, has completely transitioned from manually writing code to simply prompting and approving the near-flawless work done by AI. The piece was meant as a warning to all knowledge workers, essentially saying: AI has taken over my job, and it’s coming for yours next.
There have been countless thought pieces on the merits and flaws of Shumer’s argument, and I have no intention of adding to the pile. But journalism is knowledge work, too, and the field had its own, slightly less viral, moment of AI existential crisis this past week.
The editor of Cleveland.com, Chris Quinn, wrote a column this week describing how a college student who had applied for a reporting job withdrew their application when they found out how the publication uses AI. Besides using AI to help generate story ideas, the newsroom developed an “AI rewrite specialist” to write stories based on the material that reporters gather. By ditching writing, according to Quinn, his reporters have been able to reclaim an extra workday each week.
The backlash was predictably vicious. On X, Axios reporter Sam Allard earned a lot of likes by comparing what Cleveland.com is doing to being an “AI content farmer,” while veteran journalists on Substack expressed varying degrees of outrage and dismay. Most of the reaction was along the lines of this piece from journalist Stacey Woelfel: “Writing is an integral part of the reporting process.”
The AI newsroom split
That’s true, but I think what Quinn describes isn’t so easily dismissed. After all, reporters often work in teams on single articles, with one of them taking the lead on the draft. Did the others then . . . not report? And I’ve certainly been in breaking-news situations where a reporter would text, email, or call in their notes to an editor or writer who would put together the piece.
It’s generally recognized that writing and reporting are different skills, and what Quinn and Cleveland.com appear to have done is use AI to fully separate them. The conventional wisdom on the “correct” way to use AI is to let it take over the tasks that it can do faster and better than humans, freeing them up to do the things that absolutely require human engagement and judgment. In the case of a reporter, that’s talking to sources, learning new things, and earning their trust.
Well, at long last, AI is actually very good at writing. Certainly, much of the text that’s come out of AI systems over the past few years hasn’t done much for its literary reputation (yes, we’re all tired of the rampant em-dashes and the “it’s not X, it’s Y” bits). But if you use the most powerful models with a modest amount of deliberate prompting, they can produce highly competent prose.
And if we’re being honest, highly competent prose is all that’s needed for a large number of reported stories. Many, if not most, news reports are meant to convey basic information about what happened, with little judgment or opinion, and are typically written in AP style, which is essentially a formula. It’s not quite code, but it’s a very functional way of writing. The most important thing is conveying the facts, accurately and with context, as quickly as possible.
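To see why a basic news report is “essentially a formula,” it helps to make the idea concrete. The sketch below is purely illustrative: the `basic_lede` function and the example facts are invented for this newsletter, not taken from the AP Stylebook or from any newsroom’s actual tooling. It simply shows how a straight-news lede reduces to filling in the who, what, when, and where.

```python
def basic_lede(who: str, what: str, when: str, where: str) -> str:
    """Assemble a formulaic straight-news lede from its standard parts.

    This is a toy illustration of the 'formula' behind basic news writing:
    once the facts are gathered, the sentence structure is fixed.
    """
    return f"{who} {what} {when} in {where}."


# Hypothetical example facts, filled into the template:
lede = basic_lede(
    who="The city council",
    what="approved a new transit budget",
    when="on Tuesday",
    where="downtown Cleveland",
)
print(lede)
```

The point is not that real journalism is this mechanical, but that the lower, fill-in-the-facts end of news writing is exactly the kind of structured task language models handle well, while the fact-gathering that feeds the template stays with the reporter.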
Again, it’s important to understand that the reporter is not removed from the process, but their role changes significantly. Just as Shumer found himself becoming a supervisor to an AI building machine, reporters may become operators of writing bots, ensuring they’re crafting stories properly out of the raw material they’ve been given. In the case of Quinn’s newsroom, reporters have final say over the copy.
Bleeding between the lines
None of this is to say this approach will result in a perfect future. There are writers who aren’t great at reporting, and there are reporters who aren’t skilled at writing, but there are plenty who are good at both. Will they need to pick a side: either become a feature or opinion writer, or settle for just doing the reporting part?
And what about skill building? Even if this approach is as successful as Quinn says, how will junior staff become better writers without the day-in, day-out act of writing stories? When Woelfel says writing is integral to reporting, I think he means it’s integral to storytelling, which is an act of curation, prioritization, and expression, all with an audience in mind. This is what Ben Affleck meant when he famously drew a distinction between AI as a craftsman and AI as an artist. But how do you become an artist if AI is doing all the crafting?
The irony of Shumer’s piece is that, while he makes a solid case that AI will soon disrupt most knowledge work, and even name-checks journalism as one of the areas in the crosshairs, he did it in an essay with a distinctly human voice. I honestly don’t know if he used AI to fully or partially write the piece, but I’m certain that if he did, he was also meticulous about every word.
I think that’s a hopeful sign that, even if we relegate some of the craft of writing to AI, we might not lose as much as we might think. Audiences will always demand a human touch, so that touch will need to manifest in some form. It’s true that no one wants to read AI slop. But it might turn out that the most valuable reporting skill in the future will be the ability to turn slop into stories.
As snow piled up in front of bus stops and fire hydrants during New York City’s second winter storm of the year, city workers tried to move fast to remove it before it hardened into ice. A new internal tool makes that job easier to track.
The city’s Department of Sanitation (DSNY) now tags infrastructure that’s been plowed in a mobile mapping tool that employees can update on the go.
“We have started the work of geotagging every single bus shelter and crosswalk,” Mayor Zohran Mamdani said Monday. Overnight, he said, the city cleared more than 1,600 crosswalks, 419 fire hydrants, and nearly 900 bus stops.
[Screenshot: Courtesy of New York City Department of Sanitation]
DSNY handles trash collection, but it’s also tasked with snow removal from city streets and bike lanes, areas it is legally obligated to clear. DSNY sometimes provides supplemental services too, plowing pedestrian infrastructure like curb ramps, unsheltered bus stops, and fire hydrants that property owners are responsible for.
In the past, this supplemental work was done piecemeal, but under Mamdani, the amount of supplemental service has “vastly increased,” says Joshua Goodman, a DSNY deputy commissioner. “That necessitated a need to formally track this work,” he says.
[Screenshot: Courtesy of New York City Department of Sanitation]
Cities from Bellevue, Washington, to Syracuse, New York, use digital maps to show residents when streets get plowed, and New Yorkers can track when their streets were last plowed on PlowNYC, a public site launched in 2013. DSNY needed its own PlowNYC, but for bus stops and more.
“We developed an internal mapping tool, and Sanitation Supervisors make live updates from the field when one of these locations in their assigned section is complete,” Goodman tells Fast Company. “So maybe it’s a bit simpler than the terminology implies (it’s essentially someone making updates to a central database on their work cell phone), but it’s a big development for us, especially so quickly.”
“This is our first storm using it, but it is allowing greater efficiency around clearing these important areas,” he adds.
[Screenshot: Courtesy of New York City Department of Sanitation]
Preparations began following the snowstorm in January, when sites were surveyed for the mapping tool. The interface looks like a typical maps app, and while perhaps simpler than what the idea of “geotagging” might conjure, the database of information the tool stores is vast. New York City has about 13,000 bus stops and about 83,000 crosswalks in commercial corridors. The tool was designed by the DSNY operations management division, which is its data and analytics team.
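Goodman’s description boils down to a simple pattern: a central registry of geotagged sites, with field supervisors flipping each site’s status as it gets cleared. The sketch below is a hypothetical, minimal model of that pattern; the `Site` and `PlowTracker` names, fields, and coordinates are invented for illustration and do not reflect DSNY’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Site:
    """One geotagged piece of infrastructure to clear (hypothetical schema)."""
    site_id: str
    kind: str        # e.g. "bus_stop", "crosswalk", "hydrant"
    lat: float
    lon: float
    cleared_at: Optional[datetime] = None  # None until a supervisor marks it


class PlowTracker:
    """Central store of clearance status, updated live from the field."""

    def __init__(self) -> None:
        self._sites: dict[str, Site] = {}

    def register(self, site: Site) -> None:
        """Add a surveyed site to the registry (done before the storm)."""
        self._sites[site.site_id] = site

    def mark_cleared(self, site_id: str, when: Optional[datetime] = None) -> None:
        """Record that a supervisor reported this site as cleared."""
        self._sites[site_id].cleared_at = when or datetime.now(timezone.utc)

    def uncleared(self, kind: Optional[str] = None) -> list[str]:
        """List sites still awaiting clearance, optionally filtered by kind."""
        return [
            s.site_id
            for s in self._sites.values()
            if s.cleared_at is None and (kind is None or s.kind == kind)
        ]
```

The value of such a tool is less the code than the shared, queryable state: an update made on one supervisor’s phone is immediately visible to everyone deciding where to send crews next, which a clipboard can’t offer.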
To handle snow from the latest storm, DSNY has delayed trash and recycling collection so its workers can prioritize snow removal, and it’s hired hundreds of emergency temporary snow shovelers for $30 per hour. That’s a pop-up snow-shoveling army with tens of thousands of sites and miles of ground to cover. Tracking this work with clipboards wouldn’t be efficient. By developing an internal tool to better monitor the work, DSNY found a quick solution to a pressing problem.
For decades, a legal degree felt like a golden ticket, a safe career choice because a robot could never take a lawyer’s job. Today, consumers are increasingly turning to new technologies like generative artificial intelligence for answers to their legal questions without the assistance of a lawyer.
No wonder: The high cost of legal services places them beyond the reach of most Americans. Some outside the profession see this market failure as an opportunity. Legal technology startups armed with AI agents are securing billion-dollar valuations, and after recent leaps in AI models and new features, including one from Anthropic that can help automate legal tasks, some legal and tech stocks went into shock. The sense that “something big is happening” also left at least some lawyers wondering whether the robots are finally coming for their jobs, and asking whether this is the beginning of the end of the legal profession.
It doesn’t need to be. Lawyers could try to wage what will certainly be a losing campaign against the encroachment of new technologies on areas of American life typically dominated by lawyers. Instead, the American legal profession can learn to run with the machines, not against them, fulfilling its ethical duty to ensure all Americans have access to justice by harnessing these technologies to deliver affordable and accessible legal services at scale.
These two powerful phenomena (the emergence of new technologies and the fact that tens of millions of Americans face their legal problems without a lawyer) will certainly encourage Americans to rely on new and widely available tools regardless of whether the information and guidance these consumers receive is accurate. And it often isn’t. Indeed, according to one recent report, there are nearly 1,000 documented cases of lawyers and unrepresented litigants referencing fictitious court decisions and other legal authorities in court filings because of AI hallucinations: instances where the AI fabricated the legal sources upon which those litigants relied to their detriment, resulting in fines and other punishments from the courts.
The tragic reality driving many Americans to these imperfect alternatives to professional legal help is not that consumers are choosing between a lawyer and a bot; they are all too often facing their legal problem with no lawyer at all. This is especially true in areas where the fees available to lawyers are low, yet the stakes for the consumer are high: where a tenant faces eviction, an immigrant is at risk of deportation, a homeowner might lose their home to foreclosure, or a victim of identity theft faces a mountain of debt they did not accumulate themselves. Roughly 93% of low-income and half of middle-income Americans go without adequate legal representation when confronted with legal problems like these.
This access-to-justice crisis, as bad as it is, leads to larger and even more troubling concerns. When lawyers are not available to vindicate important interests, that threatens other critical values all Americans should cherish: individual liberty and dignity, civil rights, equal justice, and the rule of law.
But this isnt the first time that the legal profession has faced these sorts of challenges. At the turn of the 20th century, industrialization led to reorganization of the bar into larger and larger law firms to respond to the growing and more complex demands of their clients. Simple technologies like the telephone, telegraph, and typewriter made the practice of law more efficient, allowing lawyers to provide more comprehensive services to their corporate clients.
Ironically, many of the measures elites in the bar formulated to respond to these societal and technological changes led to the current market failure. Indeed, instead of welcoming more lawyers into the profession to meet the growing need for its services, elites in the bar erected barriers to entry where few existed before (at least if you were white and male). They built high walls and wide moats to prevent dilution of the legal services market, including requiring an expensive legal education and more challenging bar exams before an aspiring lawyer could begin to practice.
These requirements had the desired effect: limiting access to the profession and artificially inflating the cost of legal services. What is more, many of these barriers endure and continue to drive up the cost of legal services today.
This time is different though. Never before has it looked like technology could truly displace lawyers. Indeed, tools like CitizenshipWorks, an online portal that helps individuals apply for citizenship, and Depositron, which assists tenants in New York seeking a return of their security deposit from their former landlords, are meeting critical needs without the fees a lawyer might otherwise charge for these services. Think of it as the expansion of TurboTax-like products to other areas of the law.
There are certainly situations where there is no substitute for a living, breathing lawyer, like when a criminal defendant is facing a felony charge, or when a complex and novel business transaction requires unique legal skills. But when the alternative is no legal representation at all, as is the case with far too many American consumers with far simpler legal needs (needs that can be met through technological innovations), the profession has an obligation to find ways to address those needs, even when doing so will bring down the price of legal services or displace some traditional legal jobs. In the face of such threats to their position in society, however, lawyers must remember that the point of the legal system is not to serve as a full-employment plan for lawyers; it is to help people solve their legal problems.
This market opportunity is one that lawyers can actually seize. Instead of ignoring new technologies or erecting even higher walls to their adoption, the legal profession should embrace and shape these technologies, creating an array of options for individuals, families, and businesses to address their legal problems at lower cost, and at scale.
Big Law is already adopting many of these new tools to serve its well-heeled clients; the present cost of building effective systems may mean that the widespread adoption of such technologies at the high end of the legal services market actually makes the access-to-justice gap worse, not better.
Instead of exacerbating legal access inequality, the profession should build bridges, aided by new technologies, that will span the chasm between those who require legal assistance and those who can afford it, even if the services that solve Americans’ legal problems in the not-so-distant future are not always delivered by lawyers alone.
There’s plenty of legal work to go around. Lawyers should be the ones figuring out how to put new technologies to use: to serve the legal needs of all Americans in creative, ethical, effective, affordable, and accessible ways. When they do that, they will serve the profession’s most important values and functions, and advance what should be its highest ideals.