Since taking over the coffee chain in 2024, Starbucks CEO Brian Niccol has been on a mission to go "Back to Starbucks" and rekindle the feeling of warmth inside the coffee giant.
That's led to new store designs, new employee training, new uniforms, new menu items, and new staffing, all of which have helped the company break out of a two-year sales rut.
But as part of this deep strategic exploration, Niccol made two specific asks of Starbucks's cross-discipline design team that are being revealed today: an iconic new cup and a new plush chair.
"As the literal touchpoints between the consumer and the company, they are the biggest signals we have of warmth, comfort, and generosity," says Dawn Clark, SVP of global concepts and design at Starbucks.
The new Starbucks cup (ceramic in every size)
[Photo: Courtesy of Starbucks]
The new Starbucks cup is not just one cup, but five different glazed ceramic options, each offered to customers who stay to enjoy their coffee. Built to accommodate drinks ranging from a single shot of espresso to a venti latte, the cups come in white (inspired by the takeaway cup, with a hand-painted green siren and rim) and green (where the siren is embossed). Notably, the cups all share the same tapered silhouette.
Clark says the cup design took inspiration from a blend of Italy's espresso culture and Starbucks's own mercantile and coffee-trading history. The result lands somewhere between European sensibility and American utility. After concepting different designs, the team came up with four front-runners, which they 3D printed and shared with stakeholders across the company, ranging from corporate executives to on-the-ground baristas. They refined the designs and rendered them in ceramic before making the final choice.
The company knew it wanted a single, strongly branded silhouette across every size, which limited what could work. "It's a really big design challenge because not all those forms that looked good in a short or tall looked great in a mini or large size," Clark says. The other, perhaps bigger, problem was drinkability. Different geometries affect how the coffee flows into your mouth, and those geometries don't always scale well. The cups also needed to survive countless trips through the dishwasher.
[Photo: Courtesy of Starbucks]
The wide-mouth, tapered design won out because it satisfied every one of those requirements. But most of all, Clark says, it was just a really nice vessel for drinking, shaped to let the coffee flow perfectly from the cup to your lips.
From what I gathered, Starbucks may eventually choose to sell these mugs as merch, and it's easy to imagine the company introducing special colorways for limited-time offerings. A toasty orange version for PSL season feels almost inevitable.
The new Starbucks chair (in green this time)
[Photo: Courtesy of Starbucks]
While cups are intrinsic to coffee, the new Starbucks chair requires a bit more explanation. Even brand devotees may have forgotten a piece of lost history in Starbucks lore. In the '90s, when Starbucks took lattes mainstream across America, many stores had one or two special, extra-wide, purple velvet chairs. They were an almost Dr. Seussian take on the hyper-plush living room seating of that decade, meant to shake up the rigidity of Starbucks's design at the time while urging you to stay a while.
"What was great about that chair is it was oversized; it wasn't practical. It was very much like you could maybe have two people sit in it, you could put your feet up, swing your legs over the arm. There were a lot of ways to occupy it," Clark says. "That was a big part of the inspiration [for a redux], and also the lushness of the texture."
Indeed, Niccol told me last year that an updated chair needed to inspire something akin to FOMO when sitting down at Starbucks: "It's got to be the seat that when you walk in, you're like, 'Man, I can't wait for him to get up. I'm hopping in that chair the second he does.'"
Starbucks landed on a design that resurrects hefty '90s furniture and adds a dollop of midcentury design. I find myself sucked back into 1996 just looking at it.
You see the same voluptuous arm silhouettes from the original chair (don't worry, they're still fixing that ruching), but it's framed in wood (albeit with far more weight than you'd see in traditional midcentury design, or even the rest of Starbucks's midcentury-inspired furnishings). The visual heft of the entire chair is intentional, built to exude confidence that it can accommodate your most leisurely posture.
[Photo: Courtesy of Starbucks]
"It's a little overly generous in its invitation to be comfortable," Clark says.
Like the cup, Starbucks developed the new chair in-house. The process began with an adjustable ergonomic model. Built from a CMF frame and sparse cushioning, it looks straight out of IKEA, but the system allowed the team to study how it would feel to sit (and eat and drink) at various angles. From there, they built a cardboard massing model to lock in the chair's curves and proportions. For the final production sample, the company went with its rich Starbucks green because, gosh, is that purple a statement. But more colors could enter the mix in the future.
No doubt, this is a premium chair for a QSR; most stores may get only one or two. Its inevitable cost and maintenance are probably why Starbucks ditched the purple chair years ago, and I recall those looking pretty gnarly before they up and disappeared. Clark believes the new velvet fabric will be easier to clean, and that Starbucks locations can get five to ten years out of a chair before retiring or even reupholstering it. However, she also insists that isn't the chief concern.
"Part of what we're in a way saying, it doesn't exist to be convenient or easy to maintain. It exists to provide comfort. And we're willing to take on the challenge," Clark says. "Of course we designed it to be up to the test for all the use it gets, and we'll have to take care of it . . . but it's something we're committed to."
The new cups and chairs will arrive in U.S. stores toward the end of 2026, while the cups are slated to go abroad in 2027. And they'll undeniably add a little more oomph to Starbucks's turnaround as it works to make its cafes once again a place where you want to sit and stay a while.
"I think that it really is more than just a chair or cup," Clark says. "These are the most intimate things. These are the things you occupy or touch. We feel these are really intrinsically linked to everything about our brand."
When the email pinged in my inbox, I didn't even bother to open it immediately. I already knew what it was. One glance at the subject line told me everything.
After enough time on the job hunt, you develop a sixth sense for HR language. The preview text, "Thank you for taking the time," said it all. It's the standard soft intro to bad news: Your application was amazing . . . but not amazing enough.
The blow softens once you've received a few of these. But the emotions that follow resemble the five stages of grief: denial, anger, bargaining, depression, and eventually, acceptance. I ran the gamut of these feels when I got my latest rejection for a role that seemed promising all the way through the final interview. Here's how I felt and acted after I opened that message and faced reality.
Denial
Nah, this can't be right. I refresh my inbox three times, as if the letters in the message will magically rearrange themselves into a sequence that reveals a start date. Could it be a system glitch? Maybe they sent this to the wrong candidate? (Believe it or not, it's happened to me before.) I mean, I was perfect for this role. Remember in the final interview when I gave that answer about cross-functional collaboration that made the hiring manager nod so hard I thought she had that new J. Cole playing in her AirPods?
I draft a response. "Thank you for your consideration. However, I believe there may have been an error . . ." I let it sit in my drafts folder for exactly 11 minutes before deleting it. Even my delusions have limits. But I do check LinkedIn to see if they've posted the position again. They haven't. Which means they hired someone. Which means this is real. Which leads me directly to . . .
Anger
I'm in my feelings now. Who did they hire? I need to know immediately. I'm on LinkedIn doing forensics like I'm on The First 48. I filter the company's employees by most recent hires. There he is. Brayden. Of course it's a Brayden. His profile says he "thrives in ambiguous environments" and has "experience with stakeholder management." My profile says the exact same thing but with better action verbs. Ugh.
Bargaining
Okay, let me think about this objectively. What could I have done differently? Maybe I shouldn't have mentioned I needed to check the start date because of a vacation I had already booked. Maybe that made me seem uncommitted. Or maybe I should've asked more questions at the end. Did I seem too confident? Not confident enough? Maybe I talked too much . . . or too little. Should I have laughed at the hiring manager's joke about getting her ducks in a row? It wasn't funny, but maybe that was the test.
I consider emailing the recruiter to ask for feedback. Just a friendly note. "Hey! Would love to learn what I could improve for next time :)" The smiley face is crucial. Makes me seem coachable and not at all dead inside. I type it out. I don't send it. I know what they'd say anyway: "We had many qualified candidates." Translation: Brayden's uncle plays golf with the CEO.
Depression
It's been three days since the rejection. I'm still thinking about it. I've applied to 16 other jobs since then. Each one feels like I'm rolling up a resume, stuffing it into a Dos Equis bottle, and chucking it into the ocean. My Easy Apply count on LinkedIn is getting embarrassing. I'm tailoring cover letters for positions I'm overqualified for, underqualified for, and in some cases, not even sure what the job actually is. "Customer Success Champion" could mean literally anything.
I think about Brayden again. Brayden’s probably in orientation right now, getting his company laptop, meeting the team, hearing about the unlimited PTO that no one actually takes. Brayden’s probably not wondering if his name sounded too ethnic on the application. Brayden’s probably not calculating whether the commute is worth it while also knowing he won’t get the offer anyway. Brayden’s just . . . winning.
I eat leftover jerk chicken at 11 a.m. and consider whether this is rock bottom or if rock bottom is a few more rejection emails away.
Acceptance (sort of)
Here's what I know: This isn't personal, even though it feels personal. Corporate America isn't rigged. It just tends to work out beautifully for guys named Brayden. That company wasn't the one. Maybe the role wasn't even that good. The Glassdoor reviews mentioned a "fast-paced environment," which is code for no work-life balance anyway.
I update my resume again. Not because I think it’ll make a difference, but because I need to feel like I’m doing something. I tweak one bullet point. I remove an unnecessary comma. I save it as Resume_FINAL_v3_ACTUAL_FINAL_Feb2026.pdf knowing damn well there will be a v4.
And then I do what I always do: I apply to another job. Because there's only one thing worse than getting rejection emails, and that's not getting any emails at all.
Hello again, and thank you, as always, for spending time with Fast Company's Plugged In.
In a remarkably influential 2011 Wall Street Journal op-ed, Netscape and Andreessen Horowitz cofounder Marc Andreessen declared that software was "eating the world." From entertainment to commerce to transportation, he argued, startups with code at their core were disrupting many of the world's most deeply entrenched businesses. That was just the beginning, he warned: "Companies in every industry need to assume that a software revolution is coming."
Fifteen years later, we know that some of the disruptors Andreessen cited, such as Zynga, Groupon, and Skype (RIP), did not, in fact, eat the world. His larger point, however, played out much as he predicted. Software really does run everything these days. And many of its purveyors are among the most successful companies in the world.
Recently, however, Wall Street has been spooked by the possibility of another sea change in the making: AI might be on the verge of eating software. The sudden leap forward in the capability of software-writing LLM tools such as Anthropic's Claude Code has investors worried that the corporate behemoths presently making tidy profits by selling subscription-based software, particularly for enterprise customers, might find themselves unable to compete with apps coded by AI for very little cost.
This theoretical collapse of the software industry is known as "the SaaSpocalypse," a name I hate but can't quite avoid acknowledging. (I promise not to bring it up again.) It's reflected in the stock performance of such seemingly robust companies as Workday (down 35% year to date), Adobe (-26%), Salesforce (-25%), Autodesk (-21%), and Figma (-19%). On February 23, after Anthropic published a blog post touting Claude's ability to modernize software written in the 66-year-old COBOL programming language, IBM, COBOL's kingpin for most of that time, saw its biggest one-day stock drop in more than a quarter century.
Investors are right to expect that AI will radically change software as a business in the coming years. The evidence is already here, in the form of developments such as Block, the parent company of Square, announcing on February 26 that it's terminating 40% of its 10,000 employees. Explaining the brutal reduction, CEO Jack Dorsey contended that AI will allow a smaller team to accomplish more and do it faster, and said he was getting ahead of an inexorable industry-wide trend. What happens next remains to be seen, but Block will surely never be the same.
Still, Wall Street's apparent belief that AI spells bad news for today's software titans is premature, and possibly just misguided, period. It's certainly heavy on vibes rather than hard data: Monday's dip in the S&P 500 apparently stemmed in part from a dystopian imaginary June 2028 memo published by Citrini Research. Laying out a sweeping nightmare involving AI crushing the U.S. economy, it name-checked specific companies such as DoorDash and Zendesk as being incapable of competing with AI-infused apps and agents. Well, maybe, though even the document's authors admitted they were "certain some of these scenarios won't materialize."
In a little over two years, it will be possible to assess what Citrini got wrong and right. For now, it remains equally possible to imagine futures in which 2026's software kingpins aren't mowed down by AI, even if the technology's coding chops continue to improve indefinitely rather than hitting a wall.
For one thing, the software business isn't solely about writing software. It requires selling it, sometimes in the form of hefty annual contracts, and supporting it when things go wrong. It will be difficult for AI (or even most AI-savvy startups) to take on these tasks without the human-powered infrastructure that major software companies have built, often over decades.
In Sun Microsystems cofounder Scott McNealy's memorable phrase, enterprise customers like having "one throat to choke": someone with the bottom-line responsibility of making them happy. They wouldn't get that by vibe-coding their own in-house replacements for major apps, or by buying them from a tiny company offering look-alike equivalents. Instead, they have a powerful incentive to keep doing business with companies that have already shown an ability to deliver.
People who use AI to write their own apps might even develop a newfound appreciation for all the ways software suppliers make their lives easier. For instance, last April I wrote about the note-taking app I'd vibe-coded for my own use, and said I'd put it together in a week. What I didn't know at the time was that I'd spend the next 11 months fiddling around with new features, squashing bugs, and stressing over the fact that I, not Apple, Google, or Notion, bear responsibility for the app's security and data integrity. I'd do it all over again, but because it's been great, mind-expanding fun, not because it's saved me money or time.
It's far too early to conclude that existing software giants won't use AI to grow even more dominant. After all, they have considerable resources to throw at that challenge, and deep knowledge of the industries they serve. AI could be a potent accelerant to their growth, or just a way to slash costs by reducing human headcount. But there's little evidence it's on the cusp of figuring out how to build and market products humans will find compelling without plenty of guidance.
Even as the technology puts pressure on software companies, say, by introducing enough competition that it's tougher to endlessly raise prices, they might be intrepid enough to find a new path forward. IBM, for example, isn't short on AI savvy of its own; if the company can't find a way to make money from customers wanting to modernize COBOL-based platforms, it's IBM's own fault, not Anthropic's.
Yes, history is full of sobering case studies of once-mighty software companies that got overwhelmed by technological change. In the 1990s, for example, the PC's shift from the text-based DOS to the graphical interface of Windows was ruinous to big names such as Lotus, WordPerfect, and Ashton-Tate, none of which bet big enough on Windows early enough. Their miscalculation was unquestionably Microsoft Office's gain.
But it doesn't always pan out that way. In the following decade, Office faced a similar threat as productivity migrated to internet-based tools. When Google launched products such as Docs and Sheets, stuffed them with innovative features, and offered them for free, observers thought that might be terrible news for Microsoft. Not so: The company reacted skillfully enough that Microsoft 365, as it calls Office in its current form, is bigger than ever, to the tune of $95 billion in revenue last year.
In Silicon Valley, it has become fashionable to tell workers that the only way to remain relevant is to embrace AI rather than fear it. As Nvidia CEO Jensen Huang puts it, "You're not going to lose your job to an AI, but you're going to lose your job to someone who uses AI." The same principle applies to today's software companies. They're not going to be killed by AI, only by other companies that are better at seizing the opportunities it offers than they are.
You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on fastcompany.com, you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.
More top tech stories from Fast Company
If technology could bring traffic fatalities down to nearly zero, why not embrace it? What the elevator can teach us about self-driving cars. Read More
Anthropic's autonomous weapons stance could prove out of step with modern war. The Pentagon is demanding that the AI company remove the safety guardrails from its AI models to allow all lawful uses. Read More
Is Apple about to debut a new iPhone camera feature? What 'variable aperture' is and why you should care. Read More
AI can write now. What happens to reporters? If bots can reliably draft copy, 'something big' might be happening to the job of a journalist. Read More
Apple killed Dark Sky. Now its creators are trying again with a new weather app. Acme Weather brings back the team behind the cult-favorite forecast app, with new features designed to show uncertainty. Read More
15 incredibly useful things you didn't know NotebookLM could do. From managing meetings to maintaining your car, Google's Gemini-powered research tool can provide all sorts of eye-opening revelations. Read More
Last night's surprise announcement from Netflix that it was abandoning its Warner Bros. takeover bid in the wake of a "superior" offer from Paramount Skydance has sent shockwaves through both Hollywood and Wall Street. And investors in all three companies have reacted strongly.
Here's what you need to know.
What's happened?
Yesterday, Warner Bros. Discovery said it had determined that a revised bid from Paramount Skydance was a "superior proposal" to Netflix's long-standing offer of $82.7 billion.
Paramount, which has been in a hostile bidding war with Netflix over the movie studio, issued a new proposal to Warner Bros. on Tuesday.
That revised proposal saw Paramount offer roughly $111 billion for all of Warner Bros. Discovery's assets.
On a per-share basis, Netflix was offering roughly $27.75, while Paramount was offering $31.
Yet those numbers aren't exactly an apples-to-apples comparison. That's because Netflix was looking to acquire only Warner Bros. Discovery's movie and streaming divisions, including the Warner Bros. film studio and HBO Max streaming service.
Paramount's offer, by contrast, covers all of Warner Bros. Discovery, including its television properties, which consist of CNN, Discovery Channel, Turner Classic Movies, and many more.
Executives at Warner Bros. Discovery had made it no secret that they were more amenable to a takeover by Netflix than by David Ellison's Paramount Skydance, but in the end, Hollywood is a business, and money speaks louder than personal preferences.
And that money made Warner Bros. Discovery deem Paramount's offer a "Company Superior Proposal" as defined by its current Netflix merger agreement. As a result, Netflix had four days to come back with a counteroffer.
Netflix says WB is not worth the higher price
But in a move that surprised many in Hollywood and on Wall Street, Netflix didn't need four days.
Within hours of Warner Bros. Discovery designating Paramount's offer superior, Netflix announced that it was bowing out of the acquisition battle.
In a statement announcing the surprising withdrawal, Netflix's co-CEOs, Ted Sarandos and Greg Peters, said that the company was disciplined and that after Paramount Skydance's new offer, a Netflix-Warner Bros. deal was no longer financially attractive.
The CEOs added: "This transaction was always a nice to have at the right price, not a must have at any price."
For its part, Warner Bros. Discovery issued a statement from CEO David Zaslav, saying, “Netflix is a great company and throughout this process Ted, Greg, Spence and everyone there have been extraordinary partners to us.”
“We wish them well in the future,” Zaslav added. “Once our Board votes to adopt the Paramount merger agreement, it will create tremendous value for our shareholders.”
NFLX, PSKY, and WBD stock prices swing
While Hollywood will be dealing with the surprise withdrawal of Netflix's offer for some time to come, investors reacted immediately, impacting the stock prices of all three companies involved in the dramatic announcement.
Despite Netflix walking away from its deal (and thus abandoning the possibility of owning the lucrative film and streaming rights to such properties as Batman, Harry Potter, and Game of Thrones), shares of Netflix (Nasdaq: NFLX) are currently up significantly in premarket trading.
As of this writing, the stock is up nearly 7.4% to $90.85. This rise might seem counterintuitive at first, considering the IP that Netflix is walking away from, but it highlights how apprehensive Netflix investors have been about the proposed Netflix-Warner Bros. merger since it was announced in December.
At the time of the announcement, Netflix shares were trading at around the $103 mark. As of yesterday's market close, before Netflix announced it was pulling out of the deal, NFLX shares had declined nearly 19% since the merger announcement.
Investors in Paramount Skydance Corp. (Nasdaq: PSKY) also seem satisfied by the news, with PSKY shares up 7.25% over yesterday's closing price of $11.18, to $11.99.
So why are Paramount investors happy? It largely comes down to the fact that Paramount needs Warner Bros. more than Netflix did. Netflix is the dominant streamer across the globe, while Paramount is a relatively small player compared with Netflix, Disney, and Warner Bros. (via the latter's HBO Max).
If Paramount is to stay competitive in the future, it needs to build up its IP portfolio so that it can continue to attract paying subscribers. By acquiring Warner Bros. Discovery, it can do just that.
And then we get to shares of Warner Bros. Discovery (Nasdaq: WBD). Yesterday, the stock closed at $28.80.
Currently in premarket trading, WBD shares have fallen about 2% to $28.22. While Paramount's offer is locked in at $31 per share, today's fall probably signals that investors are a bit disappointed there was no counteroffer from Netflix, which could have made their shares even more valuable.
A Paramount Skydance deal is still far from certain
The fact that WBD shares are down likely also reflects some ongoing uncertainty in investors' minds.
While Paramount Skydance is now the only bidder for Warner Bros. Discovery, and the latter seems happy with the proposal, that doesn't mean the two companies will certainly merge.
A combined Paramount Skydance-Warner Bros. Discovery raises a lot of antitrust and consolidation concerns for both Hollywood and linear and cable television.
Given that Paramount Skydance is interested in acquiring both WBD's film and television properties, the merger will likely face even higher scrutiny than a Netflix-Warner Bros. merger would have.
Some believe that due to the Ellisons' friendly relationship with President Trump, a Paramount Skydance-Warner Bros. Discovery merger may have smoother-than-expected sailing.
However, ultimately, it will be up to the Justice Department to approve the merger in the United States.
Even if the merger is approved in the United States, that doesn't mean other regulators around the world will follow suit, and that uncertainty will weigh on investors' minds for some time.
If you don’t want to be left behind by the AI revolution, you really need to start paying for it.
At least that's become the common refrain among some AI enthusiasts, who seem intent on instilling FOMO in less technical users. The free versions of ChatGPT and Claude, they say, are woefully inadequate if you want to understand where things are headed, so stop being a cheapskate and hand over your $20 (or $200) a month like the rest of us.
“Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone,” HyperWrite CEO Matt Shumer recently wrote in a widely shared essay on AI’s impact. “The people paying for the best tools, and actually using them daily for real work, know what’s coming.”
I’m giving you permission to safely ignore this advice, and to not feel bad about it. While an AI subscription might make sense if you’re running into specific frustrations with the free versions, you can still get plenty of mileage without paying, and learn a lot about the state of AI in the process. Don’t be frightened into buying something that hasn’t actually proven its value to you.
The state of the art is still free
One way that AI boosters try to scare you into paying for AI is by arguing that the free versions are already obsolete, so any negative impressions you might’ve gotten from them are misguided.
“Part of the problem is that most people are using the free version of AI tools,” Shumer wrote in his essay. “The free version is over a year behind what paying users have access to.”
This claim is provably false:
The free version of ChatGPT includes access to GPT-5.2, OpenAI’s latest model, which launched in December.
The free version of Google Gemini includes access to Gemini Pro 3.1, which launched on February 19.
Claude’s free version doesn’t include Opus 4.6, but has the same Sonnet 4.6 model that the paid version uses by default. It launched on February 17.
Microsoft 365 subscribers can also select “Smart Plus” in Copilot to use GPT-5.2, without a premium AI subscription.
xAI’s Grok 4 is available for free.
Of course, the free versions of these tools all have usage limits, but so do the paid ones. When I signed up for a month of Claude Pro to test Opus 4.6, I quickly ran into yet another paywall. To continue the conversation, I had to either buy pay-as-you-go credits or upgrade to the $200-a-month Claude Max plan. Without paying more, I couldn't use Claude at all, not even Sonnet 4.5, until my limit reset. My main takeaway was that I should have just stuck with Sonnet in the first place.
Instead of paying for some vague feeling that you’re getting the state of the art, you should play around with what AI companies offer for free. Make them demonstrate that the results are meaningfully different before you consider paying them, not after.
AI should prove itself to you, not vice versa
For AI boosters, the corollary to paying for AI is that you also need to throw an immense amount of time into figuring out what it's for. Ethan Mollick, for instance, writes that you should "resign yourself to paying the $20 (the free versions are demos, not tools)," then spend the next hour testing it on various real-world tasks.
Sorry, but this is backward from how software as a service should work. It’s not your job to invest time and money into convincing yourself that AI is worth more time and money. Let the AI companies do the convincing, and don’t fall prey to FOMO in the meantime.
Playing the field is just as instructive
If you do commit to paying for an AI tool, chances are you won’t use other AI tools as much, or at all. But that in itself isn’t a great way to understand the state of AI.
What you should be doing instead is bouncing around, taking full advantage of what each AI company offers for free. That way, you’ll get a sense not just of the subtle differences between large language models, but also the unique features that each AI tool offers. You’ll also be less likely to run into usage limits, the only trade-off being that your past conversations will be scattered across a few different services.
Such behavior is, of course, wildly unprofitable for all the companies involved. But again, that’s not your problem. If you’re getting sufficient value out of free AI tools, the AI companies will have to tweak their free offerings accordingly (for instance, with ads) or come up with new features worth paying for. Claude Code, for instance, is available only with a subscription, and over time we may see more paywalled tools (like Claude Cowork, which is still in early development) that cater to specific tasks or verticals.
Until that happens, enjoy the free versions of AI tools, and rest easy knowing that you’re not missing much.
It's the last week of Black History Month (BHM), and it's clear Americans are over performative values. Trite BHM-inspired merchandise sits untouched on retailer shelves while the media is abuzz covering the artistry, activism, and symbolism of Bad Bunny's Super Bowl halftime show. The signal is clear: Consumers are looking to brands for real solutions to real problems, not products that commodify culture.
Most companies build everything from advertising to AI for the "average user," but in doing so, they react to markets rather than lead them. Strategic leaders look to growth audiences, underserved groups who are the fastest-growing demographics, as lead users. They are the "canaries in the coal mine" because they navigate the highest levels of systemic friction, making them the first to experience "average" design failures.
What does championing these lead users look like at a communications, product, or systems level? It looks like Elijah McCoy automating engine lubrication, an innovation bred from the friction between his engineering degree and the menial labor he was forced to perform, thus creating the "real McCoy" quality standard. It looks like Jerry Lawson changing the economics of the gaming industry by inventing the video game cartridge, which divorced gaming hardware from its software. And it looks like emergency medicine becoming a global standard after being piloted by the Pittsburgh Freedom House Ambulance Service, which, in the face of medical bias and systemic unemployment, also redefined emergency care as a public right.
Drawing from their lived experiences in underserved groups, these pioneers didn’t just solve problems; they mastered environmental friction. Today, that friction also manifests in algorithms. Championing growth audiences as lead users means ensuring they are critical AI system “stress testers.” When we fail to design for them, we allow AI data, development, and deployment to default to obtuse “averages” that can frustrate or drive away valuable customers. Three recent examples highlight issues and opportunities.
Relying on ‘Data Infallibility’ versus Lived Realities
In this Infallibility Loop bias, a brand’s AI trusts a data source (like a flawed GPS coordinate or an outdated government map) as an absolute truth, even when customers provide contrary evidence. This is a digital echo of historical redlining: a systemic refusal to see humans over faulty data.
The Experience: A Black homeowner in an affluent area is penalized by an AI that confuses her address with a property in a different town, automatically forcing unnecessary flood insurance onto her mortgage and increasing her payments. Despite her providing human-verified deeds and highlighting known GPS errors, the AI blocks her now-incomplete payments and triggers automated credit hits. A resolution came only months later, after the consumer filed state-level servicer complaints.
The Fix: Prioritize Dynamic Qualitative Data Collection. Design should allow real-time, contextual evidence to override static, biased datasets. True brand innovation requires systems to yield to the experts: their customers.
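The override logic this fix describes can be sketched in a few lines of Python. This is purely a hypothetical illustration (the function and field names are invented, not any lender’s actual system): static data decides by default, but human-verified contrary evidence wins.

```python
# Hypothetical sketch of "dynamic qualitative data collection": verified,
# customer-supplied evidence can override a static (possibly stale) dataset.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str     # e.g. "county deed", "surveyor report"
    verified: bool  # confirmed by a human reviewer

def requires_flood_insurance(static_flag: bool, evidence: list[Evidence]) -> bool:
    """The static map decides by default, but any human-verified piece of
    contrary evidence overrides it."""
    if any(e.verified for e in evidence):
        return False  # yield to the customer's verified evidence
    return static_flag

# The static map says "flood zone," but the homeowner supplied a verified deed.
decision = requires_flood_insurance(
    static_flag=True,
    evidence=[Evidence(source="county deed", verified=True)],
)
```

The design point is the priority order: contextual, human-verified evidence outranks the static dataset, rather than the reverse.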
Leveraging ‘Data Intimacy’ while Neglecting Situational Accuracy
This trust paradox occurs when brands use private data but fail to combine it with situational data, making personalization feel like needless surveillance.
The Experience: During January’s record-breaking New York snowstorm, a customer called a national pharmacy’s location in her neighborhood to make sure it was open. The AI-powered interactive voice response (IVR) system recognized her number, asked for her birthdate, and greeted her by name. Yet after performing this exchange, it gave a “default” confirmation that the store was open. Without a car, the customer braved life-threatening conditions on foot, only to find a handwritten note on the door indicating the store had closed because of the storm.
The Fix: Add Good Friction. A term coined by MIT professor Renee Richardson Gosline, “Good Friction” requires that when external context (like a Level 5 storm) conflicts with standard scripts, the system pauses and verifies first.
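As a rough sketch, the pause-and-verify behavior that good friction calls for might look like the following. The function name, storm-level scale, and threshold are all invented for illustration, not taken from any real IVR system:

```python
# Hypothetical sketch of "good friction": when external context (a severe
# storm) conflicts with the scripted answer, stop and verify instead of
# replying with the default.
def answer_is_store_open(scripted_hours_say_open: bool, storm_level: int,
                         storm_threshold: int = 4) -> str:
    if storm_level >= storm_threshold and scripted_hours_say_open:
        # Context contradicts the script: add friction rather than guess.
        return "verify-with-store-before-answering"
    return "open" if scripted_hours_say_open else "closed"
```

In the snowstorm scenario above, a Level 5 storm plus a scripted “open” would trigger the verification branch instead of a confident default answer.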
Prioritizing ‘Recency’ But Erasing Loyalty
Recency bias in algorithms weights the last data point most heavily, potentially resulting in algorithmic erasure.
The Experience: A 20-year elite status customer calls an airline, only to be greeted by the name of his niece (a nonmember relative for whom he recently booked a one-off ticket) and then is erroneously deprioritized in the automated journey as a nonmember. In many “growth audience” and immigrant households, economics are multigenerational and communal, with a single “lead user” facilitating purchases for extended family. This airline system’s “memory” was shallow, seeing only the most recent transaction and ignoring a decades-long relationship because a reservation shared the same contact number.
The Fix: Focus on Holistic Design. AI must be weighted to recognize the arc of the customer journey, ensuring that loyalty isn’t erased by a single data point or the nuances of communal purchasing.
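The contrast between recency-only and holistic weighting can be sketched as follows. The fields and weights are invented for illustration; no airline’s real loyalty logic is shown:

```python
# Hypothetical sketch: a recency-only lookup greets whoever booked last,
# while a holistic score also weights tenure and loyalty status.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    years_of_tenure: int
    elite_member: bool
    booked_most_recently: bool

def recency_only(profiles: list[Profile]) -> str:
    # Only the most recent transaction counts.
    return max(profiles, key=lambda p: p.booked_most_recently).name

def holistic(profiles: list[Profile]) -> str:
    # Tenure and status dominate; recency is a tiebreaker.
    def score(p: Profile) -> int:
        return (10 * p.years_of_tenure
                + (50 if p.elite_member else 0)
                + (1 if p.booked_most_recently else 0))
    return max(profiles, key=score).name

household = [
    Profile("uncle", years_of_tenure=20, elite_member=True, booked_most_recently=False),
    Profile("niece", years_of_tenure=0, elite_member=False, booked_most_recently=True),
]
# recency_only(household) greets the niece; holistic(household) the uncle.
```

The exact weights matter less than the principle: the arc of the relationship should outweigh the last data point.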
To be sure, bad data is a universal problem, but the lack of situational intelligence in our AI systems hits growth audiences, like Black consumers, first and hardest. Because these audiences represent a disproportionate share of future consumption and have the most “cultural common denominators,” their frictions are diagnostics for markets writ large. We aren’t just solving for a niche by championing them as lead users; we are adopting more rigorous, empathetic, expansive, and effective standards that solve real problems for all people.
At hundreds of Burger King restaurants across the U.S., there’s a new invisible worker who’s tracking which ingredients are in stock, analyzing daily sales data, and checking in on whether employees are saying “thank you” and “you’re welcome.” It’s an AI assistant named Patty.
According to Thibault Roux, Burger King’s chief digital officer, the voice-activated chatbot is designed to help employees and managers handle tasks that might usually require pulling out a computer or consulting an instruction guide. Patty began showing up at select locations about a year ago, and is now in a pilot phase at approximately 500 Burger Kings. It’s expected to roll out to the rest of the chain’s U.S. locations by the end of the year.
On a day-to-day basis, Patty has an array of functions, from letting a manager know if a store is low on onions to helping an employee build a new burger. But it has another role that’s raising quite a few eyebrows: analyzing Burger King locations based on friendliness by tracking employees’ use of key phrases like “Welcome to Burger King,” “please,” and “thank you.”
Online, commenters are concerned that this functionality is a slippery slope toward 1984-style employee surveillance. In an interview with Fast Company, though, Roux clarified that Patty is not being used to analyze individual employees’ performance, and is instead imagined as a kind of coach.
“It’s truly meant to be a coaching and operational tool to really help our restaurants manage complexities and stay focused on a great guest experience,” Roux says. “Guests want our service to be more friendly, and that’s ultimately what we’re trying to achieve here.”
Patty, are we running low on Diet Coke?
Technically, Patty is the chatbot version of Burger King’s assistant platform, which collects data from operations (including drive-through conversations, inventory, and sales) and then uses AI to analyze patterns in that data. For now, Patty operates on a customized model from OpenAI, though Roux says the technology is flexible enough that it could integrate with another partner in the future (like Anthropic or Gemini), depending on the company’s needs.
For managers and employees in stores, Roux says, Patty operates similarly to something like Siri. Patty is activated by a small button on the side of an employee’s headset, and they can ask it direct verbal questions related to their specific store (like recent sales figures or inventory updates) as well as more general company information, to which the bot will provide a verbal answer.
“If you’re looking to clean the shake machine, [you can ask Patty] the procedures to clean it,” Roux explains. “Or we have a lot of limited-time offers, and sometimes they can be cumbersome to remember. You can easily tap into Patty and be like, ‘Hey, remind me, does the new maple bourbon barbecue build have crispy jalapeños?’”
Patty can also reach out to employees directly if it notices a pattern of interest. For example, if Patty thinks a specific store is out of lettuce, it might ping a manager to confirm. Once it’s received confirmation, it can mark lettuce as sold out on that location’s app and website, a process that previously would have required human intervention. Roux says franchisees and regional managers can decide how they want Patty to reach employees with information, whether through their headsets or via a text message (though the tech is explicitly programmed to never interrupt a worker during a customer interaction).
Insights from Burger King’s assistant platform also live outside of employees’ headsets. Managers can check information from the tool on an accompanying website or app. For example, Roux says, when a district manager is visiting a new store, they might ask Patty on the app, “What are the top three guest complaints at this location this week?” or “What are their top missing items?”
In an interview with Fast Company writer Jeff Beer earlier this month, Burger King President Tom Curtis said the assistant platform has already led to some significant menu changes. Curtis explained that the AI tracked all the times that team members said “I’m sorry, we don’t have that” and linked them back to a common denominator: apple pie. In January, Burger King brought back its apple pie for the first time since 2020.
‘We’re in the idiocracy version of 1984’
Patty’s more straightforward uses, like helping managers access sales data and check inventory, seem fairly predictable in the context of fast food. Where Burger King is really pushing Patty’s use cases, though, is with its friendliness metric.
In an interview with The Verge on February 26, Roux said Patty would recognize phrases like “Welcome to Burger King,” “please,” and “thank you,” and then give managers access to data on their location’s friendliness performance based on those keywords. Mere hours after that piece went live, a thread about Patty in the subreddit r/technology had already amassed more than 15,000 upvotes and nearly 3,000 comments. Common refrains from users included comparing the technology to the surveillance state in George Orwell’s novel 1984, labeling it “authoritarian” and “dystopian,” and accusing Burger King of employee surveillance.
“This would be criticized as being cartoonishly unrealistic in a sci-fi movie 10 years ago,” one user wrote. Another added, “We’re in the idiocracy version of 1984.”
When asked about this response, Roux says the data from employees’ conversations is anonymized, and that none of these friendliness metrics will be used for grading or assessing individuals. Further, he adds, Patty will not directly instruct employees on what to say or how to say it. Instead, data on friendliness will be shared with managers, who can use it for face-to-face coaching with their teams.
Still, it’s unclear exactly how Patty quantifies friendliness. In a video explanation of the feature, a manager is shown asking the bot, “Is there anything that needs my immediate attention?” to which it responds, “The team’s friendliness scores this morning were the highest this week.”
In an email to Fast Company, a Burger King spokesperson said, “In select pilot locations, we’ve explored using aggregated keywords, including common hospitality phrases, as one of several signals to help managers understand overall service patterns. The tool is not used to score individuals or enforce scripts.” Burger King did not respond to Fast Company’s request for clarification on how friendliness scores are calculated.
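Burger King has not disclosed how the score works, but an aggregated, anonymized keyword signal of the kind the spokesperson describes could, in principle, be as simple as the following sketch. The phrase list, function name, and scoring are hypothetical guesses, not Burger King’s actual implementation:

```python
# Hypothetical sketch of an aggregated "friendliness" signal: the share of
# anonymized transcripts containing at least one hospitality phrase, with
# no per-employee attribution.
HOSPITALITY_PHRASES = ("welcome to burger king", "please", "thank you")

def friendliness_signal(transcripts: list[str]) -> float:
    """Fraction of interactions containing at least one key phrase."""
    if not transcripts:
        return 0.0
    hits = sum(
        any(phrase in t.lower() for phrase in HOSPITALITY_PHRASES)
        for t in transcripts
    )
    return hits / len(transcripts)
```

Even in this toy form, the distinction Roux draws is visible: the metric aggregates over a shift’s worth of interactions rather than scoring any individual speaker.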
So far, Roux says he’s seen growing interest in Patty from franchisees, with several managers making specific requests for future add-ons.
“A lot of our franchisees . . . and regional general managers are very competitive, so they want to know, ‘Hey, how do I compare to other restaurants?’” Roux says. “I think that’s something that we’re going to be rolling out. In fact, we were looking at some of the designs earlier this week with the franchisees. So this is only the beginning.”
Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has now implemented some restrictions, this incident revealed a serious weakness. Without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities.
Last fall, a mother discovered why her teenage daughter’s mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She’s not alone. Aura’s State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls’ emotions, including stress, jealousy, and loneliness (51%, compared with 36% for boys). That’s unacceptable, and we need to do better.
The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models that are trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.
Uneven playing field
In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly raised concerns about bias, fairness, and data privacy. A study published by the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases.
These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women’s perspectives, they replicate existing inequities and introduce new risks. The pattern is clear. AI is failing girls and women.
Pivotal moment
This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn’s latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives.
Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn’t blind optimism. Instead, it offers a perspective that is often missing in today’s AI development.
Creating technology is an exercise of power and holds great responsibility. Since girls are often the most affected by AI’s failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed ReThink, an anti-bullying tool, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard’s Public Interest Tech Lab. Their achievements demonstrate the potential when women lead in tech development.
Smart steps
If we want safer, more responsible AI systems, three steps are essential.
First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results. For instance, one Girls Who Code student utilized the skills she learned to create an app called AIFinTech to help immigrant families manage their personal finances.
Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. This means ensuring gender diversity on AI ethics boards and that government AI committees are representative of the demographics most affected.
Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually.
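The kind of pre-deployment gender-bias test described above can be sketched as a simple disparate-outcome check. The threshold, function names, and toy data below are illustrative only (they loosely echo the liver-disease disparity cited earlier, not any regulator’s actual rule):

```python
# Hypothetical sketch of a disparate-outcome audit: compare a model's
# false-negative rate across groups and flag the gap before deployment.
def false_negative_rate(y_true: list[int], y_pred: list[int]) -> float:
    """Share of actual positives (1) that the model missed (predicted 0)."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for _, p in positives if p == 0) / len(positives)

def disparate_miss_rate(groups: dict[str, tuple[list[int], list[int]]],
                        max_gap: float = 0.05) -> bool:
    """True if the worst-to-best false-negative gap exceeds max_gap."""
    rates = [false_negative_rate(t, p) for t, p in groups.values()]
    return max(rates) - min(rates) > max_gap

# Toy data: the model misses far more actual cases among women than men.
audit = {
    "women": ([1, 1, 1, 1], [0, 0, 1, 1]),  # 50% of cases missed
    "men":   ([1, 1, 1, 1], [1, 1, 1, 0]),  # 25% of cases missed
}
flagged = disparate_miss_rate(audit)
```

A real audit would of course involve far more data and legal nuance; the point is that the check is mechanical enough to be required before a system reaches a high-stakes setting.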
We do not have to accept AI’s flaws by default. We are witnessing AI’s impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems not just for themselves, but for everyone.
What began as a race to build better AI models has escalated into a competition for compute, talent, and control. Foundation models (large-scale systems trained on vast datasets to generate text, images, code, and decisions) now underpin everything from enterprise software and cloud infrastructure to national digital strategies.
The industry’s language around AI has grown more ambitious, and more elastic. “Agentic AI” has leapt from research papers to Davos billboards, while artificial general intelligence, or AGI, now appears routinely in investor decks and earnings calls. Definitions have begun to blur. Some companies quietly lower the bar for what qualifies as “general,” stretching the term to encompass incremental productivity gains.
Yet the economic results, particularly measurable returns on AI investment, remain uneven. According to PwC’s 2026 Global CEO Survey, 56% of 4,454 CEOs across 95 countries reported neither increased revenue nor reduced costs from AI over the past 12 months. Only 12% achieved both. Even so, 51% plan to continue investing, despite declining confidence in revenue growth. The result is a widening gap between engineering reality, commercial storytelling, and public expectation.
Few voices carry as much authorityor have shaped modern AI as directlyas Andrew Ng. The founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, Ng has helped define nearly every major phase of the field, from early deep-learning breakthroughs to the current wave of enterprise deployment. He has authored or coauthored more than 200 papers and previously led the Stanford AI Lab. In 2024, he popularized the term agentic AI, arguing that multistep, tool-using systems capable of executing workflows may deliver more near-term economic value than simply scaling larger models.
In an exclusive conversation, Ng offered Fast Company a reality check. He says true AGI (that is, AI capable of performing the full breadth of human intellectual tasks) remains decades away. The true competitive frontier, meanwhile, lies elsewhere.
This conversation has been edited for length and clarity.
You helped popularize the term agentic AI to describe a spectrum of autonomy in AI systems. How did you come up with it, and how has the concept evolved as multi-agent systems move into enterprise production?
I began using the term almost two and a half years ago, though I didn’t publicly take credit for it at the time. I started using it because I felt the community needed language that shifted the focus toward AI systems capable of taking multiple steps of reasoning and action, not just a single prompt-and-response exchange. More specifically, I felt there would be a spectrum of AI systems, some slightly autonomous or slightly agentic, and others highly agentic, where they take many steps of action and work for a long time.
No one was using the term “agentic” to describe this concept before I began using it. I started introducing it in my newsletter and in talks at conferences and industry events, and it quickly gained traction there. I didn’t expect marketers to run with it the way they did.
When I attended Davos this year, I saw the word plastered on the sides of buildings. Even outside San Francisco, “agentic” now appears on billboards. I did want to intentionally promote the use of the term, but seeing how common it has become, I sometimes wonder if I overdid it.
Enterprise adoption of agentic AI is accelerating, yet many organizations are struggling with integration, governance, and measurable ROI. Why is that?
Two years ago, there was intense hype around AI’s risks and dangers, among other concerns. Last year, businesses began shifting their focus toward real-world implementation. This year, the conversation has moved firmly to ROI. Even though many companies are not yet seeing strong returns, they continue to invest because they understand that AI will eventually deliver value. The discussion has shifted from excitement about what AI might do to a more grounded focus on how it can generate real economic impact.
There’s also an interesting split-screen dynamic emerging. On one hand, many businesses say agentic AI is not yet delivering meaningful ROI, and they’re right. At the same time, teams building agentic workflows are seeing rapid growth and real, valuable implementations. The agentic movement still has very low penetration, but it is compounding quickly.
What are the most significant mistakes enterprises make when deploying agentic systems at scale, and how should leaders rethink their technology and operating models to overcome them?
Many businesses are pursuing bottom-up innovation, which is valuable, but the limitation is that it often leads to point solutions that deliver incremental efficiency gains rather than transformative change. If AI automates just one step in a process, for example, it might save an hour of human work and reduce costs. That’s useful and worth doing, but it doesn’t fundamentally change the business. Much of today’s AI deployment falls into this category: incremental improvement rather than full transformation. To unlock real value, companies need to look beyond optimizing individual tasks and start reimagining entire workflows.
Doing so requires top-down leadership. Often no single person working on one step has the authority to reshape the entire process, which is why executive-level direction becomes essential. Real impact comes from tailoring AI strategy to each organization’s specific context rather than following generic industry playbooks.
There is a growing debate about whether we are in the midst of an AI bubble or simply an early infrastructure build-out comparable to the internet era. How do you distinguish between speculative hype and genuinely durable AI value being created today?
At the application layer, I don’t think we’re in an AI bubble. AI is expanding rapidly across business use cases: how we process legal and technical documents, manage customer success workflows, conduct research, and much more. I would like to see more investment in AI applications and inference infrastructure. Right now, there simply isn’t enough inference capacity, and worries about rate limits persist.
The more interesting question about a potential bubble sits in the model training layer, where infrastructure spending continues to surge. If any risk exists, it’s highest there, because the largest investments are concentrated among a small number of players. When companies build highly specialized hardware that can be reused for inference only with some inefficiency, the risk of overbuilding increases. I don’t think we’re overbuilding right now, but if any part of the AI market faces that possibility, it’s the training layer.
As the industry moves beyond a single-model mindset toward more diverse agentic systems, how should enterprises think about AI architecture? Is there likely to be one dominant framework for building scalable, real-world AI systems, or will organizations need a more flexible approach?
Software can range from five lines of code to massive systems that run for years. Because of that range, there won’t be a one-size-fits-all approach to building or governing these systems. Just as we don’t use a single framework to manage everything from simple scripts to enterprise platforms, we won’t rely on one architecture for agentic AI. Human work itself is incredibly diverse, from basic tasks like spell-checking to analyzing complex financial documents. Since the work varies so much, the AI systems we build will also need to vary.
One principle my teams follow when building agentic AI systems is speed, as continuous improvement is essential. Our typical cycle involves building carefully to avoid major risks, testing with users, gathering feedback, and refining the system until it truly works well. That rapid loop is what helps teams build reliable, high-performing systems faster.
Agentic AI is rapidly increasing systems’ ability to reason and act with limited human intervention. Does the rise of agentic architectures meaningfully accelerate the path toward AGI, or are we still far from true general intelligence?
Most of the public thinks of AGI as AI that is as intelligent as people, and one useful definition is AI that can perform any intellectual task a human can. You and I could learn to fly an airplane with maybe 20 hours of training, learn to drive a truck through a forest, or spend a few years writing a PhD thesis. Most humans can do these things. We’re still very far from AI meeting that definition of AGI.
For alternative definitions that some businesses have put forward (definitions that dramatically lower the bar), you could argue we already achieved AGI. There’s a good chance that under these lower-bar definitions, some businesses will soon try to declare success. But that won’t mean AI has reached human-level intelligence; it will simply mean the definition has been reworked to fit a much lower threshold.
Maybe a year ago, AGI felt 50 years away. Over the past year, perhaps we’ve made a solid 2% of progress, with another 49 years to go. These numbers are metaphorical, so don’t take them too seriously. [Laughs] But we are closer than before, yet still many decades away from an AI that matches human intelligence. If you stick with the original definition, the one aligned with what people genuinely imagine AGI to be, we remain very, very far away.
Is geopolitical fragmentation reshaping global AI strategy for both governments and enterprises?
One of the other big themes I’m seeing is sovereign AI. The world is becoming more fragmented, and there’s a lot of discussion about how nation-states want to make sure they have access to AI without needing to rely on other nations or on any single company that they may not fully trust or be able to rely on in the long term. Governments and regions are thinking carefully about how to build and maintain their own AI capabilities so they can remain competitive and secure.
As AI becomes more central to economic growth and national security, this question of who controls the infrastructure and models becomes much more important. So alongside enterprise adoption, there’s also a growing geopolitical dimension to AI deployment.
In 2026, as enterprises search for real economic returns from AI, what leadership decisions and workforce shifts will ultimately determine whether organizations capture meaningful value from agentic systems?
Leadership matters. When I work with CEOs, I see decisive moments when the C-suite must think strategically about what to invest in and then place those bets thoughtfully, guided by a clear understanding of what the technology can and cannot do, not just the surrounding hype. In periods of transformation, leadership decisions determine whether an organization captures real value from AI or merely experiments at the margins.
I often speak with CEOs before they set a major strategic direction. No one knows exactly where AI will be in a few years, so we are operating in a kind of fog of war. But uncertainty does not mean we don’t know anything. Teams and partners who understand the technology well can narrow that uncertainty significantly and make far more informed decisions.
At the same time, everyone should learn to code, or at least learn to build software with AI. AI has lowered the barrier to creating custom tools. Today my marketers, recruiters, HR professionals, and financial analysts who use AI to write code are already more productive than those who do not. When I hire, I increasingly prefer people who know how to build with AI assistance. I may have been early on this shift, but I now see more startups and established companies moving in the same direction.
Just as it became unthinkable to hire someone who could not search the web or use email, I am already at the point where I hesitate to hire knowledge workers who cannot use AI to build or automate with code.
It’s sometime in the future, and Elon Musk, Jeff Bezos, and Sam Altman have joined forces on a new venture called Energym. The global chain of gyms is designed to harness the energy of the unemployed as they exercise on machines. The generated electricity feeds the AI servers that put them out of a job. Think Planet Fitness meets the Matrix, but without living in a simulation.
Energym’s mission is to feed the AI machines with human sweat, and it’s a great business model. By 2030, almost 80% of people have lost their jobs. If you have no money and no purpose, you may as well use all your free time to work out and feed AI server fans with some kilowatts. “It solves our need for energy and your need for purpose,” Altman says in a promotional video.
Energym, as you probably already know, is not real. But it very well could be. In this era, where so many brands and startups are constantly trying to flip the most inane ideas into the Next Big Thing to get a $50 billion valuation and an IPO, this absurd premise makes total sense.
The mockumentary-style ad for Energym that has been circulating on the internet captures the current AI startup circle jerk better than any other I’ve seen online so far.
https://www.instagram.com/reels/DVLE-QJEf0n
The advertisement was created by Hans Buyse and Jan De Loore. The latter, who wrote the copy for the video as well as edited and produced it, is the cofounder of a one-man AI creative studio in Belgium called Kitchhock. The company has been creating all types of videos since 2011, back when there was no Seedance or Veo. But now, De Loore is using his creative chops and the latest generative video AI tech to make real ads for real companies in Belgium through his AI video studio arm, AiCandy.
Energym is just a satirical ad designed to promote his own business and destroy the very core of those who make the technology that powers his business. (Incidentally, Energym is also the name of a company that makes a very real $2,800 static bicycle designed for exercise and producing electricity, but it’s not related to AiCandy’s fake ad.)
The Energym commercial is obviously tongue in cheek, as are many other videos we have seen in recent months that make fun of our increasing dependency on artificial intelligence and its power. But this one hits particularly hard. For some, it may be the Black Mirror-esque nature of it. (There’s an actual episode of the British TV series that feels like an extended version of the ad.)
Personally, it connects with the WTF-ness that the current AI situation is provoking in me on different levels. The fear of what’s next. The dread of seeing reality destroyed. The disgust for the fat cats who are running this charade with no checks and nobody’s permission. I find it hard to pinpoint what it is. It’s just an absurd exaggeration with no logical basis that hits too close for comfort, and, at the same time, makes me happy.