Remember how much fun it was to shop on the internet a decade ago? If you visited the Goop website, Gwyneth Paltrow might introduce you to her favorite $75 candle or $95 vibrator. If you were looking for a lasagne recipe, you could find a good one on Food52, along with recommendations for a baking dish hand-selected by former New York Times food editor Amanda Hesser. Watch lovers flocked to Hodinkee to see what founder Benjamin Clymer thought of the cool new Longines or Omega timepiece (with a handy link to buy it, in case you really liked it).

At their peak, around five years ago, all of these media companies landed millions of dollars in venture capital and had valuations well into the nine figures. Legacy media ranging from the New Yorker to Vogue took a page from their book, too, linking to products you could buy directly from the pieces published on their websites.

Gwyneth Paltrow and Kerry Washington speak during a live recording of the Goop podcast, September 19, 2019. [Photo: Stefanie Keenan/Getty Images for Goop]

But over the last two years, this generation of content-to-commerce pioneers has fizzled out. Goop has gone through multiple rounds of layoffs and its website is a shell of what it used to be. In 2024, Hodinkee was sold at a fraction of its former valuation. And last month, Food52 declared bankruptcy and is headed toward a fire sale. It’s worth asking what happened to these startups, and what comes next, as AI transforms the way we shop online.

The rise and fall of Food52

The rise and fall of Food52 offers insight into what went wrong with the content-to-commerce model. Founders Amanda Hesser and Merrill Stubbs had come from traditional food media. They saw a gap between legacy magazines like Bon Appétit and Food & Wine, which prioritized the perspectives of elite chefs, and amateur food blogs, which were flooding the internet. With Food52, they invited home cooks to submit recipes, which their team would test. The best ones would be featured on the site, alongside beautiful photography. The concept resonated, and site traffic grew quickly.

Initially, the company generated revenue from advertising and brand partnerships. But in 2013, the site launched a shop that sold kitchenware and artisanal ingredients that Food52 staffers recommended. This approach made sense, says Dan Frommer, founder of The New Consumer. One of the biggest problems with shopping online is the overwhelming volume of products available. First-generation content-to-commerce startups offered expertise and a point of view, which gave them the authority to recommend products. “They were offering curation, which was a valuable service at the time,” he says.

No-Bake Granola Bars from the Food52 Vegan cookbook by Gena Hamshaw, ca. 2015. [Photo: Melissa Renwick/Toronto Star/Getty Images]

Goop and Hodinkee followed similar trajectories. They began as blogs centered on a particular perspective and aspirational lifestyle, driven by their well-known founders. Over time, they built up enough trust with their readers to sell them products. (Food52 declined to comment for this story. We reached out to Goop and Hodinkee, but neither got back to us by the time of publication.)

In 2019 and 2020, investors still believed this might be the future of retail. They pumped millions into these startups to grow their audiences, launch new revenue streams like events, and develop their own product lines.
Food52, for instance, was valued at $300 million in 2021, after an $80 million investment from TCG (which also invested in Hodinkee). But this funding may have inadvertently led to their decline. With the influx of cash, these startups had a mandate to scale, but they all struggled to grow sustainably. By the start of this year, Food52 had declared bankruptcy. America’s Test Kitchen has reportedly agreed to buy it for $6.5 million, of which $3.42 million is Chapter 11 financing.

Frommer argues that there were many idiosyncratic reasons why each of these companies failed. Food52, for instance, appeared to have bitten off more than it could chew. In 2019, it launched its own in-house kitchenware line; it also acquired two entirely new companies, the Danish cookware brand Dansk and the lighting brand Schoolhouse. “There was a lot wrong with the business,” Frommer says. “There were failures in strategy and execution.” But taking a step back, it’s clear that there were also broader issues with the content-to-commerce model that affected all of these businesses.

What didn’t work, and what did

These early content-to-commerce platforms accurately identified that consumers were overwhelmed by the avalanche of products available on the internet, and they also knew that taste could be monetized. Still, there were flaws with their model. For one thing, consumers often didn’t come to these websites with the intent to shop. They were there to take in the content: the recipes, listicles of clean beauty products, or a conversation with Ed Sheeran about his favorite watches. Only a small proportion of consumers would feel compelled to buy a product. Often, when a publication’s famous founder recommended a product, it would sell better; but over time, as the sites grew to have teams of writers, they no longer conveyed the distinct sensibilities of Paltrow, Hesser, or Clymer.

Then there were the economics. It is hard to make money by marketing other brands’ products. These sites generated small amounts of revenue by selling products at a markup on their online stores or by earning a commission for driving the customer to another brand’s website. All of these companies realized that a more profitable route was to make their own products, which they all did, from Goop’s beauty and fashion lines to Hodinkee’s watch straps and limited-edition collaborations with brands like Longines. But this meant building out teams with expertise in designing and sourcing products, which was also a major investment.

Finally, there was all the competition. Other media sites quickly realized they, too, could create a new revenue stream by linking to products. And some began doing it much more effectively. In 2016, for instance, the New York Times acquired Wirecutter for $30 million. Unlike Food52, Goop, and Hodinkee, Wirecutter was designed to help consumers at the moment when they were ready to buy a product. New York Magazine built its own product recommendation site called The Strategist, which has a similar model.

“Content that really drives commerce is not just ambient recommendations around fun articles,” says Frommer. “It’s really purpose-driven content designed to help the consumer solve a problem. The majority of traffic to Wirecutter and The Strategist happens at the moment of need: they promote their humidifier recommendations when the winter air is dry.”

The content-to-commerce model hasn’t disappeared; it has shape-shifted. There are now massive players like Wirecutter that dominate the landscape.
And at the other end of the spectrum, there are armies of individual content creators who recommend products to their followers on Substack, Instagram, or TikTok. It’s just the middle of the market that has collapsed.

But as with everything on the internet, change is constant. And everything we know about how to shop online is about to get transformed by AI, which is already where many people begin their shopping journey. In many ways, AI agents are the ultimate blending of content and commerce: They offer product recommendations, personalized to the user and presented within a conversation. But what’s missing from AI is a unique point of view or sensibility, which is what the early content-to-commerce players excelled at. In an AI-driven shopping future, the winners won’t be the smartest algorithms; they’ll be the ones that blend data with something that feels like taste.
For decades, people with disabilities have relied on service dogs to help them perform daily tasks like opening doors, turning on lights, or alerting caregivers to emergencies. By some estimates, there are 500,000 service dogs in the U.S., but little attention has been paid to the fact that these dogs have been trained to interact with interfaces that are made for humans. A team of researchers from the United Kingdom wants to change that by designing accessible products for, and with, dogs.

The Open University’s Animal-Computer Interaction Laboratory was founded in 2011 to help promote the art and science of designing animal-centered systems. Led by Clara Mancini, a professor of animal-computer interaction, the lab studies how animals interact with technology and develops interactive systems designed to improve their wellbeing and support their relationships with humans.

[Video: The Open University]

The team’s first commercially available product is a specially designed button that service dogs can press to turn on corresponding appliances at home, like a lamp, a kettle, or a fan. The Dogosophy Button took more than ten years to develop and was tested with about 20 dogs from the UK charity Dogs for Good. It gives dogs more control over certain aspects of their home, which can make training them easier and further strengthen the bond between a human and their dog. It has also taught the team a few lessons about how to design for humans. “I am now a better human designer,” says Luisa Ruge, an industrial designer who worked with Mancini and led the design of the button. For now, the Dogosophy Button is only available for purchase in the UK (for about $130).

[Photo: The Open University]

The challenges of designing for animals

Anyone who’s ever designed a product for a human client knows the process relies on a perfect storm of variables like gender, age, background, and personal preferences. But these designers also have one advantage they likely take for granted: they can ask their client what they think at every step of the way. Getting feedback from a dog is much harder and requires an understanding of animal behavior. “There’s a lot of iteration,” says Ruge, “and a huge ethical and reflective component because I can’t be a dog, I don’t [feel] what they feel.”

Ruge began her career as an industrial designer, but as she moved up the corporate ladder, she realized she was fascinated with animals. Her interest led her to train as a service dog trainer at Bergin College of Canine Studies in California. “One of the ways to bond is we had to be tied to our dog with a carabiner and leash for 8 days, 24/7,” she recalls. Later, she attended a conference on human behavior change for animal welfare, where she met Mancini and became interested in her lab. Ruge immediately enrolled in a PhD at The Open University, and spent the next three years writing a thesis on designing for the animal user experience and proving out her dog-centered methodology.

Ruge followed the five human factors model, a method that helps designers understand the end user’s behavior by breaking down the UX into five factors. The typical list includes physical, cognitive, social, cultural, and emotional factors, but Ruge added a sixth, sensory, and then later a seventh: consent. To understand the exact characteristics and abilities she had to design for, she focused on Labrador Retrievers and Golden Retrievers, as these are the most common breeds for service dogs.
Her research led to various correlations that informed the design of the button. For example: since both breeds have long tails, the button should not feature sensors that might accidentally be activated by them. Since both breeds are predisposed to hip dysplasia and joint problems, the button should also not be designed in a way that requires jumping to activate it. And since all dogs see the world in hues of yellow, blue, and brown, the button should be made in one of these colors so it is easy to perceive.

[Video: The Open University]

When Ruge first got involved, the prototype Mancini had developed was square in shape, and looked a bit like the standard metallic button that wheelchair users can press to open a door. Now, after about 20 iterations and five prototypes, the button is round, convex, and blue. It is textured to prevent a dog’s wet snout from sliding on it, and its push depth is such that a more timid dog shouldn’t have to press hard to activate it.

Ruge had to test some of her designs the hard way. The first prototype she ever made took days to develop, and the dogs destroyed it “in two seconds,” she recalls with a laugh. But dogs don’t know that a prototype should be handled with care. To them, a work-in-progress product looks no different from a finished product.

Animal design as a discipline

Designing for dogs taught Ruge humility about her assumptions. “It lets you know you’re never 100% right,” she says, adding that the only way to confirm her theories was through extensive testing and observation. It also made her a better designer for humans, because she learned to better spot her biases and assumptions. “Sometimes, I’m assuming you feel a handle like I do, and you don’t,” she says.

In the end, though, animal design is where Ruge’s passion lies. Since earning her PhD, she has moved back to her native Colombia and started a design consultancy called Ph-auna (pronounced “fauna”), where she focuses on animal-centered innovation. She hosts a podcast called Pomodogo, guiding humans to better connect with their dogs, and is now working on an app that gamifies dog training and inspires humans to be better caretakers. “There’s an immense opportunity for animal design to be its own design discipline,” she says.

Meanwhile, in the UK, the Dogosophy Button is available to individual customers willing to buy it, but the team is hoping to broaden its scope beyond the home. Mancini, who spearheaded the button project, says they first installed an earlier version of the button to operate the motorized door of a restaurant’s accessible toilet, but the restaurant ended up shuttering. Then, they tried installing it at a local shopping mall, but the plan fell through due to budget constraints. Still, she plans to continue developing new versions and adapt them for the characteristics of other species too. “It is my interest to try and install the buttons in public buildings,” she says. “I would love for whole cities to be more accessible for dogs and other urban animals.”
On my phone, there are already videos of the next moon landing. In one, an astronaut springs off the rung of a ladder, strung out from the lander, before slowly plopping to the surface. He is, alas, still getting accustomed to the weaker gravity. In another, the crew collects a sample, a classic lunar expedition activity, while another person lazily minds the rover. A third video shows an astronaut affixing the American flag to the ground, because this act of patriotism is even better the second time around. The blue oceans of Earth are visible in the background, and a radio calls out: “Artemis crew is on the surface.”

America is going back to the moon, and NASA is in the final weeks of preparing for the Artemis II mission, which will have astronauts conduct a lunar flyby for the first time in decades. If all goes well, during the next endeavor, Artemis III, they’ll finally land on the lunar surface, marking an extraordinary, historic, and in some sense nostalgic accomplishment.

The aforementioned videos are not advance copies, or some vision of the future, though. They were generated with OpenAI’s video generation model and are extremely fake. Still, this kind of content is a reminder that the upcoming Artemis missions promise a major epistemic test for the deniers of the original moon landing. This is a small but passionate and enduring community that doubts the Apollo moon landings for a host of reasons, including that (they allege) the government lied or (they believe) it is simply physically impossible for humans to go to the moon.

Now, when NASA returns to the lunar surface, these people will be confronted with far more evidence than the last time around. The space agency’s operation will be broadcast live, using camera technology and social media platforms that just weren’t around in the 1960s. But there’s also a bigger challenge before us. NASA will be launching its moon return effort in a period of major distrust in American scientific and government institutions, and, amid the proliferation of generative AI, declining confidence in the veracity of digital content. Most observers will be able to sort the real NASA imagery from anything fake that might show up. Still, there tends to be a small number of people who doubt these kinds of milestones, especially when a U.S. federal agency is involved.

Adding AI to the conspiracy theory cocktail

“When the moon landing first came in, AI wasn’t a thing. The sophistication of [the landing] didn’t necessarily make us question it,” says David Jolley, a professor at the University of Nottingham who studies conspiracy theories. “But now, with the power of AI and the power of images that you can create, it certainly offers that different reality if you want to interpret it in that way. It’s the trust in those sources that we need to kind of really create.”

“Of course, if you haven’t got trust in our gatekeepers and you don’t trust scientists, well, suddenly you are going to lean into: well, this, is this real? Is this just AI?” he continues.

The upcoming Artemis missions aren’t yet a major topic among lunar landing deniers. But there are hints they will attract more attention from conspiracy theorists. During the last Artemis mission, which was unmanned, Reuters had to push back on online posts suggesting the expedition proved that Apollo 11 didn’t actually happen. (Skeptics suggested the longer Artemis I mission timeline, a product of a change in route, actually cast doubt on the original Apollo timeline.)
Other online skeptics have already suggested that, with Artemis, NASA is yet again faking a space endeavor. Some people in internet conspiracy communities suggest the upcoming moon missions will be entirely CGI (computer-generated imagery). Generative AI stands to introduce even more confusion, says Ben Colman, the CEO of Reality Defender, a deepfake detection platform. Generating a believable image of a (fake) moon landing is now something any consumer can do. “Any astute physicist will be able to tell you if these videos get star placement or physics wrong, as they are likely to do,” he says, “but even that is getting better with each model iteration.”

Conspiracy theories are sticky

There are, of course, many reasons why people say they deny the reality of the first lunar expeditions. There are canonical, misinterpreted references, like the Van Allen belts, a zone of energetic charged particles that surrounds the planet (critics say the belts are too radioactive for manned vehicles to traverse), and the suspicious flag-in-the-wind (there’s no wind on the moon!). All of these points, and the many others deniers bring up, have been thoroughly debunked. Still, this small community of self-appointed detectives is insistent. Even decades after the missions ended, people are still combing through NASA’s videos and images, mining for signs of alterations or other surreptitious editing. To them, an unexpected shimmer reveals a film operation just beyond the view of the camera. A movement that might not look right is a hint that the world has been duped. Open source intelligence (OSINT) becomes the rabbit hole.

“Some allege we didn’t go to the moon, perhaps because we were trying to trick the Soviets into thinking that we had superior technology than they did,” explains Joseph Uscinski, a political scientist at the University of Miami who also studies conspiratorial beliefs. “Some people think we did go but it wasn’t televised. And that footage that we saw was made later in a sound studio. Some people think Stanley Kubrick was in charge of filming the faked moon landing footage.”

For its part, NASA is preparing to point to evidence, should any deepfake allegations come its way. Agency spokesperson Lauren Low tells Fast Company: “We expect AI experts will be looking closely at all our images and will be able to verify they are real images taken by real astronauts as part of the Artemis II test flight around the Moon.” Moreover, Low added, there will be many ways for people to watch the lunar flyby themselves, including live broadcasts, two 24/7 YouTube streams, a news conference, and views from Orion cameras. In other words, the reality of Artemis will be very hard to deny.

Research suggests that conspiracy theories are entertaining, and even serve people’s core psychological needs, like a desire to understand the world or a way of dealing with uncertainty. Finding other people, including on social media, pushing these theories can help normalize them, and make someone feel like they’re part of a broader community. Some people simply don’t trust institutions, and evidence that something did, indeed, happen only raises further questions, and suspicions that it didn’t. To an extent, politics matters, too; people outside the United States are more likely to deny the moon landing, polls show. In the end, says Uscinski, we should prepare for people who are prone to conspiratorial thinking, or prone to mistrusting institutions, to take a skeptical view of any big news event.
This may happen again when the Artemis missions finally launch. “The good news is that belief in conspiracy theories isn’t likely to get worse,” he explains. “The bad news is that this conspiratorial thinking has always been this pervasive.”

“People are very good at waving away evidence that tells them things they don’t want to hear, and they’re very good at believing things, either without evidence or with really shitty evidence, when it tells them what they do want to believe about the world,” Uscinski adds. “You don’t need AI or sophisticated technology to provide a justification.”
It looks like OpenAI is taking the “new year, new you” approach when it comes to its business strategy. To kick off 2026, the company announced it would soon introduce ads into ChatGPT, which was a bit of a surprise, considering CEO Sam Altman had previously said ads would be a last resort as a business model.

It’s hard to say how final a resort this is without looking at OpenAI’s balance sheet, but we do know the company is feeling the heat. After Google released Gemini 3 in the fall, which scored well on leaderboards, gained market share, and drew plaudits from the AI community, Altman declared a code red at OpenAI to ensure that ChatGPT is best in class. And as impressive as OpenAI’s fundraising has been, Google is a $4 trillion company. OpenAI needs all the resources it can get.

So ChatGPT users are getting ads. It’s a risky move, since there are strong indicators that consumers are wary of ads in AI answers. A report from Attest, a consumer research company, found that 41% of consumers trust AI search results more than paid search results, suggesting that AI users like that they don’t have to worry about ads in AI summaries, even if their accuracy may sometimes be questionable. Hallucinating is apparently less of an offense than selling out.

However, ads in AI experiences look increasingly like an inevitability. Consumers don’t love ads on TV or streaming either, but they’re endemic to the media ecosystem. Google is already serving ads in AI Overviews and AI Mode, and it may someday bring them to Gemini, too, although company executives deny there are any plans to do this. Regardless of what it does with the Gemini chatbot, Google appears determined to weave advertising into many of its AI experiences, which is hardly a shock.

Big Tech, bigger bite

For the media, this isn’t exactly thrilling news. OpenAI entering the ad business means yet another Big Tech player is competing with them for digital ad dollars, alongside platforms like Google, Meta, and Amazon. And there’s less traffic to go around, since those same AI chatbots summarize content, often negating the need to click through. There’s a reason web traffic to publishers dropped by a third last year. However, advertising tied to AI answers might end up being exactly the leverage publishers need to make their case for compensation.
When a publisher’s content is used to create the answer to a query, the line back to revenue is always somewhat indirect; after all, the user likely subscribed to the chatbot well before they ever typed their question, and most AI services have a free tier anyway. But if your content fuels an answer, and that answer directly leads to revenue for the AI company through either impressions or transactions, the chain from content to dollars is clearer.

It’s also more trackable than it’s ever been. Whereas the world of SEO inferred a lot from search terms and clicks, queries in AI search are more specific, and the tools much better at pinning down intent. Understanding which answers, and what content within them, best facilitate transactions is a very knowable thing.

OpenAI did its best to quash fears about commercialization by stating its first principles of advertising, one of them being that ads will not influence the substance of the answers in ChatGPT. The idea is that if, say, Coca-Cola pays for an ad campaign, then any answer will not be any more or less likely to mention Coke than if that campaign didn’t exist. But I wonder if the answer might be more or less inclined to steer the user toward buying a soft drink in general, with the ad providing a little card for you to tap on that does just that.

Optimize and persuade

Even if OpenAI insists that won’t happen, it can’t speak for all the brands and content providers that fuel the answer. How successful such efforts might be is extremely unclear at this point, but it’s a safe bet they’re going to try. The nascent field of GEO (generative engine optimization) seems destined to give rise to a new dimension: not just how content affects AI answers, but how it convinces users to take action. You’re not just optimizing for presence, but also for persuadability.

All of this is theoretical, of course, and perhaps Google, OpenAI, and everyone else will succeed in keeping the ad-revenue pie all to themselves. But as revenue from AI answers increases, every marketer on the planet will want to know which answers are the most lucrative, and what content they’re made from. If publishers can prove they’re providing the secret sauce, they’ll have more leverage in demanding their slice.

Proving that value is not trivial. Successful bargaining over this “content-to-click” effect starts with measuring it, and that’s going to take work. Understanding how content appears in and affects AI answers is brand-new science, but it is science: experimentation, iteration, and leveraging different kinds of tools, like snippets, bot blocking, and dedicated GEO platforms, are what’s needed (a rough sketch of what that measurement could look like follows below).

Over the past 25 years, Silicon Valley slowly built tremendous platforms that ended up consuming the vast majority of advertising revenue, locking out the media in the process. And let’s be honest: There’s a good chance artificial intelligence will end up continuing that trend. But the irony of monetizing AI answers with advertising is that it may end up creating the best opportunity for publishers to define exactly how much value they bring to them.
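What might that measurement look like in practice? Here is a minimal sketch, not a description of any existing GEO product: it assumes a hypothetical ask_ai(query) helper (a stand-in for whichever answer engine you are measuring) that returns an answer plus the URLs it cites, then runs a fixed panel of purchase-intent queries and tracks how often a publisher’s domain shows up in the citations.

```python
from collections import Counter
from urllib.parse import urlparse

def ask_ai(query: str) -> tuple[str, list[str]]:
    """Hypothetical stand-in for an answer-engine client.
    Returns a canned answer so the sketch runs end to end; swap in
    real API calls for whichever engine you want to measure."""
    return ("canned answer text", [
        "https://www.example-publisher.com/reviews/best-humidifiers",
        "https://example-retailer.com/product/123",
    ])

def citation_share(queries: list[str], publisher_domain: str) -> dict:
    """Rough proxy for 'content-to-click' value: how often a publisher's
    pages are cited in AI answers across a fixed panel of queries."""
    cited = Counter()
    hits = 0
    for q in queries:
        _, urls = ask_ai(q)
        domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
        cited.update(domains)
        if publisher_domain in domains:
            hits += 1
    return {
        "queries": len(queries),
        "answers_citing_publisher": hits,
        "citation_share": hits / len(queries) if queries else 0.0,
        "top_cited_domains": cited.most_common(5),
    }

if __name__ == "__main__":
    panel = [
        "best humidifier for dry winter air",
        "which espresso machine should I buy under $300",
        "most durable carry-on luggage",
    ]
    print(citation_share(panel, "example-publisher.com"))
```

The single number matters less than the trend: re-run the panel weekly, break it out by query category and by which pages get cited, and you start to see where your content is actually doing the selling.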
{"blockType":"mv-promo-block","data":{"imageDesktopUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/media-copilot.png","imageMobileUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/fe289316-bc4f-44ef-96bf-148b3d8578c1_1440x1440.png","eyebrow":"","headline":"\u003Cstrong\u003ESubscribe to The Media Copilot\u003C\/strong\u003E","dek":"Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for The Media Copilot. To learn more visit \u003Ca href=\u0022https:\/\/mediacopilot.substack.com\/\u0022\u003Emediacopilot.substack.com\u003C\/a\u003E","subhed":"","description":"","ctaText":"SIGN UP","ctaUrl":"https:\/\/mediacopilot.substack.com\/","theme":{"bg":"#f5f5f5","text":"#000000","eyebrow":"#9aa2aa","subhed":"#ffffff","buttonBg":"#000000","buttonHoverBg":"#3b3f46","buttonText":"#ffffff"},"imageDesktopId":91453847,"imageMobileId":91453848,"shareable":false,"slug":""}}
Henry Ford famously noted that whether you think you can do it or not, you are usually right. His point was that beliefs, especially about our talents, performance, and even luck, can be self-fulfilling. Irrespective of whether they are right or wrong, they can become true by influencing objective outcomes.

Ford was hardly alone. Along the same lines, decades of psychological research show that beliefs matter, often profoundly so. Perhaps the most influential work comes from Albert Bandura’s theory of self-efficacy, defined as people’s beliefs in their capability to organize and execute the actions required to manage prospective situations. Across hundreds of studies, higher self-efficacy has been linked to greater motivation, resilience, learning, and performance. People who believe they can improve are more likely to set challenging goals, invest effort, persist in the face of difficulty, and recover from failure.

Closely related ideas emerged from attribution theory and expectancy-value models, which showed that individuals who attribute success to effort rather than fixed ability, and who believe their actions will make a difference, tend to perform better in school and at work. The most popular variant of these, at least in the world of HR and management, has been Carol Dweck’s research on growth versus fixed mindsets, which popularized the idea that believing abilities can be developed encourages learning-oriented behavior, greater perseverance, and better responses to feedback.

Taken together, this body of research persuaded a large number of people of the importance of mindset, implying a counterintuitive causal chain whereby beliefs shape performance, rather than the other way around. Specifically, the story goes, irrespective of how rational our thoughts are, they will likely shape attention, effort, emotional reactions, and behavior, which in turn impact tangible results and outcomes.

A mental software update

Predictably, much of the self-help industry has run with this idea at full speed. Bookstores, podcasts, LinkedIn feeds, and corporate off-sites are now saturated with advice urging us to reframe, manifest, believe harder, and upgrade our mindset. According to this logic, success is largely a mental software update away. Change your thoughts, and the universe will follow!

This is where things start to get a little silly. Mindset does not suspend physics, probability, or competence. It still matters whether you can actually cross the road without getting hit by a bus.
And even if you firmly believe you are Serena Williams on the tennis court, lacking the ability to play tennis means you may be the only person on earth who holds that belief. Confidence does not magically produce a serve, a backhand, or a Grand Slam title.

Motivational cosplay

At its most extreme, mindset culture drifts into motivational cosplay: people repeating affirmations in the mirror while ignoring the inconvenient details of skill, preparation, competition, and luck. Worse, it can quietly turn failure into a moral flaw. If you didn’t succeed, you must not have believed enough, visualized hard enough, or optimized your morning routine sufficiently. Structural barriers, unequal opportunities, and plain bad luck are written out of the story.

The irony is that the science never claimed mindset was omnipotent. Beliefs help when they are tethered to reality. They amplify effort, persistence, and learning, but they cannot substitute for ability, practice, or opportunity. Positive thinking works best when paired with negative feedback, deliberate practice, and a sober assessment of constraints. In short, mindset matters (a bit), but not in the magical way the self-help industry sells it. Thinking you can do something helps you try. It does not guarantee you will succeed. And no amount of positive thinking will turn wishful confidence into world-class talent.

Modest effects

Indeed, a closer look at the scientific evidence indicates that popular interpretations of the power of mindset and positive thinking have gone too far. First, the effects of mindset are actually not that large. Meta-analyses show that growth mindset interventions produce small to moderate effects, particularly when compared with structural factors such as prior ability, socioeconomic status, quality of instruction, or access to opportunity. Put differently, believing you can improve is helpful, but it is no substitute for actually improving. Between thinking you are as good as Lionel Messi and being half as good as him, the latter is unequivocally preferable, unless your goal is to impress people who don’t understand soccer, in which case you can hope to deceive or fool them. Confidence without competence may feel empowering, but it rarely wins matches, promotions, or championships. (It does make for popular sitcom characters like Michael Scott or David Brent, though.)

Second, beliefs do not operate in a vacuum. Confidence helps most when it is paired with real skills, feedback, and environments that reward effort. The problem with overvaluing confidence or self-belief is that, roughly half the time, it is correlated with actual ability. When people are genuinely competent, their confidence is often earned, which is why Muhammad Ali could plausibly claim that it isn’t bragging if you can back it up. In those cases, belief is less a psychological trick than a reasonably accurate signal of underlying skill.

The trouble starts when confidence drifts away from competence. Underconfidence, while uncomfortable, can be oddly functional: It pushes people to prepare more, seek feedback, and close gaps they suspect (or know) they have. Accurate confidence, by contrast, reflects self-awareness: a realistic calibration between what one can do and what the situation demands. Delusional confidence is different altogether. It may help people impress, persuade, or temporarily fool others, but this is usually a short-lived strategy unless everyone else is equally deluded.
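That distinction between earned and delusional confidence is, at bottom, a question of calibration, and calibration is easy to make concrete. As a purely illustrative sketch (the numbers below are hypothetical, not drawn from any study): given self-rated confidence between 0 and 1 and actual outcomes of 0 or 1 across a set of tasks, the gap between average confidence and average success rate is a crude overconfidence score.

```python
def calibration_gap(confidence: list[float], outcomes: list[int]) -> float:
    """Mean self-rated confidence minus mean actual success rate.
    > 0 suggests overconfidence, < 0 underconfidence, ~0 decent calibration."""
    if not confidence or len(confidence) != len(outcomes):
        raise ValueError("need matching, non-empty confidence and outcome lists")
    return sum(confidence) / len(confidence) - sum(outcomes) / len(outcomes)

# Hypothetical example: someone who feels roughly 80% sure of succeeding
# but actually succeeds half the time is overconfident by about 0.3.
print(calibration_gap([0.8, 0.9, 0.7, 0.8], [1, 0, 0, 1]))  # ~0.3
```

Researchers use finer-grained measures than this, but the intuition is the same: the question is not how confident you feel, but how well that feeling tracks results.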
When confidence consistently outruns competence, the cost is eventually paid, either by the individual when reality catches up or by everyone else who has to deal with the consequences.

Third, an excessive focus on mindset risks slipping into a form of psychological moralizing, where success is credited to the right attitude and failure is blamed on the individual’s thinking rather than on constraints, inequality, or bad luck. This becomes especially problematic when people are encouraged to believe not only that they live in a meritocracy, but also that their outcomes hinge primarily on how strongly they believe in themselves. In such a world, effort and optimism are not just virtues but moral obligations, and when success does not materialize, the only plausible culprit left is the self.

The result is a quiet but corrosive form of self-blame. If belief is supposed to be the main lever of success, then failing to succeed feels like a personal deficiency of character, motivation, or mental toughness. Structural barriers fade into the background, while disappointment is internalized as guilt. Ironically, this narrative can be demotivating, not empowering.

A better way

A more helpful alternative would be to focus less on upgrading people’s beliefs and more on developing their actual skills and competence. This remains valuable even when individuals start out with low confidence in their abilities, which may simply reflect an accurate awareness of the gap between their current and ideal selves. Closing that gap through practice, feedback, and learning does more for long-term performance and well-being than insisting people feel confident before they have much to be confident about.

Needless to say, there is also evidence that positive beliefs can backfire when they become detached from reality. Inflated self-beliefs are linked to poor calibration, overconfidence, and reckless decision-making. In organizational settings, confidence without competence can be costly, especially when it crowds out learning, dissent, or accurate self-assessment.

In some cases, acknowledging that you are simply not very good at something is not an act of pessimism but of strategic realism. Persisting in a poorly matched role or career path on the basis of false hope can be actively harmful. Psychologists refer to this as false positive self-beliefs or miscalibrated optimism (which appear to be the norm), where individuals overestimate their likelihood of success and continue investing in goals that are unlikely to pay off. By contrast, recognizing limits early allows people to redirect their effort toward domains where their abilities, interests, and opportunities are better aligned.

There is also a social cost to miscalibration. If others realize you are less capable than you believe yourself to be, the reputational penalty is typically higher than if you had reached that conclusion first. Self-awareness signals judgment and maturity; obliviousness signals risk. In practice, what matters most is not how good you think you are, but how good others think you are, because it is other people who allocate opportunities, responsibilities, promotions, and trust.

Ironically, some of the best performers are those who initially underestimate themselves. Mild underconfidence can motivate preparation, learning, and skill acquisition, leading to steady improvement and positive surprises. Conversely, people who overestimate their abilities often stagnate, mistaking confidence for progress and reassurance for feedback.
Over time, belief divorced from performance does not just fail to help; it actively prevents development.

The science, then, supports a more nuanced conclusion. Mindset matters, but it is not magic. Beliefs are best understood as enablers rather than engines of success. They help people make use of their abilities and opportunities, but they cannot substitute for them.

And yet, we tend to praise self-belief far more enthusiastically than self-knowledge. Confidence is celebrated as a virtue; realism is often mistaken for negativity. But from the perspective of everyone else, self-knowledge is usually the more valuable trait. Most of us have worked with at least one person who is spectacularly pleased with themselves, modestly competent at best, and blissfully unaware of the gap between the two. Their confidence may be admirable in the abstract, but it is considerably less charming when they are making decisions, leading teams, or presenting their vision.

If we evaluated the world from other people’s point of view, we would quickly realize that it is not in anyone’s interest for the unjustifiably confident to succeed because of those very flaws. When people advance on the strength of misplaced self-belief rather than demonstrated competence, the costs are externalized: Colleagues pick up the slack, organizations absorb the damage, and reality eventually intervenes, often expensively.

A healthier mindset, then, is not blind optimism but informed confidence: knowing what you can do, what you cannot yet do, and where your effort will actually pay off. In short, self-belief may feel good, but self-knowledge gets things done. Reality rewards competence, not confidence. The only role of belief is to signal whether you know the difference.