2026-02-27 11:53:00| Fast Company

It's the last week of Black History Month (BHM) and it's clear Americans are over performative values. Trite BHM-inspired merchandise sits on retailer shelves untouched while media is abuzz covering the artistry, activism, and symbolism of Bad Bunny's Super Bowl halftime show. The signal is clear: consumers are looking to brands for real solutions to real problems, not products that commodify culture.

Most companies build everything from advertising to AI for the "average user," but in doing so, they react to rather than lead markets. Strategic leaders look to growth audiences (underserved groups who are the fastest-growing demographics) as lead users. They are the "canaries in the coal mine" because they navigate the highest levels of systemic friction, making them the first to experience "average" design failures.

What does championing these lead users look like at a communications, product, or systems level? It looks like Elijah McCoy automating engine lubrication, an innovation bred from the friction between his engineering degree and the menial labor he was forced to perform, thus creating the "real McCoy" quality standard. It looks like Jerry Lawson changing the economics of the gaming industry by inventing the video game cartridge that divorced its hardware from its software. And it looks like emergency medicine becoming a global standard after being piloted by the Pittsburgh Freedom House Ambulance Service, which, in the face of medical bias and systemic unemployment, also redefined emergency care as a public right.

Drawing from their lived experiences in underserved groups, these pioneers didn't just solve problems; they mastered environmental friction. Today, that friction also manifests in algorithms. Championing growth audiences as lead users means ensuring they are critical AI system "stress testers." When we fail to design for them, we allow AI data, development, and deployment to default to obtuse "averages" that can frustrate or drive away valuable customers.
Three recent examples highlight issues and opportunities.

Relying on 'Data Infallibility' versus Lived Realities

In this Infallibility Loop bias, a brand's AI trusts a data source (like a flawed GPS coordinate or outdated government map) as an absolute truth, even when customers provide contrary evidence. This is a digital echo of historical redlining: a systemic refusal to see humans over faulty data.

The Experience: A Black homeowner in an affluent area is penalized by an AI that confuses her address with a property in a different town, automatically forcing unnecessary flood insurance onto her mortgage and increasing the payments. Despite providing human-verified deeds and highlighting known GPS errors, the AI blocks her incomplete payments and triggers automated credit hits. A resolution only came months later, after the consumer filed state-level servicer complaints.

The Fix: Prioritize Dynamic Qualitative Data Collection. Design should allow real-time, contextual evidence to override static, biased datasets. True brand innovation requires systems to yield to the experts: their customers.

Leveraging 'Data Intimacy' while Neglecting Situational Accuracy

This trust paradox occurs when brands use private data but fail to combine it with situational data, making personalization feel like needless surveillance.

The Experience: During January's recent record-breaking New York snowstorm, a customer called a national pharmacy's location in her neighborhood to make sure it was open. The AI-powered interactive voice response (IVR) recognized her number, asked for her birthdate, and greeted her by name. Yet, after performing this exchange, it provided a "default" confirmation that the store was open when asked. Without a car, the customer braved life-threatening conditions on foot only to find a handwritten note on the door indicating it had closed due to the storm.

The Fix: Add Good Friction. A term coined by MIT professor Renee Richardson Gosline, "Good Friction" requires that when external context (like a Level 5 storm) conflicts with standard scripts, the system pauses and verifies first.

Prioritizing 'Recency' But Erasing Loyalty

Recency bias in algorithms weights the last data point more heavily, potentially resulting in algorithmic erasure.

The Experience: A 20-year elite-status customer calls an airline, only to be greeted by the name of his niece (a nonmember relative for whom he recently booked a one-off ticket), and then is erroneously deprioritized in the automated journey as a nonmember. In many "growth audience" and immigrant households, economics are multigenerational and communal, with a single "lead user" facilitating purchases for extended family. This airline system's "memory" was shallow, seeing only the most recent transaction and ignoring a decades-long relationship because a reservation shared the same contact number.

The Fix: Focus on Holistic Design. AI must be weighted to recognize the arc of the customer journey, ensuring that loyalty isn't erased by a single data point or the nuances of communal purchasing.

To be sure, bad data is a universal problem, but the lack of situational intelligence in our AI systems hits growth audiences (like Black consumers) first and hardest. Because these audiences represent a disproportionate share of future consumption and have the most "cultural common denominators," their frictions are diagnostics for markets writ large. We aren't just solving for a niche by championing them as lead users; we are adopting more rigorous, empathetic, expansive, and effective standards that solve real problems for all people.
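The recency-versus-loyalty failure described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the airline's actual system: the function names, decay rates, and tenure weights are all invented for the example. It shows how a score driven purely by recency lets one fresh transaction outrank decades of history, while blending in relationship tenure restores the loyal customer.

```python
# Hypothetical sketch: all names, decay rates, and weights are invented.

def recency_score(interactions, decay=0.5):
    """Score a customer by recency alone: each interaction's value is
    discounted by how many months ago it happened, so the newest dominates."""
    return sum(value * (decay ** age_months) for age_months, value in interactions)

def holistic_score(interactions, tenure_years, decay=0.9, tenure_weight=2.0):
    """Blend a gentler recency signal with the length of the relationship."""
    return recency_score(interactions, decay) + tenure_weight * tenure_years

# 20 years of steady monthly bookings, the most recent two months ago
loyal_member = [(m, 1.0) for m in range(2, 242)]
# A single one-off booking made today for a relative on the same phone number
one_off_booking = [(0, 1.0)]

# Under aggressive recency weighting, the one-off outranks two decades of history
print(recency_score(one_off_booking) > recency_score(loyal_member))   # True

# A tenure-aware score restores the long relationship to the top
print(holistic_score(loyal_member, tenure_years=20) >
      holistic_score(one_off_booking, tenure_years=0))                # True
```

The design point is the second function: "holistic design" here just means the model's objective includes a term for the arc of the relationship, so no single data point can erase it.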


Category: E-Commerce

 

2026-02-27 11:30:00| Fast Company

At hundreds of Burger King restaurants across the U.S., there's a new invisible worker who's tracking which ingredients are in stock, analyzing daily sales data, and checking in on whether employees are saying "Thank you" and "You're welcome." It's an AI assistant named Patty.

According to Thibault Roux, Burger King's chief digital officer, the voice-activated chatbot is designed to help employees and managers handle tasks that might usually require pulling out a computer or consulting an instruction guide. Patty began showing up at select locations about a year ago, and is now in a pilot phase at approximately 500 Burger Kings. It's expected to roll out to the rest of the chain's U.S. locations by the end of the year.

On a day-to-day basis, Patty has an array of functions, from letting a manager know if a store is low on onions to helping an employee build a new burger. But it has another role that's raising quite a few eyebrows: analyzing Burger King locations based on friendliness by tracking employees' use of key phrases like "Welcome to Burger King," "Please," and "Thank you." Online, commenters are concerned that this functionality is a slippery slope toward 1984-style employee surveillance.

In an interview with Fast Company, though, Roux clarified that Patty is not being used to analyze individual employees' performance, and is instead imagined as a kind of coach. "It's truly meant to be a coaching and operational tool to really help our restaurants manage complexities and stay focused on a great guest experience," Roux says. "Guests want our service to be more friendly, and that's ultimately what we're trying to achieve here."

Patty, are we running low on Diet Coke?

Technically, Patty is the chatbot version of Burger King's assistant platform, which collects data from operations including drive-through conversations, inventory, and sales, and then uses AI to analyze patterns in that data.
For now, Patty operates on a customized model from OpenAI, though Roux says the technology is flexible enough that it could integrate with another partner in the future (like Anthropic or Gemini) depending on the company's needs. For managers and employees in stores, Roux says Patty operates similarly to something like Siri. Patty is activated by a small button on the side of an employee's headset, and they can ask it direct verbal questions related to their specific store (like recent sales figures or inventory updates) as well as more general company information, to which the bot will provide a verbal answer.

"If you're looking to clean the shake machine, [you can ask Patty] the procedures to clean it," Roux explains. "Or we have a lot of limited-time offers, and sometimes they can be cumbersome to remember. You can easily tap into Patty and be like, 'Hey, remind me, does the new maple bourbon barbecue build have crispy jalapeños?'"

Patty can also reach out to employees directly if it notices a pattern of interest. For example, if Patty thinks a specific store is out of lettuce, it might ping a manager to confirm. Once it's received confirmation, it can mark lettuce as sold out on that location's app and website, a process that previously would have required human intervention. Roux says franchisees and regional managers can decide how they want Patty to reach employees with information, whether it's through their headsets or via a text message (though the tech is programmed explicitly to never interrupt a worker during a customer interaction).

Insights from Burger King's assistant platform also live outside of employees' headsets. Managers can check information from the tool on an accompanying website or app. For example, Roux says, when a district manager is visiting a new store, they might ask Patty on the app, "What are the top three guest complaints at this location this week?" or "What are their top missing items?"
In an interview with Fast Company writer Jeff Beer earlier this month, Burger King President Tom Curtis said the assistant platform has already led to some significant menu changes. Curtis explained that the AI tracked all the times that team members said "I'm sorry, we don't have that" and linked them back to a common denominator: apple pie. In January, Burger King brought back its apple pie for the first time since 2020.

'We're in the idiocracy version of 1984'

Patty's more straightforward uses, like helping managers access sales data and check inventory, seem fairly predictable in the context of fast food. Where Burger King is really pushing Patty's use cases, though, is with its friendliness metric.

In an interview with The Verge on February 26, Roux said Patty would recognize phrases like "Welcome to Burger King," "Please," and "Thank you," and then give managers access to data on their location's friendliness performance based on those keywords. Mere hours after that piece went live, a thread on Patty in the subreddit r/technology had already amassed more than 15,000 upvotes and nearly 3,000 comments. Common refrains from users include comparing the technology to the surveillance state in George Orwell's novel 1984, labeling it authoritarian and dystopian, and accusing Burger King of employee surveillance.

"This would be criticized as being cartoonishly unrealistic in a sci-fi movie 10 years ago," one user wrote. Another added, "We're in the idiocracy version of 1984."

When asked about this response, Roux says the data from employees' conversations is anonymized, and that none of these friendliness metrics will be used for grading or assessing individuals. Further, he adds, Patty will not directly instruct employees on what to say or how to say it. Instead, data on friendliness will be shared with managers, who can use it for face-to-face coaching with their teams.

Still, it's unclear exactly how Patty is quantifying friendliness.
In a video explanation of the feature, a manager is shown asking the bot, "Is there anything that needs my immediate attention?" to which it responds, "The team's friendliness scores this morning were the highest this week." In an email to Fast Company, a Burger King spokesperson said, "In select pilot locations, we've explored using aggregated keywords, including common hospitality phrases, as one of several signals to help managers understand overall service patterns. The tool is not used to score individuals or enforce scripts." Burger King did not respond to Fast Company's request for clarification on how friendliness scores are calculated.

So far, Roux says he's seen growing interest in Patty from franchisees, with several managers making specific requests for future add-ons. "A lot of our franchisees . . . and regional general managers are very competitive, so they want to know, 'Hey, how do I compare to other restaurants?'" Roux says. "I think that's something that we're going to be rolling out. In fact, we were looking at some of the designs earlier this week with the franchisees. So this is only the beginning."


Category: E-Commerce

 

2026-02-27 11:09:00| Fast Company

Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has now implemented some restrictions, this incident revealed a serious weakness. Without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities.

Last fall, a mother discovered why her teenage daughter's mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She's not alone. Aura's State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls' emotions, including stress, jealousy, and loneliness (51% compared with 36% for boys). That's unacceptable, and we need to do better.

The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models that are trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.

Uneven playing field

In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly caused concerns about bias, fairness, and data privacy. A study published by the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases.

These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women's perspectives, they replicate existing inequities and introduce new risks. The pattern is clear.
AI is failing girls and women.

Pivotal moment

This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn's latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives.

Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn't blind optimism. Instead, it offers a perspective that is often missing in today's AI development.

Creating technology is an exercise of power and holds great responsibility. Since girls are often the most affected by AI's failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed ReThink, an anti-bullying tool, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard's Public Interest Tech Lab. Their achievements demonstrate the potential when women lead in tech development.

Smart steps

If we want safer, more responsible AI systems, three steps are essential. First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results.
For instance, one Girls Who Code student utilized the skills she learned to create an app called AIFinTech to help immigrant families manage their personal finances.

Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. This means ensuring gender diversity on AI ethics boards and that government AI committees are representative of the demographics most affected.

Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually.

We do not have to accept AI's flaws by default. We are witnessing AI's impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems, not just for themselves, but for everyone.


Category: E-Commerce

 

2026-02-27 11:00:00| Fast Company

As a young child, interior designer Jeremiah Brent and his mother visited open houses and model homes in his hometown of Modesto, California, as a form of daydreaming. Brent walked through the houses, imagining the people who might live there, building a fantasy around what these homes could be. Since then, Brent has turned his childhood design obsession into a sprawling career: He runs a 50-person design firm, moonlights on Queer Eye, and recently brokered his first bedding deal with Target.

Having come up in the industry through a series of audacious bets on himself, Brent has developed a sense of humor and pragmatism around his relationship with creativity and his role as a founder, designer, and collaborator. He's quick to poke fun at himself, noting that he's working on his control issues. ("If I had it my way I'd touch every hinge, every doorknob, every finish.") And he's clear that he absorbs as much as he can to consistently shape and influence his creative output: from a personal archive of design magazines to pop culture. ("I watch terrible, terrible TV.")

As Brent enters the second decade of Jeremiah Brent Design, he says his relationship with design and creativity has become more rooted in storytelling, informed by the clients he works for and the team he works with. "As time goes on, my work is known for a real kaleidoscope of design styles," Brent says. "Everybody is so different, and their stories and their narratives are so different. I really want to be known as somebody who executes your story, not somebody who executes what I do really well. I don't want to be one thing."

I'm an early riser. I don't need a ton of sleep. I usually get up around 4 or 4:30 a.m. I have the mornings to myself; my kids are all sleeping. I've got three hours of uninterrupted silence with far too much coffee. Music on, candles lit, and I work. A lot of times, I write, which is new.

I didn't start with a degree in design. It really was just one of those things that happened through osmosis. When I started the firm, I wanted it to be me and like five people sitting around the desk, dreaming up the most insane spaces, the most beautiful things.

[Photo: Trevor Tondro]

I'm super visual. My office is like a serial killer. A controlled serial killer.

I'm creatively always hungry. I'm always pulling and looking. I'm particularly inspired right now by the contrast and conflict between design styles and materials. When you bridge what was going on in, like, France in the 1930s with what was happening in the States in the 1980s? I think that conflict, and that contrast, is where all the original ideas lie. Somebody asked me, "Do you think taste is genetic?" I don't think taste is a recessive gene. I think it has so much to do with curiosity, audacity, travel, absorbing.

At my core, I'm a good storyteller. That's really where my strength is. I can listen. I can hear the nuances of what people need, and sometimes they're not even saying it. That was the basis for the firm. I didn't imagine it growing to the scale it has. Even though the company is 50-plus people, we still have that same synergy of five people sitting down at a table. There are so many different ways to make something beautiful. So that's where I'm at now. It's defining my lane of creativity and how I participate, how I nurture the creativity of my team.

[Photo: Trevor Tondro]

I always feel the most creative when I'm with the people I'm creating for. The biggest part of it is getting to know the people and understanding where they're from. What was the first room that ever held you? What was the most important space that you remember? At least this part of the creativity, for me, is earning people's trust. It's something that you're not given. You've gotta earn it.

The fantasy part of what I do is where the love story is. So I always kind of call out one of the most important moments of your day. Where does it start? Where is the middle? Where does it end? And that acts as the beginning of the ripple. You build from there. You know, the fantasy, that component of that conversation with a client, assures them that you understand what they value. And then I work backwards.

I sketch everything. I have to see the space and how you're going to move through it first before I dig into the intricacies of breaking everything down. It's all visual. So I'll draw everything, build the space out, prioritize. It's changed over time, and it changes with clients, but you know, it's always a conversation around what matters most to the client.

I've never said no to work, even when I should. This was the first year that I've had to be like, "Okay, well, we can't do that yet." Or "That's not gonna work." That feels weird to me. I feel a pivotal shift in my tenacious appetite for growth. The evolution becomes everybody else's, too. It's not just mine now. So I'm making sure I'm executing and illustrating the balance that I want everybody else to have in their life. I joke all the time with everybody I work with: I want you to make a lot of money, and I want you to love what you do.

[Photo: Trevor Tondro]

I just need to move and to travel, sometimes. We live in New York City . . . but then we have this farm in Portugal. I realized this year that I live between two extremes: I need the volume turned all the way up, or I need to go to Portugal, where the volume is completely turned down and nurtures me in a way that I never even thought was possible. In Portugal, I'm a nighttime person, and in New York, I'm a morning person. Each gives me different things.

I think trends are great if you're not beholden to them. It's a great way to have a conversation. It's a great way to travel visually and maybe look at something that you would not have normally seen. To use them as a marketing tool is annoying. Just because turquoise is a hot color right now doesn't mean you need to paint your room turquoise. But let's examine turquoise. What do we like about it? Where did it start? It's fun.

I've had a crash course on how to collaborate because I married another interior designer. Which I do not suggest, because there are a lot of opinions from gay decorators in the house. I think it was an interesting exercise for me, because, especially creatively, if I had my way with our home, it would be dark, with one dimly lit room with one bowl on a table. Very wabi-sabi. It's my husband's worst nightmare. He would live in, like, you know, a French château. He's like Marie Antoinette. So, we have found a balance and a joint style that works for the both of us.

I'm not pretending that I'm the most talented person in the room. I may be the most passionate, but definitely not the most talented, and I've seen so many different times from collaborations how far you can take a project with other people.


Category: E-Commerce

 

2026-02-27 11:00:00| Fast Company

It's sometime in the future, and Elon Musk, Jeff Bezos, and Sam Altman have joined forces on a new venture called Energym. The global chain of gyms is designed to harness the energy of the unemployed as they exercise on machines. The generated electricity feeds the AI servers that put them out of a job. Think Planet Fitness meets The Matrix, but without living in a simulation.

Energym's mission is to feed the AI machines with human sweat, and it's a great business model. By 2030, almost 80% of people have lost their jobs. If you have no money and no purpose, you may as well use all your free time to work out and feed AI server fans with some kilowatts. "It solves our need for energy and your need for purpose," Altman says in a promotional video.

Energym, as you probably already know, is not real. But it very well could be. In this era, where so many brands and startups are constantly trying to flip the most inane ideas into the Next Big Thing to get a $50 billion valuation and an IPO, this absurd premise makes total sense. The mockumentary-style ad for Energym that has been circulating on the internet captures the current AI startup circle jerk better than any I've seen online so far.

https://www.instagram.com/reels/DVLE-QJEf0n

The advertisement was created by Hans Buyse and Jan De Loore. The latter, who wrote the copy for the video as well as edited and produced it, is the cofounder of a one-man AI creative studio in Belgium called Kitchhock. The company has been creating all types of videos since 2011, back when there was no Seedance or Veo. But now, De Loore is using his creative chops and the latest generative video AI tech to make real ads for real companies in Belgium through his AI video studio arm, AiCandy. Energym is just a satirical ad designed to promote his own business and destroy the very core of those who make the technology that powers his business.
(Incidentally, Energym is the same name as a company that makes a very real $2,800 static bicycle designed for exercise and to produce electricity, but it's not related to AiCandy's fake ad.)

The Energym commercial is obviously tongue-in-cheek, as are many other videos we have seen in recent months that make fun of our increasing dependency on artificial intelligence and its power. But this one hits particularly hard. For some, it may be the Black Mirror-esque nature of it. (There's an actual episode of the British TV series that feels like an extended version of the ad.) Personally, it connects with the WTF-ness that the current AI situation is provoking in me on different levels. The fear of what's next. The dread of seeing reality destroyed. The disgust for the fat cats who are running this charade with no checks and nobody's permission. I find it hard to pinpoint what it is. It's just an absurd exaggeration with no logical basis that hits too close for comfort and, at the same time, makes me happy.


Category: E-Commerce

 
