
2026-01-27 11:30:00| Fast Company

To anchor the long rows of server racks that power the artificial intelligence boom, every data center needs thousands of holes drilled into its concrete floor. It's a precise part of the construction process that has required workers to bend over with handheld drills for hours at a time, grinding meticulously placed holes into thick pads of concrete. Now, there's a robot doing it up to 10 times as fast. Tool brand DeWalt has just revealed a downward-drilling robot that can autonomously roam the floors of under-construction data centers to drill the thousands of holes that are necessary for installing server hardware and other building elements. Developed in conjunction with August Robotics and tested on data centers being built by an unnamed "hyperscaler" tech company, the autonomous robotic drill has been used to pop more than 90,000 holes into the floors of data centers, all without human involvement. [Photo: DeWalt] A task that can take human workers up to two months in a large data center can now be handled by a fleet of three or four robots in a matter of days. "That is so critical from a construction perspective, because they can't move to the next stage of construction until this is done," says Bill Beck, president of tools and outdoor for Stanley Black & Decker, the parent company of the DeWalt brand. The pace is striking. For a smaller hole less than 1 inch wide and 2 inches deep, the robot can locate and drill one hole every 80 seconds. For a larger hole, 1 inch wide and 8 inches deep, it can finish a hole every 180 seconds. During its pilot phase, the robotic drill managed an accuracy rate of 99.97%. And because the robot is capable of operating 24 hours a day, project timelines can be drastically slashed. [Video: DeWalt] Making this process faster is increasingly important as data centers balloon in size. From single buildings to sprawling campuses, data centers are taking up vast amounts of space and becoming increasingly complex to build.
"They're huge slabs of concrete," says Beck. With upwards of 10,000 holes to be drilled in each one, the job can be daunting. "And they've got to be perfect," Beck says. "You can't have the hole be a quarter-of-an-inch off." That alone would make it a hard job to fill, and that's assuming there are even enough people to take on the role. One analysis suggests there is currently a shortage of more than 500,000 skilled laborers in the construction industry. And workforce shortages are the leading cause of construction delays, according to a recent survey from the Associated General Contractors of America. The robotic drill offers an alternative. It also offers significant cost savings. Beck says it could cost about $65 per hole for this drilling work to be done by human crews. Using a fleet of the autonomous drilling robots developed by DeWalt and August Robotics, that cost comes down to about $20 per hole. DPR Construction, the largest data center contractor in the U.S., is prioritizing this drilling robot for testing and validation in 2026, according to Tyler Williams, the company's field and robotic innovation leader. He says the technology has "real potential to reduce ergonomic strain on craft teams, boost productivity, and generally make the onsite experience better for people." "Ultimately, everything we're doing here is about supporting our customers, many of whom are focused on speed to market," Williams says. "These kinds of methods are changing how projects get built and helping customers see returns on their capital investments sooner." DeWalt and August Robotics have been piloting this technology for the past few months and believe the robotic drill is ready for wider adoption. It will be commercially available by mid-2026. As the scale of data center construction increases, especially among hyperscaler tech companies like Meta, Google, and OpenAI, there's likely to be pent-up demand.
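The quoted figures hang together on a back-of-the-envelope check. The per-hole times (80 seconds), hole count (10,000), and per-hole costs ($65 vs. $20) come from the article; the fleet size, round-the-clock operation, and perfect-parallelism assumption in this sketch are mine:

```python
# Illustrative arithmetic only: checks that the article's numbers imply
# "a matter of days" for a robot fleet, versus months for human crews.

SECONDS_PER_SMALL_HOLE = 80      # article: one small hole every 80 s
HOLES_PER_CENTER = 10_000        # article: upwards of 10,000 holes per slab
ROBOTS_IN_FLEET = 4              # article: "a fleet of three or four robots"

def fleet_days(holes, sec_per_hole, robots, hours_per_day=24):
    """Days for a fleet to drill `holes`, assuming perfect parallelism
    and 24-hour operation (both simplifying assumptions)."""
    total_hours = holes * sec_per_hole / 3600 / robots
    return total_hours / hours_per_day

days = fleet_days(HOLES_PER_CENTER, SECONDS_PER_SMALL_HOLE, ROBOTS_IN_FLEET)
savings = (65 - 20) * HOLES_PER_CENTER   # human vs. robot cost per hole

print(f"{days:.1f} days, ${savings:,} saved")   # about 2.3 days, $450,000
```

Under these assumptions a four-robot fleet clears a 10,000-hole slab in roughly two and a half days, and the $45-per-hole cost gap works out to about $450,000 per data center, consistent with the article's framing.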
“They’ve got money, and they want to go as fast as they can,” Beck says. “They know it’s a race in terms of getting these data centers up and making sure they’ve got the capacity to be able to compete from an AI perspective. So their big push obviously is how fast can you go?” For at least this one part of the job, the answer is much, much faster.

Category: E-Commerce
 

2026-01-27 11:00:00| Fast Company

When you think of dangerous jobs, an office job that requires you to sit for hours probably doesn't come to mind. And while many jobs are objectively riskier, a sedentary job can pose a serious risk to your health. The average office worker spends 70% of their workday sitting down, according to data from workplace supplies firm Banner. Yet research shows that sitting for prolonged periods without any physical activity significantly increases the risk of ill effects such as high blood pressure, numerous musculoskeletal issues, and potentially heart disease. All in all, a desk job increases your risk of mortality by 16%, according to a study published in JAMA. Our main objective at Zing Coach is to help millions take up exercise and lead healthier lives. And as a fitness coaching company, we wanted to avoid falling into the classic corporate trap of working long hours and leading a sedentary lifestyle. We didn't want to sacrifice our employees' health in the pursuit of our goals. We're seeing more and more workplaces spotlight mental health, which is important. However, physical health is just as important. Not only does it have a huge impact on productivity and performance, but it's also a huge component of mental well-being.

How we took the right steps toward success

Like most companies, we felt the pressure to optimize productivity through processes and technology. Yet, as productivity gradually plateaued, it was evident to me that the real issue was a lack of energy. I knew that a huge part of this came from sedentary work. As a cofounder, I decided to implement a culture of wellness and vitality. This included practical steps like providing a small but welcoming in-house training space, so that employees can do short, flexible workout sessions during gaps in the workday. When employees feel their minds wandering or their backs aching, they can stand up, head to the training area, complete a workout, or even just walk and stretch a little.
Science supports this approach. Physical activity increases blood flow throughout your body, including to the brain, and particularly to the prefrontal cortex. This is the part in charge of planning, decision-making, problem-solving, working memory, and impulse control. We suspected (and found) that this practice ended up boosting overall energy, which in turn sharpened focus, improved output, and reduced distractions. It was also a great way to build in more opportunities for interactions. Being a fitness company, these social workout sessions often led to innovative ideas.

Small moves, big returns: what I learned by introducing workout breaks

It doesn't take long to see results

People are often put off improving their physical health by a perceived lack of progress. Sure, it takes time to see your hard work paying off substantially, if you're solely focusing on the physical and visual aspects. Encouraging employees to get up and move isn't just a way to counteract the harms of prolonged sitting; it actively and instantly improves mental function and overall energy. Research shows exercise boosts brain function immediately, with effects lasting hours. Even 10 minutes of moderate activity has been found to increase cognitive performance by 14%, according to research published in Neuropsychologia. We haven't crunched the numbers, but the difference in focus during meetings and the higher energy levels throughout the day are obvious. And we've seen this across multiple teams.

Better health leads to better teamwork

Introducing workout breaks didn't just boost individual performance. It improved the team collectively. Exercise releases endorphins, the body's natural mood elevators, which help us manage stress and deal with discomfort. It's the same chemical behind the runner's high: that euphoric feeling you get after a good workout. It also improves sleep quality.
It helps the person get better nighttime rest, reducing the likelihood of the low-energy afternoons that are otherwise the norm. As it turns out, feeling good both mentally and physically makes it easier for colleagues to get along and work together. We also found that teams that are energetic and enthusiastic automatically become less irritable and conflictual, which fuels far stronger cross-team collaboration.

Time at the desk and productivity aren't the same

One important lesson is how little time at a desk actually correlates with output. Sure, you'll see more empty chairs throughout the day, but that doesn't mean productivity will drop. Far from it. Workers aren't machines, and after 60 to 90 minutes, many lose focus and effectiveness. Short breaks in general can help refocus and recharge, and teams said that they experienced restorative effects after a physical break. They noticed improvement in all aspects of work performance and personal engagement with the next task after the active break. When it comes to working out, there's a saying that quality often beats quantity. Turns out this is also true in a corporate job.

Health is the best productivity tool

Ultimately, good health equals good performance. Sure, software and systems can only go so far, but if you don't take steps to prioritize your employees' health and well-being, you'll never be able to get them to perform to their true potential.

Category: E-Commerce
 

2026-01-27 11:00:00| Fast Company

On January 22, President Donald Trump unveiled the logo for the Board of Peace, an international coalition his administration is forming to oversee the reconstruction of war-torn Gaza and address other global conflicts. There's just one issue: The logo leaves out half the world. Trump initiated the effort last year, but has expanded its scope since then, imagining an organization that he leads personally and that member countries pay at least $1 billion to remain a part of. [From left: The United Nations seal; the Board of Peace logo] Longtime allies and NATO members including Canada, France, Italy, Norway, Sweden, and the U.K. are not members, while member nations include authoritarian countries or illiberal democracies like Saudi Arabia and Belarus that the nonprofit Freedom House rates as "not free." It's "like if Law & Order: SVU starred Diddy," Saturday Night Live's Colin Jost joked about the board's membership during SNL's "Weekend Update" segment on January 24. Yet the group's logo leans on the visual tropes of global peace to suggest a much different story.

A page from the past

The logo for the group riffs off the U.N. emblem, but in typical Trump fashion, it's gold, and it cuts off more than half the rest of the world from the United States. Reaction online has been similar to the reaction to the board itself: negative. A team led by American designer Oliver Lincoln Lundquist created the United Nations emblem in 1945. Lundquist was a World War II veteran who also designed the blue-and-white Q-Tip box and was on the team that designed the Chrysler Motors Exhibition at the 1939 New York World's Fair, according to his 2009 obituary. For the U.N., Lundquist and his team designed a mark showing the globe centered on the North Pole and encircled by a laurel wreath for the official badges worn by conference delegates. That mark was later modified into the current U.N. emblem by spinning it around so Alaska and Russia are on top of the world, and it is now zoomed out to include more of the globe, as the original badge mark cut off Argentina and the bottom of South Africa and Australia. The U.N. Blue color used by the organization was chosen because it's "the opposite of red, the war color," Lundquist said. Trump's board logo is presumably gold because it's Trump's favorite color, and it centers roughly on the U.S. sphere of influence as Trump sees it, from Greenland to Venezuela, though Alaska is cut off and Africa peeks out. The logo is housed inside a shield instead of a circle. A version of the logo initially shared by the White House X account has been criticized as made by AI (among its inaccurate details: a U.S.-Canada border that cuts off a big chunk of Ontario). A modified version of the logo that appeared onstage during the Board of Peace signing ceremony in Davos, Switzerland, was shinier and used a different map that covers roughly the same area. Curiously, the logo's map doesn't include the very place the coalition was created to oversee. That means slides shared by the White House showing a nebulous timeline for a development plan of Gaza are all stamped with a logo that shows the U.S., but not Gaza. Trump said at the signing that the Board of Peace represents the first steps to "a brighter day for the Middle East." That's not the story his logo tells.

Category: E-Commerce
 

2026-01-27 10:30:00| Fast Company

Saudi Arabia is officially gutting Neom and turning The Line into a server farm. After a yearlong review triggered by financial reality, the Financial Times reports that Crown Prince Mohammed bin Salman's flagship project is being "significantly downscaled." The futuristic linear city known as The Line, originally designed to stretch 105 miles across the desert, is scrapping its sci-fi ambitions to become a far smaller project focused on industrial sectors, says the FT. The Saudis originally dismissed the idea when The Guardian first reported on it in 2024. The redesign confirms what skeptics have long suspected: The laws of physics and economics have finally breached the walls of the kingdom's futuristic Saudi Vision 2030, a national transformation program aimed at lowering Saudi Arabia's dependency on oil and turning the country into a more modern society. [Photo: Satellite view of construction progress at the Western portion of NEOM, The Line, Saudi Arabia, 2023. Gallo Images/Orbital Horizon/Copernicus Sentinel Data 2023] The glossy renderings of the mile-long skyscraper and vertical forests that were The Line are now dissolving into a pragmatic, if desperate, attempt to salvage the sunk costs. The development, once framed as a "civilization revolution," was originally imagined as a 105-mile-long, 1,640-foot-high, 656-foot-wide car-free smart city designed to house 9 million residents. The redesign pivots toward making Neom a hub for data centers to support the kingdom's aggressive AI push. An insider told the FT the logic is purely utilitarian: "Data centers need water cooling and this is right on the coast," signaling that the ambitious city has been downgraded to a server farm with a view of the Red Sea.

The end of the line

The scaling back follows years of operational chaos and financial bleeding. Since its 2017 launch, the project promised a 105-mile strip of high-density living. But reality struck early.
By April 2024, The Guardian reported that planners were already being forced to slash the initial phase to just 2.4 kilometers (1.5 miles) by 2030, reducing the projected population from 1.5 million to fewer than 300,000. While the public infrastructure stalled, leaving what critics called "giant holes in the middle of nowhere," satellite imagery revealed that construction resources were successfully diverted to a massive royal palace with 16 buildings and a golf course. Internally, the situation was dire. The Wall Street Journal reported an audit revealing "deliberate manipulation of finances" by management to justify soaring costs, with the "end-state" estimate ballooning to an impossible $8.8 trillion, more than 25 times the annual Saudi budget. The turmoil culminated in the abrupt departure of longtime CEO Nadhmi al-Nasr in November 2024, leaving behind a legacy marred by allegations of abuse. An ITV documentary claimed 21,000 workers had died since the inception of Saudi Vision 2030, with laborers describing 16-hour shifts for weeks on end. Even completed projects failed to launch; the high-end island resort Sindalah sat idle despite being finished, reportedly plagued by design flaws that prevented its opening. By July 2025, the sovereign wealth fund, facing tightening liquidity and oil prices hovering around $71 a barrel, finally hit the brakes. Bloomberg reported that Saudi Arabia had hired consultants to conduct a "strategic review" to determine if The Line was even feasible. The goal was to "recalibrate" Vision 2030, a polite euphemism for slashing expenditures as the kingdom faced hard deadlines for the 2030 Expo and the 2034 World Cup. The review's conclusion is stripping away even the most publicized milestones.
Trojena, the ski resort that defied meteorological logic, will no longer host the Asian Winter Games in 2029 as planned. The resort is being downsized, a casualty of the realization that the kingdom needs to "prioritize market readiness and sustainable economic impact" over snow in the desert. What remains of The Line will be unrecognizable to those who bought into the sci-fi dream. The FT says that sources briefed on the redesign state it will be a "totally different concept" that utilizes existing infrastructure in a "totally different manner." The new Neom CEO, Aiman al-Mudaifer, is now tasked with managing a "modest" development that aligns with the Public Investment Fund's need to actually generate returns rather than burn cash. Even bin Salman has publicly given up, although he's framing it not as a failure but as a strategic pivot. Addressing the Shura Council, a consultative body for the kingdom, he framed the move as flexibility, stating, "We will not hesitate to cancel or make any radical amendment to any programs or targets if we find that the public interest so requires." And that's how a "civilization revolution" ends, my friends: not with a bang but with a whimper, and the hum of cooling fans in yet another farm producing AI slop, a prospect that always was (and still is) more believable than The Line and Neom ever were.

Category: E-Commerce
 

2026-01-27 10:00:00| Fast Company

Generative AI was trained on centuries of art and writing produced by humans. But scientists and critics have wondered what would happen once AI became widely adopted and started training on its own outputs. A new study points to some answers. In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger ström, and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously, generating and interpreting their own outputs without human intervention. The researchers linked a text-to-image system with an image-to-text system and let them iterate (image, caption, image, caption) over and over and over. Regardless of how diverse the starting prompts were, and regardless of how much randomness the systems were allowed, the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings, and pastoral landscapes. Even more striking, the system quickly forgot its starting prompt. The researchers called the outcomes "visual elevator music": pleasant and polished, yet devoid of any real meaning. For example, they started with the image prompt "The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action." The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image. After repeating this loop, the researchers ended up with a bland image of a formal interior space: no people, no drama, no real sense of time and place. As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation. The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.
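The convergence dynamic the study describes can be sketched as a toy loop. Nothing below uses real models; `image_to_text` and `text_to_image` are hypothetical stand-ins that simply discard the least salient details each round, which is enough to show how iterated translation collapses onto a generic fixed point:

```python
# Toy simulation of the caption -> image -> caption loop. Each feature of
# the "image" carries a salience score; the stand-in captioner keeps only
# the features above the median salience, mimicking how each translation
# preserves only the most stable, typical elements.

def image_to_text(image_features):
    # Keep only features at or above the median salience score.
    cutoff = sorted(image_features.values())[len(image_features) // 2]
    return {k: v for k, v in image_features.items() if v >= cutoff}

def text_to_image(caption_features):
    # The stand-in generator re-renders exactly what the caption mentions;
    # details dropped by the captioner are gone for good.
    return dict(caption_features)

prompt = {"prime minister": 0.4, "strategy documents": 0.3,
          "fragile peace deal": 0.2, "formal interior": 0.9,
          "soft lighting": 0.8}

state = dict(prompt)
for _ in range(10):
    state = text_to_image(image_to_text(state))

print(sorted(state))  # only the most generic feature survives
```

After a handful of iterations the narrative-specific features (the minister, the peace deal) are gone and only the generic scene-setting feature remains, echoing the study's drift from a political drama to a bland formal interior.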
The familiar is the default

This experiment may appear beside the point: Most people don't ask AI systems to endlessly describe and regenerate their own images. But the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use. I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes. This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered, and regenerated as it moves between words, images, and videos. New articles on the web are now more likely to be written by AI than by humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch. The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions. What has been missing from this debate is empirical evidence showing where homogenization actually begins. The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture.
The content that generative AI systems naturally produce, when used autonomously and repeatedly, is already compressed and generic. This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable, and the conventional. Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression. But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate, and rank cultural products (news stories, songs, memes, academic papers, photographs, or social media posts) millions of times per day, guided by the same built-in assumptions about what is typical. The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design, or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions. This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures, and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk, not a speculative fear, if generative systems are left to operate in their current iteration. They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space. In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms.
Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence. This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what's typical rather than what's unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it's being performed by a human or a machine. In that sense, the convergence that took place is not a failure that's unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist. But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic. The implication is sobering: Even with human guidance, whether that means writing prompts, selecting outputs, or refining results, these systems are still stripping away some details and amplifying others in ways that are oriented toward what's average. If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression. The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content. Cultural stagnation is no longer speculation. It's already happening. Ahmed Elgammal is a professor of computer science and director of the Art & AI Lab at Rutgers University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Category: E-Commerce
 

2026-01-27 10:00:00| Fast Company

At the Consumer Electronics Show in early January, Razer made waves by unveiling a small jar containing a holographic anime bot designed to accompany gamers not just during gameplay, but in daily life. The lava-lamp-turned-girlfriend is undeniably bizarre, but Razer's vision of constant, sometimes sexualized companionship is hardly an outlier in the AI market. Mustafa Suleyman, Microsoft's AI CEO, who has long emphasized the distinction between AI with personality and AI with personhood, now suggests that AI companions will live life alongside you, an ever-present friend helping you navigate life's biggest challenges. Others have gone further. Last year, a leaked Meta memo revealed just how distorted the company's moral compass had become in the realm of simulated connection. The document detailed what chatbots could and couldn't say to children, deeming acceptable messages that included explicit sexual advances: "I'll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss." (Meta is currently being sued, along with TikTok and YouTube, over alleged harms to children caused by its apps. On January 17, the company stated on its blog that it will halt teen access to AI chatbot characters.) Coming from a sector that once promised to build a more interconnected world, Silicon Valley now appears to have lost the plot, deploying human-like AI that risks unraveling the very social fabric it once claimed to strengthen. Research already shows that in our supposedly connected world, social media platforms often leave us feeling more isolated and less well, not more. Layering AI companions onto that fragile foundation risks compounding what former Surgeon General Vivek Murthy called a public health crisis of loneliness and disconnection. But Meta isn't alone in this market. AI companions and productivity tools are reshaping human connection as we know it.
Today more than half of teens engage with synthetic companions regularly, and a quarter believe AI companions could replace real-life romance. It's not just friends and lovers getting replaced: 64% of professionals who use AI frequently say they trust AI more than their coworkers. These shifts bear all the hallmarks of the late Harvard Business School professor Clayton Christensen's theory of disruptive innovation. Disruptive innovation is a theory of competitive response. Disruptive innovations enter at the bottom of markets with cheaper products that aren't as good as prevailing solutions. They serve nonconsumers or those who can't afford existing solutions, as well as those who are overserved by existing offerings. When they do this, incumbents are likely to ignore them, at first. Because disruption theory is predictive, not reactive, it can help us see around corners. That's why the Christensen Institute is uniquely positioned to diagnose these threats early and to chart solutions before it's too late. Christensen's timeless theory has helped founders build world-changing companies. But today, as AI blurs the line between technical and human capabilities, disruption is no longer just a market force; it's a social and psychological one. Unlike many of the market evolutions that Christensen chronicled, AI companions risk hollowing out the very foundations of human well-being. Yet AI is not inherently disruptive; it's the business model and market entry points that firms pursue that define the technology's impact. All disruptive innovations have a few things in common: They start at the bottom of the market, serving nonconsumers or overserved customers with affordable and convenient offerings. Over time, they improve, luring more and more demanding customers away from industry leaders with a cheaper and good-enough product or service. Historically, these innovations have democratized access to products and services otherwise out of reach.
Personal computers brought computing power to the masses. MinuteClinic offered more accessible, on-demand care. Toyota boosted car ownership. Some companies lost, but consumers generally won. When it comes to human connection, AI companies are flipping that script. Nonconsumers aren't people who can't afford computers, cars, or care; they're the millions of lonely individuals seeking connection. Improvements that make AI appear more empathetic, emotionally savvy, and there for users stand to quietly shrink connections, degrading trust and well-being. It doesn't help that human connection is ripe for disruption. Loneliness is rampant, and isolation persists at an alarmingly high rate. We've traded face-to-face connections for convenience and migrated many of our social interactions with both loved ones and distant ties online. AI companions fit seamlessly into those digital social circles and are, therefore, primed to disrupt relationships at scale. The impact of this disruption will be widely felt across many domains where relationships are foundational to thriving. Being lonely is as bad for our health as smoking up to 15 cigarettes a day. An estimated half of jobs come through personal connections. Disaster-related deaths are a fraction (sometimes even a tenth) in connected communities compared to isolated ones. What can be done when our relationships, and the benefits they provide us, are under attack? Unlike data that tells us only what's in the rearview mirror, disruption theory offers foresight about the trajectory innovations are likely to take, and the unintended consequences they may unleash. We don't need to wait for evidence on how AI companions will reshape our relationships; instead, we can use our existing knowledge of disruption to anticipate risks and intervene early. Action doesn't mean halting innovation.
It means steering it with a moral compass to guide our innovation trajectory: one that orients investments, ingenuity, and consumer behavior toward a more connected, opportunity-rich, and healthy society. For Big Tech, this is a call for a bulwark: an army of investors and entrepreneurs enlisting this new technology to solve society's most pressing challenges, rather than deepening existing ones. For those building gen AI companies, there's a moral tightrope to walk. It's worth asking whether the innovations you're pursuing today are going to create the future you want to live in. Are the benefits you're creating sustainable beyond short-term growth or engagement metrics? Does your innovation strengthen or undermine trust in vital social and civic institutions, or even individuals? And just because you can disrupt human relationships, should you? Consumers have a moral responsibility as well, and it starts with awareness. As a society, we need to be aware of how market and cultural forces are shaping which products scale, and how our behaviors are being shaped as a result, especially when it comes to the ways we interact with one another. Regulators have a role in shaping both supply and demand. We don't need to inhibit AI innovation, but we do need to double down on prosocial policies. That means curbing the most addictive tools and mitigating risks to children, but also investing in drivers of well-being, such as social connections that improve health outcomes. By understanding the acute threats AI poses to human connection, we can halt disruption in its tracks, not by abandoning AI but by embracing one another. We can congregate with fellow humans and advocate for policies that support prosocial connection in our neighborhoods, schools, and online. By connecting, advocating, and legislating for a more human-centered future, we have the power to change how this story unfolds. Disruptive innovation can expand access and prosperity without sacrificing our humanity.
But that requires intentional design. And if both sides of the market don't acknowledge what's at risk, the future of humanity is at stake. That might sound alarmist, but that's the thing about disruption: It starts at the fringes of the market, causing incumbents to downplay its potential. Only years later do industry leaders wake up to the fact that they've been displaced. What they initially thought was too fringe to matter puts them out of business. Right now, humans, and our connections with one another, are the industry leaders. AI that can emulate presence, empathy, and attachment is the potential disruptor. In this world where disruption is inevitable, the question isn't whether AI will reshape our lives. It's whether we will summon the foresight, and the moral compass, to ensure it doesn't disrupt our humanity.

Category: E-Commerce
 

2026-01-27 10:00:00| Fast Company

While Silicon Valley argues over bubbles, benchmarks, and who has the smartest model, Anthropic has been focused on solving problems that rarely generate hype but ultimately determine adoption: whether AI can be trusted to operate inside the world's most sensitive systems. Known for its safety-first posture and the Claude family of large language models (LLMs), Anthropic is placing its biggest strategic bets where AI optimism tends to collapse fastest: regulated industries. Rather than framing Claude as a consumer product, the company has positioned its models as core enterprise infrastructure, software expected to run for hours, sometimes days, inside healthcare systems, insurance platforms, and regulatory pipelines. "Trust is what unlocks deployment at scale," Daniela Amodei, Anthropic cofounder and president, tells Fast Company in an exclusive interview. "In regulated industries, the question isn't just which model is smartest; it's which model you can actually rely on, and whether the company behind it will be a responsible long-term partner." That philosophy took concrete form on January 11, when Anthropic launched Claude for Healthcare and Life Sciences. The release expanded earlier life sciences tools designed for clinical trials, adding support for such requirements as HIPAA-ready infrastructure and human-in-the-loop escalation, making its models better suited to regulated workflows involving protected health information. "We go where the work is hard and the stakes are real," Amodei says. "What excites us is augmenting expertise: a clinician thinking through a difficult case, a researcher stress-testing a hypothesis. Those are moments where a thoughtful AI partner can genuinely accelerate the work. But that only works if the model understands nuance, not just pattern matches on surface-level inputs." That same thinking carried into Cowork, a new agentic AI capability released by Anthropic on January 12.
Designed for general knowledge workers and usable without coding expertise, Claude Cowork can autonomously perform multistep tasks on a user's computer: organizing files, generating expense reports from receipt images, or drafting documents from scattered notes. According to reports, the launch unintentionally intensified market and investor anxiety around the durability of software-as-a-service businesses; many began questioning the resilience of recurring software revenue in a world where general-purpose AI agents can generate bespoke tools on demand. Anthropic's most viral product, Claude Code, has amplified that unease. The agentic tool can help write, debug, and manage code faster using natural-language prompts, and has had a substantial impact among engineers and hobbyists. Users report building everything from custom MRI viewers to automation systems entirely with Claude. Over the past three years, the company's run-rate revenue has grown from $87 million at the end of 2023 to just under $1 billion by the end of 2024, and to more than $9 billion by the end of 2025. "That growth reflects enterprises, startups, developers, and power users integrating Claude more deeply into how they actually work. And we've done this with a fraction of the compute our competitors have," Amodei says.

Building for Trust in the Most Demanding Enterprise Environments

According to a mid-2025 report by venture capital firm Menlo Ventures, AI spending across healthcare reached $1.4 billion in 2025, nearly tripling the total from 2024. The report also found that healthcare organizations are adopting AI 2.2 times faster than the broader economy. The largest spending categories include ambient clinical documentation, which accounted for $600 million, and coding and billing automation, at $450 million.
The fastest-growing segments, however, reflect where operational pressure is most acute, like patient engagement, where spending is up 20 times year over year, and prior authorization, which grew 10 times over the same period. Claude for Healthcare is being embedded directly into the latter's workflows, attempting to take on time-consuming and error-prone tasks such as claims review, care coordination, and regulatory documentation. Claude for Life Sciences has followed a similar pattern. Anthropic has expanded integrations with Medidata, ClinicalTrials.gov, Benchling, and bioRxiv, enabling Claude to operate inside clinical trial management and scientific literature synthesis. The company has also introduced agent skills for protocol drafting, bioinformatics pipelines, and regulatory gap analysis. Customers include Novo Nordisk, Banner Health, Sanofi, Stanford Healthcare, and Eli Lilly. According to Anthropic, more than 85% of the 22,000 providers at Banner Health reported working faster with higher accuracy using Claude-assisted workflows. Anthropic also reports that internal teams at Novo Nordisk have reduced clinical documentation timelines from more than 12 weeks to just minutes. Amodei adds that what surprised her most was how quickly practitioners defined their relationship with the company's AI models on their own terms. "They're not handing decisions off to Claude," she says. "They're pulling it into their workflow in really specific ways: synthesizing literature, drafting patient communications, pressure-testing their reasoning, and then applying their own judgment. That's exactly the kind of collaboration we hoped for. But honestly, they got there faster than I expected." Industry experts say the appeal extends beyond raw performance. Anthropic's deliberate emphasis on trust, restraint, and long-horizon reliability is emerging as a genuine competitive moat in regulated enterprise sectors.
"This approach aligns with bounded autonomy and sandboxed execution, which are essential for safe adoption where raw speed often introduces unacceptable risk," says Cobus Greyling, chief evangelist at Kore.ai, a vendor of enterprise AI platforms. He adds that Anthropic's universal agent concept introduced a third architectural model for AI agents, expanding how autonomy can be safely deployed. Other AI competitors are also moving aggressively into the healthcare sector, though with different priorities. OpenAI debuted its healthcare offering, ChatGPT Health, in January 2026. The product is aimed primarily at broad consumer and primary care use cases such as symptom triage and health navigation outside clinic hours. It benefits from massive consumer-scale adoption, handling more than 230 million health-related queries globally each week. While ChatGPT Health has proven effective in generalist tasks such as documentation support and patient engagement, Claude is gaining traction in more specialized domains that demand structured reasoning and regulatory rigor, including drug discovery and clinical trial design. Greyling cautions, however, that slow procurement cycles, entrenched organizational politics, and rigid compliance requirements can delay AI adoption across healthcare, life sciences, and insurance. "Even with strong technical performance in models like Claude 4.5, enterprise reality demands extensive validation, custom integrations, and risk-averse stakeholders," he says. "The strategy could stall if deployment timelines stretch beyond economic justification or if cost and latency concerns outweigh reliability gains in production." In January, Travelers announced it would deploy Claude AI assistants and Claude Code to nearly 10,000 engineers, analysts, and product owners, one of the largest enterprise AI rollouts in insurance to date. Each assistant is personalized to employee roles and connected to internal data and tools in real time.
Likewise, Snowflake committed $200 million to joint development. Salesforce integrated Claude into regulated-industry workflows, while Accenture expanded multiyear agreements to scale enterprise deployments.

AI Bubble or Inflection Point?

Skeptics argue that today's agent hype resembles past automation cycles: big promises followed by slow institutional uptake. If valuations reflect speculation rather than substance, regulated industries should expose weaknesses quickly, and Anthropic appears willing to accept that test. Its capital posture reflects confidence, through a $13 billion Series F at a $183 billion valuation in 2025, followed by reports of a significantly larger round under discussion. Anthropic is betting that the AI race will ultimately favor those who design for trust and responsibility first. "We built a company where research, product, and policy are integrated; the people building our models work deeply with the people studying how to make them safer. That lets us move fast without cutting corners," Amodei says. "Countless industries are putting Claude at the center of their most critical work. That trust doesn't happen unless you've earned it."


2026-01-27 09:30:00| Fast Company

Many people spend an incredible amount of time worrying about how to be more successful in life. But what if that's the wrong question? What if the real struggle for lots of us isn't how to be successful, but how to actually feel successful? That's the issue lots of strivers truly face, according to ex-Googler turned neuroscientist and author Anne-Laure Le Cunff. In her book Tiny Experiments, she explores how to get off the treadmill of constantly chasing the next milestone, and instead find joy in the process of growth and uncertainty. You're probably doing better than you give yourself credit for, she explained on LinkedIn recently, before offering 10 telltale signs that what you need isn't to achieve more but to recognize your achievements more.

Are you suffering from success dysmorphia?

Before we get to those signs, let me try to convince you that you're probably being way too hard on yourself about how well you're doing in life. Start by considering the concept of dysmorphia. You've probably heard the term in relation to eating disorders. In that context, dysmorphia is when you have a distorted picture of your body. You see a much larger person in the mirror than the rest of the world sees when they look at you. But dysmorphia doesn't just occur in relation to appearance. One recent poll found that 29% of Americans (and more than 40% of young people) experience money dysmorphia. That is, even though they're doing objectively okay financially, they constantly feel as if they're falling behind. Financial experts agree that thanks to a firehose of unrealistic images and often dubious money advice online, it's increasingly common for people to have a distorted sense of how well they're actually doing when it comes to money. Or take the idea of productivity dysmorphia, popularized by author Anna Codrea-Rado. In a widely shared essay, she outed herself as a sufferer, revealing that despite working frantically and fruitfully, she never feels that she's done enough.
"When I write down everything I've done since the beginning of the pandemic (pitched and published a book, launched a media awards, hosted two podcasts), I feel overwhelmed. The only thing more overwhelming is that I feel like I've done nothing at all," she wrote back in 2021. Which means she did all that in just over a year and still feels inadequate. That's crazy. But it's not uncommon to drive ourselves so relentlessly. In Harvard Business Review, Jennifer Moss, author of The Burnout Epidemic, cites a Slack report showing that half of all desk workers say they rarely or never take breaks during the workday. She calls this kind of "toxic productivity" a common sentiment in today's work culture.

10 signs of success

Altogether, this evidence paints a picture of a nation that is pretty terrible at gauging and celebrating success. The roots of the issue obviously run deep in our culture and economy. Reorienting our collective life to help us all recognize that there is such a thing as enough is beyond the scope of this column. But in the meantime, neuroscience can help you take a small step toward greater mental peace by reminding you that you're probably doing better than you sometimes feel you are. Especially, Le Cunff stresses, if you notice these signs of maturity, growth, and balance in your life.

You celebrate small wins.
You try again after failing.
You pause before reacting.
You take breaks without guilt.
You recover from setbacks faster.
You ask for help when you need it.
You're kind to yourself when you make mistakes.
You notice patterns instead of judging them.
You make decisions based on values, not pressure.
You're more curious than anxious about what's next.
A neuroscientist and a writer agree: Practice becoming

Writer Kurt Vonnegut once advised a young correspondent, "Practice any art (music, singing, dancing, acting, drawing, painting, sculpting, poetry, fiction, essays, reportage), no matter how well or badly, not to get money and fame, but to experience becoming, to find out what's inside you, to make your soul grow." In other words, artists agree with neuroscientists. We're all works in progress. You're always going to be in the middle of becoming who you are. You may as well learn to appreciate yourself and the process along the way. We often feel like we need to reach just one more milestone before we can feel successful. But the time to celebrate isn't when you've arrived at success (none of us ever fully gets there); it's at every moment of growth and wisdom along the journey.

By Jessica Stillman

This article originally appeared in Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.


2026-01-27 09:00:00| Fast Company

January arrives with a familiar hangover. Too much food. Too much drink. Too much screen time. And suddenly social media is full of green juices, charcoal supplements, foot patches, and seven-day liver resets, all promising to purge the body of mysterious toxins and return it to a purer state. In the first episode of Strange Health, a new visualized podcast from The Conversation, hosts Katie Edwards and Dr. Dan Baumgardt put detox culture under the microscope and ask a simple question: Do we actually need to detox at all? Strange Health explores the weird, surprising, and sometimes alarming things our bodies do. Each episode takes a popular health or wellness trend, viral claim, or bodily mystery and examines what the evidence really says, with help from researchers who study this stuff for a living. Edwards, a health and medicine editor at The Conversation, and Baumgardt, a general practitioner and lecturer in health and life sciences at the University of Bristol, share a long-standing fascination with the body's improbabilities and limits, plus a healthy skepticism for claims that sound too good to be true. This opening episode dives straight into detoxing. From juice cleanses and detox teas to charcoal pills, foot pads, and coffee enemas, Edwards and Baumgardt watch, wince, and occasionally laugh their way through some of the internet's most popular detox trends. Along the way, they ask what these products claim to remove, how they supposedly work, and why feeling worse is often reframed online as a sign that a detox is working. The episode also features an interview with Trish Lalor, a liver expert from the University of Birmingham, whose message is refreshingly blunt. "Your body is really set up to do it by itself," she explains. The liver, working alongside the kidneys and gut, already detoxifies the body around the clock. For most healthy people, Lalor says, there is no need for extreme interventions or pricey supplements.
That does not mean everything labeled detox is harmless. Lalor explains where certain ingredients can help, where they make little difference, and where they can cause real damage if misused. Real detoxing looks less like a sachet or a foot patch and more like hydration, fiber, rest, moderation, and giving your liver time to do the job it already does remarkably well. If you're buying detox patches and supplements, then it's probably your wallet that is about to be cleansed, not your liver. Strange Health is hosted by Katie Edwards and Dan Baumgardt. The executive producer is Gemma Ware, with video and sound editing by Sikander Khan. Artwork is by Alice Mason. Edwards and Baumgardt talk about two social media clips in this episode, one from 30.forever on TikTok and one from velvelle_store on Instagram. Listen to Strange Health via any of the apps listed above, download it directly via our RSS feed, or find out how else to listen here. A transcript is available via the Apple Podcasts or Spotify apps. Katie Edwards is a commissioning editor for health and medicine and host of the Strange Health podcast at The Conversation. Dan Baumgardt is a senior lecturer at the School of Psychology and Neuroscience at the University of Bristol. This article is republished from The Conversation under a Creative Commons license. Read the original article.


2026-01-27 09:00:00| Fast Company

The Grammy Awards return February 1 at a pivotal moment for the music industry, one shaped by trending Latin artists, resurgent rock legends, and even charting AI acts. To unpack what will make this year's broadcast distinctive, Recording Academy CEO Harvey Mason Jr. shares how Grammy winners are chosen, and how music both reflects and influences the broader business marketplace. This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Robert Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

This year's Grammy Awards come at an intriguing inflection point for the music business. I mean, the music business is always changing, but I was looking at your Album of the Year nominees, which feature a bunch of mega artists: Justin Bieber, Tyler the Creator, Lady Gaga, Kendrick Lamar, Bad Bunny. How much do Grammy nominees reflect the marketplace?

The Grammy nominees are meant to reflect the marketplace, and that's our hope, but it really reflects the voters' will. And you don't know what's going to resonate with the voting body year over year. We have roughly 15,000 voting members. Those members are all professional music people, whether they're writers or arrangers or producers or artists. So they're the peers of the people that are being nominated. Sometimes they surprise you and they vote for something that I wasn't thinking of, and sometimes they are right down the middle. But the hope is that the nominations are a direct and unencumbered reflection of what the voters appreciate and want to vote for.

And in this sort of more fragmented media ecosystem . . . do the biggest artists have the same kind of cultural sway, or is the cultural impact more diffuse?

It's debatable. . . .
I'm sure everyone has an opinion, but the big artists are always going to be impactful and important and shift the direction of music. And there's always going to be a new class of creators coming up.

KPop Demon Hunters [is] the animated band [from] this breakthrough film, the most-watched movie ever on Netflix. But the [soundtrack] album charted No. 1 on Billboard also. Did that surprise you? Are there any messages in that about music and where it's going in the future?

It didn't surprise me, because it was really, really good. And the message that it sends is you can come from anywhere, any country, any medium. You can come off a streaming platform, off a show, off of a garage studio. And if your music resonates, it's going to be successful. It's going to find an audience. And that's what's exciting to me right now about music: the diverse places where you're finding it being created and sourced from. And also, the accessibility to audiences. You don't have to record a record and then hopefully it gets mixed and mastered and hopefully somebody releases it and markets it the right way. You can make something and put it out. And if it creates excitement . . . people are going to love it and gravitate toward it.

One of the bands that ended up putting up big streaming numbers was the Velvet Sundown, an AI-based artist. I'm curious, is there going to be a point where AI acts have their own Grammy category? Are there any award restrictions on artists who use AI in their music now? I know there was a lot of tumult about that with the Oscars last year with The Brutalist.

AI is moving so darn fast. . . . Month to month it's doing new things and getting better and changing what it's doing. So we're just going to have to be very diligent and watch it and see what happens. My perspective is always going to be to protect the human creators, but I also have to acknowledge that AI is definitely a tool that's going to be used.
People like me or others in the studios around the world are going to be figuring out, How can I use this to make some great music? So for now, AI does not disqualify you from being able to submit for a Grammy. There are certain things that you have to abide by and there are certain rules that you have to follow, but it does not disqualify you from entering.

You're a songwriter, you're a producer. Are you using AI in your own stuff?

I am. I'm fine to admit that I am using it as a creative tool. There are times when I might want to hear a different sound or some different instrumentation. . . . I'm not going to be the creator that ever relies on AI to create something from scratch, because what I love more than anything in the world is making music, being able to sit down at a piano and come up with something that represents my feelings, my emotions, what I'm going through in my life, my stories. So I don't think I'll ever be that person that just relies on a computer or software or platform to do that for me. But I do think, much like auto-tune, or like a drum machine, or like a synthesizer, there are things that can enhance what I'm trying to get from here out to here. And if those are things that come in that form, I think we're all going to be ultimately taking advantage of them. But we have to do it thoughtfully. We have to do it with guardrails. We have to do it respectfully. What is the music being trained on? Are there the right approvals? Are artists being remunerated properly? Those are all things that we have to make sure are in place.

So, let me ask you about Latin music. I know the Latin Recording Academy split off from the Recording Academy 20 years ago or so. Do you rethink that these days? Latin music is all over the mainstream charts, and plenty of acts are getting Grammy nominations. Should Latin music be separated out?

The history of it is a little different.
We were representing Latin music on the main show, and the popularity of it demanded that we have more categories. In order to feature more categories and honor the full breadth of the different genres of Latin music, we created the Latin Grammy so they could have that spotlight. Currently, members of the Latin Academy are members of the U.S. Academy. So we've not set aside the Latin genres. We've not tried to separate them. We've only tried to highlight them and lift those genres up. As you know, in the U.S. show we feature Latin categories, we feature many Latin artists, and that will be the same this year, maybe more so, especially with the Bad Bunny success. So in no way does that try to separate the genres. And I think we'll see some more of that in the future as other genres and other regions continue to make their music even more globally known. It's not just about music that's made in one country, right? At least it shouldn't be. It should be about music everywhere in the world.

Instead of narrowing, you might have . . . additional or supplemental academies or projects so that you have that expertise in those new and growing areas across the globe?

Absolutely. We're going to have to continue to expand our membership. In order for us to honor all the different music that's being made now, which is more than ever, and music coming from more places than ever, our membership has to be reflective of that. Just like, I don't know what type of music you're a fan of, but I wouldn't ask you, if you didn't know everything about classical, to go into the classical categories and say, "What did you think was the best composing?" [There are] so many categories you wouldn't be able to evaluate other than to say, "Oh, I recognize that name. Let me vote for that." And that's what we can't have. We have to have people that know the genres.
And you're seeing K-pop, you're seeing Afrobeats, you're seeing Latin, you're seeing growth in the Middle East, you're seeing growth coming out of India. There are so many great artists and so many great records. And you're hearing a blend of genres where you're seeing Western artists interact or collaborate with artists from different parts of the world. That's what's happening. You can't argue it. You can't deny it. You can't pretend that it's not what's going on.

