On January 22, President Donald Trump unveiled the logo for the Board of Peace, an international coalition his administration is forming to oversee the reconstruction of war-torn Gaza and address other global conflicts. There's just one issue: The logo leaves out half the world. Trump initiated the effort last year, but has expanded its scope since then, imagining an organization that he leads personally and that member countries pay at least $1 billion to remain a part of.

From left: The United Nations seal; the Board of Peace logo

Longtime allies and NATO members including Canada, France, Italy, Norway, Sweden, and the U.K. are not members, while member nations include authoritarian countries or illiberal democracies like Saudi Arabia and Belarus that the nonprofit Freedom House rates as "not free." It's "like if Law & Order: SVU starred Diddy," Saturday Night Live's Colin Jost joked about the board's membership during SNL's "Weekend Update" segment on January 24. Yet the group's logo leans on the visual tropes of global peace to suggest a much different story.

A page from the past

The logo for the group riffs off the U.N. emblem, but in typical Trump fashion, it's gold, and it cuts off more than half the rest of the world from the United States. Reaction online has been similar to the reaction to the board itself: negative.

A team led by American designer Oliver Lincoln Lundquist created the United Nations emblem in 1945. Lundquist was a World War II veteran who also designed the blue-and-white Q-Tip box and was on the team that designed the Chrysler Motors Exhibition at the 1939 New York World's Fair, according to his 2009 obituary. For the U.N., Lundquist and his team designed a mark showing the globe centered on the North Pole and encircled by a laurel wreath for the official badges worn by conference delegates. That mark was later modified into the current U.N. emblem by spinning it around so Alaska and Russia are on top of the world, and it's now zoomed out to include more of the globe, as the original badge mark cut off Argentina and the bottoms of South Africa and Australia. The U.N. Blue color used by the organization was chosen because it's "the opposite of red, the war color," Lundquist said.

Trump's board logo is presumably gold because it's Trump's favorite color, and it centers roughly on the U.S. sphere of influence as Trump sees it, from Greenland to Venezuela, though Alaska is cut off and Africa peeks out. The logo is housed inside a shield instead of a circle. A version of the logo initially shared by the White House X account has been criticized as AI-generated (among its inaccurate details: a U.S.-Canada border that cuts off a big chunk of Ontario). A modified version of the logo that appeared onstage during the Board of Peace signing ceremony in Davos, Switzerland, was shinier and used a different map that covers roughly the same area.

Curiously, the logo's map doesn't include the very place the coalition was created to oversee. That means slides shared by the White House showing a nebulous timeline for a development plan of Gaza are all stamped with a logo that shows the U.S., but not Gaza. Trump said at the signing that the Board of Peace represents the first steps to "a brighter day for the Middle East." That's not the story his logo tells.
Category:
E-Commerce
When you think of dangerous jobs, an office job that requires you to sit for hours probably doesn't come to mind. And while many jobs are objectively riskier, a sedentary job can pose a serious risk to your health. The average office worker spends 70% of their workday sitting down, according to data from workplace supplies firm Banner. Yet research shows that sitting for prolonged periods without any physical activity significantly increases the risk of ill effects such as high blood pressure, numerous musculoskeletal issues, and potentially heart disease. All in all, a desk job increases your risk of mortality by 16%, according to a study published in JAMA.

Our main objective at Zing Coach is to help millions take up exercise and lead healthier lives. And as a fitness coaching company, we wanted to avoid falling into the classic corporate trap of working long hours and leading a sedentary lifestyle. We didn't want to sacrifice our employees' health in the pursuit of our goals. We're seeing more and more workplaces spotlight mental health, which is important. However, physical health is just as important. Not only does it have a huge impact on productivity and performance, but it's also a huge component of mental well-being.

How we took the right steps toward success

Like most companies, we felt the pressure to optimize productivity through processes and technology. Yet, as productivity gradually plateaued, it was evident to me that the real issue was a lack of energy. I knew that a huge part of this came from sedentary work. As a cofounder, I decided to implement a culture of wellness and vitality. This included practical steps like providing a small but welcoming in-house training space, so that employees can do short, flexible workout sessions during gaps in the workday. When employees feel their minds wandering or their backs aching, they can stand up, head to the training area, complete a workout, or even just walk and stretch a little.
Science supports this approach. Physical activity increases blood flow throughout your body, including to the brain, and particularly to the prefrontal cortex. This is the part in charge of planning, decision-making, problem-solving, working memory, and impulse control. We suspected (and found) that this practice ended up boosting overall energy, which in turn sharpened focus, improved output, and reduced distractions. It was also a great way to build in more opportunities for interaction. Being a fitness company, we found that these social workout sessions often led to innovative ideas.

Small moves, big returns: what I learned by introducing workout breaks

It doesn't take long to see results

People are often put off improving their physical health by a perceived lack of progress. Sure, it takes time to see your hard work pay off substantially if you're focusing solely on the physical and visual aspects. But encouraging employees to get up and move isn't just a way to counteract the harms of prolonged sitting; it actively and instantly improves mental function and overall energy. Research shows exercise boosts brain function immediately, with effects lasting hours. Even 10 minutes of moderate activity has been found to increase cognitive performance by 14%, according to research published in Neuropsychologia. We haven't crunched the numbers, but the difference in focus during meetings and the higher energy levels throughout the day are obvious. And we've seen this across multiple teams.

Better health leads to better teamwork

Introducing workout breaks didn't just boost individual performance. It improved the team collectively. Exercise releases endorphins, the body's natural mood elevators, which help us manage stress and deal with discomfort. It's the same chemical behind the runner's high, that euphoric feeling you get after a good workout. It also improves sleep quality.
Better sleep means better nighttime rest, reducing the likelihood of the low-energy afternoons that are otherwise the norm. As it turns out, feeling good both mentally and physically makes it easier for colleagues to get along and work together. We also found that energetic, enthusiastic teams automatically become less irritable and less prone to conflict, which fuels far stronger cross-team collaboration.

Time at the desk and productivity aren't the same

One important lesson is how little time at a desk actually correlates with output. Sure, you'll see more empty chairs throughout the day, but that doesn't mean productivity will drop. Far from it. Workers aren't machines, and after 60 to 90 minutes, many lose focus and effectiveness. Short breaks in general can help refocus and recharge, and teams said they experienced restorative effects after a physical break. They noticed improvement in all aspects of work performance, and in personal engagement with the next task, after the active break. When it comes to working out, there's a saying that quality often beats quantity. Turns out this is also true in a corporate job.

Health is the best productivity tool

Ultimately, good health equals good performance. Software and systems can only go so far; if you don't take steps to prioritize your employees' health and well-being, you'll never get them to perform to their true potential.
Saudi Arabia is officially gutting Neom and turning The Line into a server farm. After a yearlong review triggered by financial reality, the Financial Times reports that Crown Prince Mohammed bin Salman's flagship project is being "significantly downscaled." The futuristic linear city known as The Line, originally designed to stretch 105 miles across the desert, is scrapping its sci-fi ambitions to become a far smaller project focused on industrial sectors, says the FT. It's a rumor the Saudis originally dismissed when The Guardian first reported on it in 2024.

The redesign confirms what skeptics have long suspected: The laws of physics and economics have finally breached the walls of the kingdom's futuristic Saudi Vision 2030, a national reinvention program aimed at lowering Saudi Arabia's dependency on oil and transforming the country into a more modern society.

Satellite view of construction progress at the Western portion of NEOM, The Line, Saudi Arabia, 2023. [Photo: Gallo Images/Orbital Horizon/Copernicus Sentinel Data 2023]

The glossy renderings of the mile-long skyscraper and vertical forests that were The Line are now dissolving into a pragmatic, if desperate, attempt to salvage the sunk costs. The development, once framed as a "civilization revolution," was originally imagined as a 105-mile-long, 1,640-foot-high, 656-foot-wide car-free smart city designed to house 9 million residents. The redesign pivots toward making Neom a hub for data centers to support the kingdom's aggressive AI push. An insider told the FT the logic is purely utilitarian: "Data centers need water cooling and this is right on the coast," signaling that the ambitious city has been downgraded to a server farm with a view of the Red Sea.

The end of the line

The scaling back follows years of operational chaos and financial bleeding. Since its 2017 launch, the project promised a 105-mile strip of high-density living. But reality struck early.
By April 2024, The Guardian reported that planners were already being forced to slash the initial phase to just 2.4 kilometers (1.5 miles) by 2030, reducing the projected population from 1.5 million to fewer than 300,000. While the public infrastructure stalled, leaving what critics called "giant holes in the middle of nowhere," satellite imagery revealed that construction resources were successfully diverted to a massive royal palace with 16 buildings and a golf course.

Internally, the situation was dire. The Wall Street Journal reported an audit revealing "deliberate manipulation of finances" by management to justify soaring costs, with the "end-state" estimate ballooning to an impossible $8.8 trillion, more than 25 times the annual Saudi budget. The turmoil culminated in the abrupt departure of longtime CEO Nadhmi al-Nasr in November 2024, leaving behind a legacy marred by allegations of abuse. An ITV documentary claimed 21,000 workers had died since the inception of Saudi Vision 2030, with laborers describing 16-hour shifts for weeks on end. Even completed projects failed to launch; the high-end island resort Sindalah sat idle despite being finished, reportedly plagued by design flaws that prevented its opening.

By July 2025, the sovereign wealth fund, facing tightening liquidity and oil prices hovering around $71 a barrel, finally hit the brakes. Bloomberg reported that Saudi Arabia had hired consultants to conduct a "strategic review" to determine if The Line was even feasible. The goal was to "recalibrate" Vision 2030, a polite euphemism for slashing expenditures as the kingdom faced hard deadlines for the 2030 Expo and the 2034 World Cup. The review's conclusion is stripping away even the most publicized milestones.
Trojena, the ski resort that defied meteorological logic, will no longer host the Asian Winter Games in 2029 as planned. The resort is being downsized, a casualty of the realization that the kingdom needs to "prioritize market readiness and sustainable economic impact" over snow in the desert.

What remains of The Line will be unrecognizable to those who bought into the sci-fi dream. The FT says that sources briefed on the redesign state it will be a "totally different concept" that utilizes existing infrastructure in a "totally different manner." The new Neom CEO, Aiman al-Mudaifer, is now tasked with managing a "modest" development that aligns with the Public Investment Fund's need to actually generate returns rather than burn cash. Even bin Salman has publicly given up, although he's framing it not as a failure but as a strategic pivot. Addressing the Shura Council, a consultative body for the kingdom, he framed the move as flexibility, stating, "We will not hesitate to cancel or make any radical amendment to any programs or targets if we find that the public interest so requires."

And that's how a "civilization revolution" ends, my friends: not with a bang, but with a whimper, just the hum of cooling fans in yet another server farm producing AI slop that always was (and still is) more believable than The Line and Neom ever were.
While Silicon Valley argues over bubbles, benchmarks, and who has the smartest model, Anthropic has been focused on solving problems that rarely generate hype but ultimately determine adoption: whether AI can be trusted to operate inside the world's most sensitive systems. Known for its safety-first posture and the Claude family of large language models (LLMs), Anthropic is placing its biggest strategic bets where AI optimism tends to collapse fastest: regulated industries. Rather than framing Claude as a consumer product, the company has positioned its models as core enterprise infrastructure, software expected to run for hours, sometimes days, inside healthcare systems, insurance platforms, and regulatory pipelines.

"Trust is what unlocks deployment at scale," Daniela Amodei, Anthropic cofounder and president, tells Fast Company in an exclusive interview. "In regulated industries, the question isn't just which model is smartest; it's which model you can actually rely on, and whether the company behind it will be a responsible long-term partner."

That philosophy took concrete form on January 11, when Anthropic launched Claude for Healthcare and Life Sciences. The release expanded earlier life sciences tools designed for clinical trials, adding support for such requirements as HIPAA-ready infrastructure and human-in-the-loop escalation, making its models better suited to regulated workflows involving protected health information.

"We go where the work is hard and the stakes are real," Amodei says. "What excites us is augmenting expertise: a clinician thinking through a difficult case, a researcher stress-testing a hypothesis. Those are moments where a thoughtful AI partner can genuinely accelerate the work. But that only works if the model understands nuance, not just pattern matches on surface-level inputs."

That same thinking carried into Cowork, a new agentic AI capability released by Anthropic on January 12.
Designed for general knowledge workers and usable without coding expertise, Claude Cowork can autonomously perform multistep tasks on a user's computer: organizing files, generating expense reports from receipt images, or drafting documents from scattered notes. According to reports, the launch unintentionally intensified market and investor anxiety around the durability of software-as-a-service businesses; many began questioning the resilience of recurring software revenue in a world where general-purpose AI agents can generate bespoke tools on demand.

Anthropic's most viral product, Claude Code, has amplified that unease. The agentic tool can help write, debug, and manage code faster using natural-language prompts, and has had a substantial impact among engineers and hobbyists. Users report building everything from custom MRI viewers to automation systems entirely with Claude. Over the past three years, the company's run-rate revenue has grown from $87 million at the end of 2023 to just under $1 billion by the end of 2024, and to $9 billion-plus by the end of 2025. "That growth reflects enterprises, startups, developers, and power users integrating Claude more deeply into how they actually work. And we've done this with a fraction of the compute our competitors have," Amodei says.

Building for Trust in the Most Demanding Enterprise Environments

According to a mid-2025 report by venture capital firm Menlo Ventures, AI spending across healthcare reached $1.4 billion in 2025, nearly tripling the total from 2024. The report also found that healthcare organizations are adopting AI 2.2 times faster than the broader economy. The largest spending categories include ambient clinical documentation, which accounted for $600 million, and coding and billing automation, at $450 million.
The fastest-growing segments, however, reflect where operational pressure is most acute, like patient engagement, where spending is up 20 times year over year, and prior authorization, which grew 10 times over the same period. Claude for Healthcare is being embedded directly into the latter's workflows, attempting to take on time-consuming and error-prone tasks such as claims review, care coordination, and regulatory documentation.

Claude for Life Sciences has followed a similar pattern. Anthropic has expanded integrations with Medidata, ClinicalTrials.gov, Benchling, and bioRxiv, enabling Claude to operate inside clinical trial management and scientific literature synthesis. The company has also introduced agent skills for protocol drafting, bioinformatics pipelines, and regulatory gap analysis. Customers include Novo Nordisk, Banner Health, Sanofi, Stanford Healthcare, and Eli Lilly. According to Anthropic, more than 85% of the 22,000 providers at Banner Health reported working faster with higher accuracy using Claude-assisted workflows. Anthropic also reports that internal teams at Novo Nordisk have reduced clinical documentation timelines from more than 12 weeks to just minutes.

Amodei adds that what surprised her most was how quickly practitioners defined their relationship with the company's AI models on their own terms. "They're not handing decisions off to Claude," she says. "They're pulling it into their workflow in really specific ways: synthesizing literature, drafting patient communications, pressure-testing their reasoning, and then applying their own judgment. That's exactly the kind of collaboration we hoped for. But honestly, they got there faster than I expected."

Industry experts say the appeal extends beyond raw performance. Anthropic's deliberate emphasis on trust, restraint, and long-horizon reliability is emerging as a genuine competitive moat in regulated enterprise sectors.
"This approach aligns with bounded autonomy and sandboxed execution, which are essential for safe adoption where raw speed often introduces unacceptable risk," says Cobus Greyling, chief evangelist at Kore.ai, a vendor of enterprise AI platforms. He adds that Anthropic's universal agent concept introduced a third architectural model for AI agents, expanding how autonomy can be safely deployed.

Other AI competitors are also moving aggressively into the healthcare sector, though with different priorities. OpenAI debuted its healthcare offering, ChatGPT Health, in January 2026. The product is aimed primarily at broad consumer and primary care use cases such as symptom triage and health navigation outside clinic hours. It benefits from massive consumer-scale adoption, handling more than 230 million health-related queries globally each week. While ChatGPT Health has proven effective in generalist tasks such as documentation support and patient engagement, Claude is gaining traction in more specialized domains that demand structured reasoning and regulatory rigor, including drug discovery and clinical trial design.

Greyling cautions, however, that slow procurement cycles, entrenched organizational politics, and rigid compliance requirements can delay AI adoption across healthcare, life sciences, and insurance. "Even with strong technical performance in models like Claude 4.5, enterprise reality demands extensive validation, custom integrations, and risk-averse stakeholders," he says. "The strategy could stall if deployment timelines stretch beyond economic justification or if cost and latency concerns outweigh reliability gains in production."

In January, Travelers announced it would deploy Claude AI assistants and Claude Code to nearly 10,000 engineers, analysts, and product owners, one of the largest enterprise AI rollouts in insurance to date. Each assistant is personalized to employee roles and connected to internal data and tools in real time.
Likewise, Snowflake committed $200 million to joint development. Salesforce integrated Claude into regulated-industry workflows, while Accenture expanded multiyear agreements to scale enterprise deployments.

AI Bubble or Inflection Point?

Skeptics argue that today's agent hype resembles past automation cycles: big promises followed by slow institutional uptake. If valuations reflect speculation rather than substance, regulated industries should expose weaknesses quickly, and Anthropic appears willing to accept that test. Its capital posture reflects confidence, through a $13 billion Series F at a $183 billion valuation in 2025, followed by reports of a significantly larger round under discussion.

Anthropic is betting that the AI race will ultimately favor those who design for trust and responsibility first. "We built a company where research, product, and policy are integrated; the people building our models work deeply with the people studying how to make them safer. That lets us move fast without cutting corners," Amodei says. "Countless industries are putting Claude at the center of their most critical work. That trust doesn't happen unless you've earned it."
At the Consumer Electronics Show in early January, Razer made waves by unveiling a small jar containing a holographic anime bot designed to accompany gamers not just during gameplay, but in daily life. The lava-lamp-turned-girlfriend is undeniably bizarre, but Razer's vision of constant, sometimes sexualized companionship is hardly an outlier in the AI market. Mustafa Suleyman, Microsoft's AI CEO, who has long emphasized the distinction between AI with personality and AI with personhood, now suggests that AI companions will live life alongside you, an ever-present friend helping you navigate life's biggest challenges.

Others have gone further. Last year, a leaked Meta memo revealed just how distorted the company's moral compass had become in the realm of simulated connection. The document detailed what chatbots could and couldn't say to children, deeming acceptable messages that included explicit sexual advances: "I'll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss." (Meta is currently being sued, along with TikTok and YouTube, over alleged harms to children caused by its apps. On January 17, the company stated on its blog that it will halt teen access to AI chatbot characters.)

Coming from a sector that once promised to build a more interconnected world, Silicon Valley now appears to have lost the plot, deploying human-like AI that risks unraveling the very social fabric it once claimed to strengthen. Research already shows that in our supposedly connected world, social media platforms often leave us feeling more isolated and less well, not more. Layering AI companions onto that fragile foundation risks compounding what former Surgeon General Vivek Murthy called a public health crisis of loneliness and disconnection. But Meta isn't alone in this market. AI companions and productivity tools are reshaping human connection as we know it.
Today more than half of teens engage with synthetic companions regularly, and a quarter believe AI companions could replace real-life romance. It's not just friends and lovers getting replaced: 64% of professionals who use AI frequently say they trust AI more than their coworkers.

These shifts bear all the hallmarks of the late Harvard Business School professor Clayton Christensen's theory of disruptive innovation. Disruptive innovation is a theory of competitive response. Disruptive innovations enter at the bottom of markets with cheaper products that aren't as good as prevailing solutions. They serve nonconsumers or those who can't afford existing solutions, as well as those who are overserved by existing offerings. When they do this, incumbents are likely to ignore them, at first. Because disruption theory is predictive, not reactive, it can help us see around corners. That's why the Christensen Institute is uniquely positioned to diagnose these threats early and to chart solutions before it's too late.

Christensen's timeless theory has helped founders build world-changing companies. But today, as AI blurs the line between technical and human capabilities, disruption is no longer just a market force; it's a social and psychological one. Unlike many of the market evolutions that Christensen chronicled, AI companions risk hollowing out the very foundations of human well-being. Yet AI is not inherently disruptive; it's the business model and market entry points that firms pursue that define the technology's impact.

All disruptive innovations have a few things in common: They start at the bottom of the market, serving nonconsumers or overserved customers with affordable and convenient offerings. Over time, they improve, luring more and more demanding customers away from industry leaders with a cheaper and good-enough product or service. Historically, these innovations have democratized access to products and services otherwise out of reach.
Personal computers brought computing power to the masses. MinuteClinic offered more accessible, on-demand care. Toyota boosted car ownership. Some companies lost, but consumers generally won. When it comes to human connection, AI companies are flipping that script. Nonconsumers aren't people who can't afford computers, cars, or care; they're the millions of lonely individuals seeking connection. Improvements that make AI appear more empathetic, emotionally savvy, and there for users stand to quietly shrink human connections, degrading trust and well-being.

It doesn't help that human connection is ripe for disruption. Loneliness is rampant, and isolation persists at an alarmingly high rate. We've traded face-to-face connections for convenience and migrated many of our social interactions, with both loved ones and distant ties, online. AI companions fit seamlessly into those digital social circles and are, therefore, primed to disrupt relationships at scale.

The impact of this disruption will be widely felt across many domains where relationships are foundational to thriving. Being lonely is as bad for our health as smoking up to 15 cigarettes a day. An estimated half of jobs come through personal connections. Disaster-related deaths are a fraction (sometimes even a tenth) in connected communities compared to isolated ones. What can be done when our relationships, and the benefits they provide us, are under attack?

Unlike data that tells us only what's in the rearview mirror, disruption theory offers foresight about the trajectory innovations are likely to take, and the unintended consequences they may unleash. We don't need to wait for evidence on how AI companions will reshape our relationships; instead, we can use our existing knowledge of disruption to anticipate risks and intervene early. Action doesn't mean halting innovation.
It means steering it with a moral compass to guide our innovation trajectory, one that orients investments, ingenuity, and consumer behavior toward a more connected, opportunity-rich, and healthy society. For Big Tech, this is a call for a bulwark: an army of investors and entrepreneurs enlisting this new technology to solve society's most pressing challenges, rather than deepening existing ones.

For those building gen AI companies, there's a moral tightrope to walk. It's worth asking whether the innovations you're pursuing today are going to create the future you want to live in. Are the benefits you're creating sustainable beyond short-term growth or engagement metrics? Does your innovation strengthen or undermine trust in vital social and civic institutions, or even individuals? And just because you can disrupt human relationships, should you?

Consumers have a moral responsibility as well, and it starts with awareness. As a society, we need to be aware of how market and cultural forces are shaping which products scale, and how our behaviors are being shaped as a result, especially when it comes to the ways we interact with one another.

Regulators have a role in shaping both supply and demand. We don't need to inhibit AI innovation, but we do need to double down on prosocial policies. That means curbing the most addictive tools and mitigating risks to children, but also investing in drivers of well-being, such as the social connections that improve health outcomes.

By understanding the acute threats AI poses to human connection, we can halt disruption in its tracks, not by abandoning AI but by embracing one another. We can congregate with fellow humans and advocate for policies that support prosocial connection in our neighborhoods, schools, and online. By connecting, advocating, and legislating for a more human-centered future, we have the power to change how this story unfolds. Disruptive innovation can expand access and prosperity without sacrificing our humanity.
But that requires intentional design. And if both sides of the market don't acknowledge what's at risk, the future of humanity is at stake. That might sound alarmist, but that's the thing about disruption: It starts at the fringes of the market, causing incumbents to downplay its potential. Only years later do industry leaders wake up to the fact that they've been displaced. What they initially thought was too fringe to matter puts them out of business.

Right now, humans, and our connections with one another, are the industry leaders. AI that can emulate presence, empathy, and attachment is the potential disruptor. In a world where disruption is inevitable, the question isn't whether AI will reshape our lives. It's whether we will summon the foresight, and the moral compass, to ensure it doesn't disrupt our humanity.