With a ring of massive columns and seating for more than 70,000 people, the proposed new stadium for the Washington Commanders NFL team may give President Donald Trump the football stadium of his dreams. Renderings of the design have just been released, and the aesthetic is right in line with an architectural style the Trump administration has been championing with increasing passion. The stadium is an oval of dozens of white columns recalling the classical-influenced architecture of some of the capital's most recognizable buildings.

[Image: HKS]

Designed by the architecture firm HKS, the stadium's concept takes one of the most familiar elements of classical architecture, the column, and turns it into the defining feature of the building. Cascading around the stadium's perimeter with heights upward of 100 feet, the columns are topped by a concave ellipse, also a marble-like white color, that holds a semi-transparent roof. Glass between the columns offers views into the structure, which would glow from within during events.

The stadium's design is a reflection of the Trump administration's desire for an official embrace of the classical and neoclassical architecture that has typified federal buildings since the earliest days of the republic. Drawing influence from the columns and pediments abundant in the buildings of ancient Greece and the Roman Empire, this classical style can be seen at the White House, the Capitol Building, and the Supreme Court, among many other buildings across the city and country. It's a style the Trump administration has sought to reassert as the federal standard, issuing executive orders in both of Trump's terms to make classical architecture the preferred style for new federal projects. The group behind this effort, the National Civic Art Society, has been working for decades to convince national leaders that traditional design, not the modernism that emerged in the postwar years, is the most appropriate style for federal architecture.

[Image: HKS]

Trump's architectural preferences

Trump, the longtime real estate developer, has made this a key part of his agenda. His desire for more classical architecture has trickled down through Trump appointees to the agency that oversees the design of all significant projects in Washington, D.C., the National Capital Planning Commission (NCPC). NCPC chair Will Scharf, appointed to the commission in July 2025 by Trump, recently called on officials from the Washington Commanders to ensure the new stadium "incorporates architectural features in keeping with the capital more generally: classical, neoclassical elements." Speaking at a recent NCPC meeting, Scharf said, "I think really going back to classical antiquity, arenas and stadiums have played a vital role in the urban cityscape . . . I think there were several decades in American history where we unfortunately really got away from that, much to the detriment of the fan experience."

[Image: HKS]

The stadium would sit on the site of the demolished Robert F. Kennedy Memorial Stadium, the team's former home. That site aligns directly with Washington, D.C.'s L'Enfant Plan, the city's 1790 urban plan that crisscrossed the area with diagonal axes and carefully configured views of buildings like the Capitol, the Washington Monument, and the White House. Trump's preference for classical architecture in the capital is beginning to influence development in the city. Under the NCPC's authority, the stadium project could be its most imposing expression.
The design from HKS shows a willingness to play along. In a press release, HKS global venues director Mark A. Williams says the project's design was guided by its "significance of place."

[Image: HKS]

"Monumental in presence, grounded in the L'Enfant Plan, and scaled to the urban fabric of the District, the stadium design will be a bold civic landmark that carries the city's architectural legacy forward in a way that is confident, dynamic, and unmistakably Washington, D.C.," he says.

It could also become unmistakably Trump, as the president's architectural preferences reverberate through the capital. (Trump has also called for the stadium to be named after himself.) Construction on the Commanders stadium could start in 2027, with an opening date in 2030, a year after the constitutionally mandated end of Trump's final term.
An ailing astronaut returned to Earth with three others on Thursday, ending their space station mission more than a month early in NASA's first medical evacuation. SpaceX guided the capsule to a middle-of-the-night splashdown in the Pacific near San Diego, less than 11 hours after the astronauts exited the International Space Station. Their first stop was a hospital for an overnight stay.

"Obviously, we took this action (early return) because it was a serious medical condition," NASA's new administrator Jared Isaacman said following splashdown. "The astronaut in question is fine right now, in good spirits and going through the proper medical checks."

It was an unexpected finish to a mission that began in August and left the orbiting lab with only one American and two Russians on board. NASA and SpaceX said they would try to move up the launch of a fresh crew of four; liftoff is currently targeted for mid-February.

NASA's Zena Cardman and Mike Fincke were joined on the return by Japan's Kimiya Yui and Russia's Oleg Platonov. Officials have refused to identify the astronaut who developed the health problem last week or explain what happened, citing medical privacy. While the astronaut was stable in orbit, NASA wanted them back on Earth as soon as possible to receive proper care and diagnostic testing. The entry and splashdown required no special changes or accommodations, officials said, and the recovery ship had its usual allotment of medical experts on board.

The astronauts emerged from the capsule, one by one, within an hour of splashdown. They were helped onto reclining cots and then whisked away for standard medical checks, waving to the cameras. Isaacman monitored the action from Mission Control in Houston, along with the crew's families.

NASA decided a few days ago to take the entire crew straight to a San Diego-area hospital following splashdown and even practiced helicopter runs there from the recovery ship. The astronaut in question will receive in-depth medical checks before flying with the rest of the crew back to Houston on Friday, assuming everyone is well enough. Platonov's return to Moscow was unclear.

NASA stressed repeatedly over the past week that this was not an emergency. The astronaut fell sick or was injured on Jan. 7, prompting NASA to call off the next day's spacewalk by Cardman and Fincke, and ultimately resulting in the early return. It was the first time NASA cut short a spaceflight for medical reasons; the Russians had done so decades ago.

Spacewalk preparations did not lead to the medical situation, Isaacman noted, but for anything else, "it would be very premature to draw any conclusions or close any doors at this point." It's unknown whether the same thing could have happened on Earth, he added.

The space station has gotten by with three astronauts before, sometimes even with just two. NASA said it will be unable to perform a spacewalk, even for an emergency, until the arrival of the next crew, which has two Americans, one French and one Russian astronaut.

Isaacman said it's too soon to know whether the launch of station reinforcements will take priority over the agency's first moonshot with astronauts in more than a half-century. The moon rocket moves to the pad this weekend at Florida's Kennedy Space Center, with a fueling test to be conducted by early next month. Until all that is completed, a launch date cannot be confirmed; the earliest the moon flyaround could take off is Feb. 6.
For now, NASA is working in parallel on both missions, with limited overlap of personnel, according to Isaacman. "If it comes down to a point in time to where we have to deconflict between two human spaceflight missions, that is a very good problem to have at NASA," he told reporters.

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute's Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

Marcia Dunn, AP aerospace writer
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on how and why AI will grow from something that chats to something that works in 2026. I also focus on a new privacy-focused AI platform from the maker of Signal, and on Google's work on e-commerce agents.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

Our relationship with AI is changing rapidly

Anthropic kicked off 2026 with a bang. It announced Cowork, a new version of its powerful Claude Code coding assistant that's built for non-developers. As I wrote on January 14, Cowork lets users put AI agents, or teams of agents, to work on complex tasks. It offers all the agentic power of Claude Code while being far more approachable for regular workers (it runs within the Claude chatbot desktop app, not in the terminal as Claude Code does). It also runs at the file system level on the user's computer, and can access email and third-party work apps such as Teams.

Cowork is very likely just the first product of its kind that we'll see this year. Some have expressed surprise that OpenAI hasn't already offered such an agentic tool to consumers and enterprises; it probably will, as may Google and Microsoft, in some form. I think we'll look back at Cowork a year from now and recognize it as a real shift in the way we think about and use AI for our work tasks. AI companies have been talking for a long time about viewing AI as a coworker or copilot, but Cowork may make that concept a reality for many nontechnical workers.

OpenAI's ChatGPT, which debuted in late 2022, gave us a mental picture of how consumer AI would look and act. It was just a little dialog box, mainly nonvisual and text-based. This shouldn't have been too surprising. After all, the chatbot interface was built by a bunch of researchers who spent their careers teaching machines how to understand words and text. Functionally, early chatbots could act like a search engine. They could write or summarize text, or listen to problems and give supportive feedback. But their outputs were driven almost entirely by their pretraining, in which they ingested and processed a compressed version of the entire internet. Using ChatGPT was something like text messaging with a smart and informed friend.

Large language models do way, way more than that today. They understand imagery, they reason, they search the web, and they call external tools. But the AI labs continue to try to push much of their new functionality through that same chatbot-style interface. It's time to graduate from that mindset and put more time and effort into meeting human users where they live; that is, delivering intelligence through lots of different interfaces that match the growing number of tasks where AI can be profitably applied.

That will begin to happen in 2026. AI will expand into a full workspace, or into a full web browser (à la OpenAI's Atlas), and will eventually disappear into the operating system. As we saw at this year's Consumer Electronics Show, it may go further: An AI tool may come wrapped in a cute animal form factor. Interacting with AI will become more flexible, too. You'll see more AI systems that accept real-time voice input this year.
Anthropic added a feature to (desktop) Claude in October that lets users talk to the chatbot in natural language after hitting a keyboard shortcut. And Wispr Flow lets users dictate into any input window by holding down a function key.

Signal creator Moxie Marlinspike launches encrypted AI chatbot

People talk to AI chatbots about all kinds of things, including some very personal matters. Personally, I hesitate to discuss just anything with a chatbot, because I can't be sure that my questions and prompts, and the answers the AI gives, won't somehow be shared with someone who shouldn't see them. My worry is well-founded, it turns out. Last year a federal court ordered OpenAI to retain all user inputs and AI outputs, because they may be relevant to discovery in a copyright case. And there's always a possibility that unencrypted conversations stored by an AI company could be stolen as part of a hack. Meanwhile, the conversational nature of chatbots invites users to share more and more personal information, including the sensitive kind. In short, there's a growing need for provably secure and private AI tools.

Now the creator of the popular encrypted messaging platform Signal, who goes by the pseudonym Moxie Marlinspike, has created an end-to-end encrypted AI chatbot called Confer. The new platform protects user prompts and AI responses, and makes it impossible to connect users' online identities with their real-world ones. Marlinspike told Ars Technica that Confer users have better conversations with the AI because they're empowered to speak more freely.

When I signed up for a Confer account, the first thing the site asked was that I set up a six-digit encryption passkey, which would be stored within the secure element of my computer (or phone), which hackers can't access. Another key is created for the Confer server, and both keys must match before the user can interact with the chatbot. Confer is powered by open-source AI models it hosts, not by models accessed from a third party.

Confer's developers are serious about supporting sensitive conversations. After I logged in, I saw that Confer displays a few suggested conversations near the input window, such as "practice a difficult conversation," "negotiate my salary," and "talk through my mental health."

Google is building the foundations of agentic e-commerce

Agents, of course, will do more than work tasks. They'll be involved in more personal things, too, like online shopping. Right now, human shoppers move through a long process of searching, clicking, data input, and payment-making in order to buy something. Merchants and brands hope that AI agents will one day do a lot of that work on the human's behalf. But for this to work, a whole ecosystem of agents, consumer-shopping sites, and brand back-end systems must be able to exchange information in standardized ways. For example, a consumer might want to use a shopping agent to buy a product that comes up in a Google AI Mode search, so the shopping agent would need to shake hands with the Google platform and the product merchant, and they'd both have to connect through a payment agent in the middle.

Google is off to a strong start on building the agentic infrastructure that will make this all work. On January 11, the company announced a new Universal Commerce Protocol (UCP) that creates a common language for consumers, agents, and businesses to ensure that all types of commerce actions are standardized and secure.
The protocol relieves all parties involved from having to create an individual agent handshake for every consumer platform and tech partner. UCP now standardizes three key aspects of a transaction: It offers a standard for guaranteeing the identity of the buyer and seller, a standard for the buying workflow, and a standard for the payment, which uses Google's Agent Payment Protocol (AP2) extension. Vidhya Srinivasan, Google's VP/GM of Advertising & Commerce, tells Fast Company that this is just the beginning, and that the company intends to build out the UCP to support more parts of the sales process, including related-product suggestions and post-purchase support. Google developed UCP with merchant platforms including Shopify, Etsy, Target, and Walmart. UCP is endorsed by American Express, Mastercard, Stripe, Visa, and others.

More AI coverage from Fast Company:

Why Anthropic's new Cowork could be the first really useful general-purpose AI agent

Governments are considering bans on Grok's app over AI sexual image scandal

Docusign's AI will now help you understand what you're signing

CES 2026: The year AI got serious

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
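Returning to the UCP item above: to make the idea of a standardized agent-to-merchant handshake more concrete, here is a minimal, purely illustrative Python sketch of a checkout message covering the three aspects the protocol is said to standardize (party identity, the buying workflow, and the payment). Every class and field name here, including PartyIdentity, CheckoutWorkflow, PaymentAuthorization, and to_json, is an assumption invented for illustration; none of it is Google's published UCP schema or the AP2 specification.

# Hypothetical sketch of a UCP-style checkout message, NOT Google's actual schema.
# It only illustrates the three aspects the newsletter says UCP standardizes:
# party identity, the buying workflow, and a payment authorization.
from dataclasses import dataclass, asdict
import json


@dataclass
class PartyIdentity:
    """Who is transacting: the buyer's agent and the merchant, with verifiable IDs."""
    agent_id: str          # the shopping agent acting for the consumer (placeholder)
    merchant_id: str       # the seller's registered identity (placeholder)
    signature: str         # placeholder for a cryptographic attestation


@dataclass
class CheckoutWorkflow:
    """The buying step the agent wants to perform, plus the approved cart."""
    cart: list             # items the consumer approved
    step: str              # e.g. "quote", "confirm", "purchase"


@dataclass
class PaymentAuthorization:
    """A payment instruction, which in UCP would ride on the AP2 extension."""
    method: str            # e.g. "card_token"
    amount: float
    currency: str
    mandate: str           # placeholder for the consumer's signed approval


@dataclass
class CheckoutMessage:
    identity: PartyIdentity
    workflow: CheckoutWorkflow
    payment: PaymentAuthorization

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Example: a shopping agent asking a merchant to confirm a one-item purchase.
msg = CheckoutMessage(
    identity=PartyIdentity("agent-123", "merchant-456", "sig-placeholder"),
    workflow=CheckoutWorkflow(cart=[{"sku": "SKU-1", "qty": 1}], step="confirm"),
    payment=PaymentAuthorization("card_token", 29.99, "USD", "mandate-placeholder"),
)
print(msg.to_json())

The point is simply that once some message format like this is agreed upon, any agent and any merchant can exchange it without building a bespoke integration for each partner, which is the problem UCP is meant to solve.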
In 2025, employers cited artificial intelligence as the rationale for nearly 55,000 layoffs at companies like Amazon and Microsoft. And with the new year barely underway, we're already seeing a new crop of AI-related job cuts. Citigroup is cutting over a thousand jobs, according to Bloomberg, and in a memo this week, CEO Jane Fraser warned of more layoffs later this year. "Over time, we can expect automation, AI and further process simplification to reshape how work gets done," she added. Meanwhile, Meta is conducting more layoffs in its virtual reality division, cutting about 1,500 jobs as part of a broader strategic shift to invest further in AI.

Given these reports, many observers have been quick to believe that workers are losing jobs to generative AI. But there is little evidence that automation is displacing workers en masse just yet, or even drastically changing how businesses operate. According to a recent analysis by the Brookings Institution and the Budget Lab at Yale University, the proportion of workers in jobs that are ripe for AI disruption has remained steady since ChatGPT launched in 2022. What's more, there are all kinds of forces shaping the labor market right now, including changes in immigration policy that have curbed employment growth.

"What we're seeing overall right now is consistent with a labor market that has been hit with a lot of uncertainty in the macroeconomic environment," says Martha Gimbel, executive director of the Budget Lab. "The immigration changes are making it really hard to interpret changes in the jobs numbers. And if you look for any signs of changes that seem to be due to AI, those are not yet showing up."

Still, a number of experts have pointed to AI adoption to explain the recent spike in labor productivity, which measures hourly worker output. In the third quarter of 2025, labor productivity climbed by 4.9%, the highest increase in two years. Some economists have speculated this is a sign that the growing adoption of AI across companies may in fact be boosting efficiency, despite the slow rate of hiring in 2025.

But Gimbel argues productivity is too noisy a metric to accurately capture the impact of AI, particularly over just one quarter. "Productivity growth will be really high or really low in one quarter," she says. "And if it fits their preferred narrative, people will jump on that." A single quarter of high productivity should not be seen as a clear indicator of anything, she says, in part because labor productivity is imprecise and vulnerable to measurement error.

That has been especially true in recent years because the pandemic threw a wrench in the system that is still being sorted out. "You had all these issues with productivity measurement in the pandemic because people largely fired low-wage workers who tend to be less productive," Gimbel says. "So you saw this huge jump in productivity, and then it came back down as those people were hired back. Was there actually a change in productivity in the economy? No."

Research also shows that while AI might improve efficiency to some extent, it creates additional work that can hamper productivity. A new Workday report found that nearly 40% of the time saved by using AI is lost to rework; on average, workers spend 1.5 weeks annually correcting or otherwise fixing AI-generated content.

As for whether AI is eliminating jobs, that's not evident in jobs data just yet, and unemployment figures do not reflect any notable changes either.
While the most recent jobs report does indicate a marked decline in employment across specific sectors, namely professional and business services, Gimbel says it's too soon to say whether any of that is actually due to AI. She says it might take an economic downturn to really see that shift. "The place to start looking for the impacts of AI is when we have a recession," she says. "That is usually when technological change really takes off."

All that said, Gimbel is closely watching sectors that have high adoption of AI, which includes not just tech but also the arts and education. Even if concerns about AI usage in the workplace are overblown at the moment, workers will certainly start to feel the effects of it in the years to come. "It would be unusual for a new technology to have no impact on the labor market," Gimbel says. "We just still need to find out how fast, and where."
I was born an only child, but now I have a twin. He's an exact duplicate of me, down to my clothing, my home, my facial expressions, and even my voice. I built him with AI, and I can make him say whatever I want. He's so convincing that he could fool my own mother. Here's how I built him, and what AI digital twins mean for the future of people.

Deepfake yourself

From the moment generative AI was born, criminals started using it to trick people. Deepfakes were one of the first widespread uses of the tech. Today, they're a scourge to celebrities and even everyday teenagers, and a massive problem for anyone interested in the truth. As criminals were leveraging deepfakes to scam and blackmail people, though, a set of white-hat companies started quietly putting similar digital cloning technologies to use for good.

Want to record a training video for your team, and then change a few words without needing to reshoot the whole thing? Want to turn your 400-page Stranger Things fanfic into an audiobook without spending 10 hours of your life reading it aloud? Digital cloning tech has you covered. You basically deepfake yourself, cloning your likeness, your voice, or both, and then mobilize your resulting digital twin to create mountains of content just as easily as you'd prompt ChatGPT or Claude.

I wanted to try the tech out for myself. So I fired up today's best AI cloning tools and made Digital Tom, a perfect digital copy of myself.

Hear me out

I decided to start by cloning my voice. A person's voice feels like an especially intimate, personal thing. Think back on a loved one you've lost. I'll bet you can remember exactly how they sounded. You can probably even remember a specific, impactful conversation you had with them.

Cloning a voice, with all the nuance of accent, speaking style, pitch, and breath, is also a tough technical challenge. People are fast to forgive crappy video, chalking up errors or glitchiness in deepfakes to a spotty internet connection or an old webcam. Content creators everywhere produce bad video every day without any help from AI! A bad AI voice sounds way creepier, though. It's easier to land in the uncanny valley unless every aspect of a voice clone is perfect.

To avoid that fate, I turned to ElevenLabs. The company has been around since 2022 but has exploded in popularity over the last year, with its valuation doubling to more than $6.6 billion. ElevenLabs excels at handling audio; if you've listened to an AI-narrated audiobook, interacted with a speaking character in a video game, or heard sound effects in a TV show or movie, it's a good bet you've inadvertently experienced ElevenLabs' tech.

To clone my own voice, I shelled out $22 for a Creator account. I then uploaded about 90 minutes of recordings from my YouTube channel to the ElevenLabs interface. The company says you can create a professional voice clone with as little as 30 minutes of audio. You can even create a basic clone with just 10 seconds of speech. ElevenLabs makes you record a consent clip in order to ensure that you're not trying to deepfake a third party.

In a few hours, my professional voice clone was ready. Using it is shockingly easy. ElevenLabs provides an interface that looks a lot like ChatGPT. You enter what you want your clone to say, press a button, and in seconds, your digital twin voice speaks the exact words you typed out.

I had my digital twin record an audio update about this article for my Fast Company editor. He described it as "terrifyingly realistic." Then, I sent a clip to my mom.
She responded, "It would have fooled me."

In my natural habitat

I was extremely impressed with the voice clone. I could use it right away to spin up an entire AI-generated podcast, prank my friends, or maybe even hack into my bank. But I didn't just want a voice. I wanted a full Digital Tom that I could bend to my will. For the next stage in my cloning experiment, I turned to Synthesia.

I originally met Synthesia's CEO Victor Riparbelli in 2019 at a photo industry event, when his company was a scrappy startup. Today, it's worth $4 billion. Synthesia specializes in creating digital Avatars: essentially video clones of a real person. Just as with ElevenLabs, you can type text into an interface and get back a video of your avatar reading it aloud, complete with realistic facial expressions and lip movement.

I started a Synthesia trial account and set about creating my personal avatar. Synthesia asked for access to my webcam, and then recorded me reading a preset script off the screen for about 10 minutes. A day later, my avatar was ready. It was a perfect digital clone of my likeness, right down to the shirt I was wearing on the day I made it and my (overly long) winter haircut. It even placed me in my natural habitat: my comfy, cluttered home office.

As with my voice clone, I could type in any text I could imagine, and in about 10 minutes I would receive a video of Digital Tom reading it aloud. Synthesia even duplicated the minutiae of my presenting style, right down to my smile and tendency to look to the camera every few seconds when reading a script from the screen. If I recorded a video with Digital Tom for my YouTube channel, I'm certain most users would have no idea it's a fake.

The value of people

My experiment shows that today's AI cloning technology is extremely impressive. I could easily create mountains of audio content with my clone from ElevenLabs, or create an entire social media channel with my Digital Tom as the star. The bigger question, though, is why I'd want to.

Sure, there are tons of good use cases for working with a digital twin. Again, Synthesia specializes in creating corporate training videos. Companies can rapidly create specialized teaching materials without renting a studio, hiring a videographer, and shooting countless takes of a talking head in front of a green screen. They can also edit them by altering a few written words; for example, if a product feature changes subtly. For their part, ElevenLabs does a brisk business in audiobooks and customer service agents. But they also provide helpful services, like creating accessible, read-aloud versions of web pages for visually impaired users.

But my experiment convinced me that there are fewer good reasons to work with your digital twin. In an internet landscape where anyone can spin up a thousand-page website in a few minutes using Gemini, and compelling videos are a dime a dozen thanks to Sora, content is cheap. There are not many good ways left for users to sort the wheat from the chaff. Personality is one of the few remaining ones. People like to follow people. For creators, developing a personal relationship with your audience is the best way to keep them consuming your content, instead of cheaper (and often better) AI alternatives. Compromising that by shoving an undisclosed digital twin in their face, however convincing it might be, seems like the fastest possible way to ruin that relationship.
People want to hear from the meat-based Thomas Smith, even if the artificial intelligence version never forgets a word or gets interrupted by his chickens mid-video. I could see using one of ElevenLabs' or Synthesia's built-in characters to create (fully disclosed) content. But I can't see putting my digital twins to real-world use.

I can see one use for the tech, though. It struck me during my experiment that the best reason to build an AI digital twin isn't to replace your voice or likeness, but to preserve it. I sometimes lose my voice, and it's incredibly disruptive to my content production. If I were ever affected by a vocal disorder and lost it permanently, it's nice to know that there's a highly realistic backup sitting on ElevenLabs' servers.

It's also cool to think that in 10 years, when I'm inevitably older and wrinklier than today, I could bring my 2026 Digital Tom back to life. He'd be frozen in time, a perfect replica of my appearance, mannerisms, and environment in this specific moment, recallable for all eternity.

I won't be using Digital Tom to augment my YouTube channel, get into podcasting, or read my kids a bedtime story anytime soon. But there's a strange part of me that's happy he's out there, just in case.
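A practical footnote for readers who want to try the same type-text-and-get-audio workflow described above: the point-and-click ElevenLabs interface can also be driven from a script through the company's public text-to-speech REST API. The sketch below assumes the documented v1 text-to-speech endpoint; the API key, voice ID, model name, and output filename are placeholders to replace with values from your own ElevenLabs account, and the exact request fields should be checked against the current documentation.

# Minimal sketch: drive an ElevenLabs voice clone from a script.
# ELEVENLABS_API_KEY and the voice ID are placeholders from your own account;
# check the current ElevenLabs docs for the exact fields your plan supports.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]   # your account's API key
VOICE_ID = "your-voice-id-here"              # the cloned voice's ID from the dashboard

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
payload = {
    "text": "This is Digital Tom, recording an update for my editor.",
    "model_id": "eleven_multilingual_v2",    # assumed model name; pick one your plan offers
}
headers = {
    "xi-api-key": API_KEY,
    "Content-Type": "application/json",
}

# The endpoint returns raw audio bytes (MP3 by default), which we save to disk.
response = requests.post(url, json=payload, headers=headers, timeout=60)
response.raise_for_status()

with open("digital_tom_update.mp3", "wb") as f:
    f.write(response.content)
print("Saved digital_tom_update.mp3")

Run with your own credentials, this writes a short MP3 clip spoken in the cloned voice, the same kind of output the web interface produces.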