The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause. Nvidia VP of healthcare Kimberly Powell, one of Fast Company's AI 20 honorees, has led the company's health efforts for 17 years, giving her a big head start on understanding how to turn AI's potential to improve our well-being into reality. Since it's likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she's in the right place to have an impact.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?

Drug discovery is an enormous problem. It's a 10-year journey at best. It costs several billions to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind's] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease. So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. It allowed the world to see that it's possible to represent the world of drugs in a computer.

And the world of drugs really starts with human biology. DNA is represented. After you take a sample from a human, you put it through a sequencing machine, and what comes out is a 3 billion-character sequence of letters: A's, C's, T's, and G's. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.

So how will this new approach end up giving us breakthrough drugs?

If you look at the history of drug discovery, we've been kind of circling around the same targets (the target is the thing that causes the disease in the first place) for a very long time. And we've largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It's probably multiple targets. And that's why cancer is so hard, because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently.

Once we've cracked the biology, and we've understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist's potential training or even their imagination. It's a near-infinite search space. These generative models can open our aperture.
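Powell's point that both DNA and proteins can be handled as plain character sequences is concrete enough to sketch in code. The snippet below is a minimal illustration of that idea only, using a toy character-level vocabulary I made up for this example; it is not Nvidia's or DeepMind's actual pipeline, and real genomic models use far more sophisticated tokenizers and architectures to cope with billion-character inputs.

```python
# Minimal sketch: treating a DNA string as tokens for a sequence model.
# Illustrative only; real genomic models use richer tokenizers (e.g., k-mers)
# and are engineered for sequences billions of characters long.

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}

def tokenize_dna(sequence: str) -> list[int]:
    """Map each nucleotide character to an integer token ID."""
    return [VOCAB[base] for base in sequence.upper() if base in VOCAB]

if __name__ == "__main__":
    sample = "ACGTTGCAACGT"
    print(tokenize_dna(sample))  # [0, 1, 2, 3, 3, 2, 1, 0, 0, 1, 2, 3]
```

The same pattern would apply to proteins, with the vocabulary swapped for the 20 amino-acid letters.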
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.

We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we're [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. So things like context length (context length is how much information you can put into a prompt) have to be figured out for these long proteins and DNA strings. We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That's why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That's in the area of really understanding biology better.

Can you say more about the company you're working with that is using digital twins to simulate an expensive clinical trial before the trial begins?

ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don't have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol. Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or enough capabilities to take off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors.

How might AI agents impact healthcare?

You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we're going to need these physical agents, which I would call robots, that can actually perform physical tasks. You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.

Nvidia sells chips, but I think what I've heard in your comments is a whole tech stack, including in healthcare. There are models, there are software layers, things like that.

I've been at the company 17 years working on healthcare, and it's not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools. And a model is never done; you have to be constantly improving it. Through every usage of that model, you're learning something, and you've got to make sure that that agent or model is continuously improving. We've got to create whole computing infrastructure systems to serve that.
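The trial-simulation idea Powell describes can be made concrete with a toy example. The sketch below is not ConcertAI's model; every figure in it (enrollment size, dropout probability, required number of evaluable patients) is an invented assumption. It only shows, via a small Monte Carlo loop, the kind of question a clinical-trial digital twin is built to answer: how often would a trial end with enough patients still on protocol?

```python
import random

# Toy Monte Carlo sketch of one question a clinical-trial "digital twin"
# might answer. All figures below are invented for illustration.

def trial_succeeds(enrolled: int, dropout_prob: float, needed: int) -> bool:
    """Simulate one trial: each patient independently stays or drops out."""
    remaining = sum(1 for _ in range(enrolled) if random.random() > dropout_prob)
    return remaining >= needed

def estimate_success_rate(enrolled=300, dropout_prob=0.25, needed=220, runs=10_000):
    """Estimate how often the trial retains at least `needed` patients."""
    wins = sum(trial_succeeds(enrolled, dropout_prob, needed) for _ in range(runs))
    return wins / runs

if __name__ == "__main__":
    print(f"Estimated chance of retaining enough patients: {estimate_success_rate():.1%}")
```

A real simulation would layer in site location, transportation access, and patient mix, the factors Powell mentions, but the structure of the question is the same.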
Amid an uncertain economy (the growth of AI, tariffs, rising costs), companies are pulling back on hiring. As layoffs increase, the labor market cools, and unemployment ticks up, we're seeing fewer people quitting their jobs. The implication: Many workers will be job hugging and sitting tight in their roles through 2026. Put more pessimistically: Employees are going to feel stuck where they are for the foreseeable future. In many cases, that means staying in unsatisfying jobs.

Gallup's 2025 State of the Global Workforce report shows that employee engagement has fallen to 21%. And a March 2025 study of 1,000 U.S. workers by advisory and consulting firm Fractional Insights showed that 44% of employees reported feeling workplace angst, despite often showing intent to stay. So if these employees are hugging their current roles, it's not an act of affection. It's often out of desperation.

"Being a job hugger means you're feeling anxious, insecure, more likely to stay but also more likely to want to leave," says Erin Eatough, chief science officer and principal adviser at Fractional Insights, which applies organizational psychology insights to the workplace. "You often see a self-protective response: 'Nothing to see here, I'm doing a good job, I'm not leaving.'"

This performative behavior can be psychologically damaging, especially in a culture of layoffs. "If I was scared of losing my job I'd try everything to keep it: complimenting my boss, staying late, going to optional meetings, being a good organizational citizen," says Anthony Klotz, professor of organizational behavior at the UCL School of Management in London. "But we know that when people aren't loving their jobs but are still going above and beyond, that it's a one-way trip to burnout."

The tight squeeze

In cases where jobs aren't immediately under threat, the effects of hugging are more likely to be slow burning. When an employee's only motivation is to collect a consistent paycheck, discretionary effort drops. They're less productive. Engagement takes a huge hit. Over time, that gradually chips away at their well-being. "Humans want to feel useful, that they care about the work they're doing, and that they're investing their time well," Eatough says. "When efforts are low, that can impact a person's sense of value."

The effects stretch beyond the workplace, too. Frustrated and reluctant stayers can quickly end up in a vicious cycle, Klotz says, noting, "When you're in a situation that feels like it's sucking life out of you, you end up ruminating about how depleting it is, then end up so tired that you don't have energy for restorative activities outside of work. So it's this downward spiral: you begin your workday even more depleted."

Longer term, job hugging stunts growth. "When you're looking out for yourself, rather than the team or organization, your investment in working relationships begins to break down," Eatough says. "Over time, staying in that situation means you're more likely to become deeply cynical, which hurts the individual and their career trajectory."

When hugging becomes clinging

Feeling stuck is nothing new. At some point in their careers, most workers will be in a situation where if they could leave for a better role, they would, says Klotz, who predicted the Great Resignation. But what distinguishes job hugging is that it's anxiously clinging to a role during unfavorable labor markets. It's not that employees don't want to quit; it's that they can't. "It's human nature that when there's a threat of any sort that we move away from it and towards stability," Klotz says.
Your job represents that stability. And currently, it's not a great time to switch jobs. There are few options for job huggers. The first is speaking up and working with a manager to improve the situation. But this might be unlikely for employees who feel trapped or lack motivation in the first place. Klotz says cognitive reframing can help: focusing purely on the positive aspects of a draining role, such as a friendly team, and tuning out the rest. Finally, slowly backing away from extra tasks (in other words, quiet quitting) could mean workers can redraw work-life boundaries, in the interim at least. Otherwise, beyond Stoic philosophy or a benevolent boss, there is little choice but to wait it out.

In some cases, a job hugger may eventually turn it around, ease their grip, and become quietly content in their role. But more often, wanting to quit leads to actually quitting. In effect, job hugging is damage control: hanging on until the situation changes. "I think we'll see some people be resilient, wait it out, and find another role," Klotz says. "But there'll be others in the quagmire of struggling with the exhaustion of spending eight hours a day in a job they don't like."
The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, midsize data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.

Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers. In space, sunshine, which solar panels can convert into electricity, is abundant and reliable. On November 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space.

So instead of beaming power back to Earth, the constellation would beam data back to Earth. For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.

As a technology entrepreneur, I applaud Google's ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.

The mathematics of disaster

Space debris, the collection of defunct human-made objects in Earth's orbit, is already affecting space agencies, companies, and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites. Space debris travels at hypersonic speeds of approximately 17,500 mph in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.

Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX's Starlink. The Starlink network has more than 7,500 satellites providing global high-speed internet. The U.S. Space Force actively tracks more than 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these instruments to identify and track reliably.

In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the U.S. and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.

The orbital shell Google's project targets, a sun-synchronous orbit approximately 400 miles above Earth, is a prime location for uninterrupted solar energy. At this orbit, the spacecraft's solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.
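The blueberry-and-anvil comparison above can be sanity-checked with a rough back-of-envelope calculation. The figures below (a roughly 2-gram fragment, a 35-kilogram anvil, a 180-meter drop) are my own illustrative assumptions, not numbers from the article; the point is only that the two energies land in the same ballpark.

```python
# Rough back-of-envelope check of the blueberry-vs-anvil comparison.
# Assumed figures: ~2 g fragment at orbital speed vs. ~35 kg anvil
# dropped from about 180 m. Both come out near 60 kJ.

debris_mass_kg = 0.002               # ~2 g, roughly blueberry-size
orbital_speed_ms = 17_500 * 0.447    # 17,500 mph in m/s (~7,800 m/s)
debris_energy_j = 0.5 * debris_mass_kg * orbital_speed_ms ** 2

anvil_mass_kg = 35.0
drop_height_m = 180.0
anvil_energy_j = anvil_mass_kg * 9.81 * drop_height_m

print(f"Debris kinetic energy: {debris_energy_j / 1000:.0f} kJ")
print(f"Falling anvil energy:  {anvil_energy_j / 1000:.0f} kJ")
```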
As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this scenario, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.

Implications for Project Suncatcher

Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly in a formation with a radius of just 1 kilometer, each node spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 mph while separated by gaps about the distance you need to safely brake on the highway.

This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to think and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.

But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites; this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind. To add to this complexity, streams of particles and magnetic fields from the sun, known as space weather, can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.

When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but also send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.

The importance of active avoidance

To prevent crashes and cascades, satellite companies could adopt a "leave no trace" standard, which means designing satellites that do not fragment, release debris, or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with reflexes that autonomously detect and dance through a debris field. Suncatcher's current design doesn't include these active avoidance capabilities.

In the first six months of 2025 alone, SpaceX's Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds. Today's object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.

Equipping Suncatcher with active collision-avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity.
Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.

Paying rent for the orbit

Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission's completion. This typically involves a controlled deorbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite's altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.

However, the rule does not address the debris already in space, nor any future debris from accidents or mishaps. To tackle these issues, some policymakers have proposed a use tax for space debris removal. A use tax, or orbital-use fee, would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris-removal missions, which capture and remove the most dangerous pieces of junk.

Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris-removal programs can help keep low Earth orbit open for business.

Mojtaba Akhavan-Tafti is an associate research scientist at the University of Michigan. This article is republished from The Conversation under a Creative Commons license. Read the original article.