2025-12-04 10:00:00 | Fast Company

The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause. Nvidia VP of healthcare Kimberly Powell, one of Fast Company's AI 20 honorees, has led the company's health efforts for 17 years, giving her a big head start on understanding how to turn AI's potential to improve our well-being into reality. Since it's likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she's in the right place to have an impact.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?

Drug discovery is an enormous problem. It's a 10-year journey at best. It costs several billion dollars to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind's] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease.

So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. It allowed the world to see that it's possible to represent the world of drugs in a computer. And the world of drugs really starts with human biology. DNA is represented. After you take a sample from a human, you put it through a sequencing machine and what comes out is a 3-billion-character sequence of letters: A's, C's, T's, and G's. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.

So how will this new approach end up giving us breakthrough drugs?

If you look at the history of drug discovery, we've been kind of circling around the same targets (the target is the thing that causes the disease in the first place) for a very long time. And we've largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It's probably multiple targets. And that's why cancer is so hard, because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently. Once we've cracked the biology, and we've understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist's potential training or even their imagination. It's a near-infinite search space. These generative models can open our aperture.
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.

We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we're [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. So things like context length (context length is how much information you can put into a prompt) have to be figured out for these long proteins and DNA strings.

We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That's why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That's in the area of really understanding biology better.

Can you say more about the company you're working with that is using digital twins to simulate an expensive clinical trial before the trial begins?

ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don't have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol. Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or enough capabilities to take off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors.

How might AI agents impact healthcare?

You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we're going to need these physical agents, which I would call robots, that can actually perform physical tasks. You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.

Nvidia sells chips, but I think what I've heard in your comments is a whole tech stack, including in healthcare. There are models, there are software layers, things like that.

I've been at the company 17 years working on healthcare, and it's not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools. And a model is never done; you have to be constantly improving it. Through every usage of that model, you're learning something, and you've got to make sure that that agent or model is continuously improving. We've got to create whole computing infrastructure systems to serve that.
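To make the "sequence of characters" idea above concrete, here is a minimal, purely illustrative Python sketch of how a DNA string of A, C, G, and T letters could be mapped to token IDs and split into overlapping windows that fit within a model's context length. This is not Nvidia's or DeepMind's actual tooling; the vocabulary, window size, and overlap values are assumptions chosen only for the example.

```python
# Illustrative sketch only: character-level tokenization of a DNA string and
# chunking into windows a transformer could consume. The vocabulary, context
# length, and overlap below are assumptions, not any vendor's real settings.

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}  # one token per nucleotide letter


def tokenize(sequence: str) -> list[int]:
    """Map each nucleotide character to an integer token ID, skipping anything else."""
    return [VOCAB[base] for base in sequence.upper() if base in VOCAB]


def chunk(tokens: list[int], context_len: int = 8192, overlap: int = 512) -> list[list[int]]:
    """Split a long token stream into overlapping windows no larger than the
    model's context length; a 3-billion-character genome cannot fit in one prompt."""
    step = context_len - overlap
    return [tokens[i:i + context_len] for i in range(0, max(len(tokens) - overlap, 1), step)]


if __name__ == "__main__":
    dna = "ACGT" * 5000  # stand-in for a much longer sequenced genome
    windows = chunk(tokenize(dna))
    print(f"{len(windows)} windows of up to {len(windows[0])} tokens each")
```

The window-and-overlap trick is only a workaround; the tooling and new model architectures Powell describes are aimed at handling these extremely long biological sequences more directly.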



 

2025-12-04 10:00:00 | Fast Company

Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI's API. And so the company turned to post-training research lead Michelle Pokrass to spin up a team to better understand them.

"The AI field is moving so quickly, the power-user use cases of today are really the median-user use cases a year from now, or two years from now," Pokrass says. "It's really important for us to stay on the leading edge and build to where capabilities are emerging, rather than just focusing on what people are using the models for now."

Pokrass, a former software engineer for Coinbase and Clubhouse, came to OpenAI in 2022, fully sold on AI after experiencing the magic of coding tools such as GitHub Copilot. She played key roles in developing OpenAI's GPT-4.1 and GPT-5, and now she focuses on testing and tweaking models based on users who are pushing AI to its limits. Specifically, Pokrass's team works on post-training, a process that helps large language models understand the spirit of user requests. This refining allows ChatGPT to code, say, a fully polished to-do list app rather than just instructions on how to theoretically make one.

"There's been lots of examples of GPT-5 helping with scientific breakthroughs, or being able to discover new mathematical proofs, or working on important biological problems in healthcare, saving doctors and specialists a lot of time," Pokrass says. "These are examples of exactly the kinds of capabilities we want to keep pushing."

Creating a team with this niche focus is unusual among Big Tech companies, which tend to target broad audiences they can monetize at scale through, say, targeted ads. Catering to power users isn't a revenue play, Pokrass says, even if many pay $200 per month for ChatGPT Pro subscriptions. Instead, it's a way to assess the why of AI, with power users pointing to unforeseen opportunities. "With traditional tech, it's usually clear how people will use a product a few years down the road," Pokrass says. "With AI, we're all discovering with our users, live, what exactly is highest utility, and how people can get value out of this."

Eventually, OpenAI figures those use cases will help inform the features that it builds for everyone else. Pokrass gives the example of medical professionals using AI in their decision-making, which in turn could help ChatGPT better understand the kind of medical questions people are asking it (for better or worse). "There's always work for this team, because as we push boundaries for what our models can do, the frontier just gets moved out, and then we start to see an influx of new activity of people using these new capabilities," Pokrass says.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.



 

2025-12-04 10:00:00 | Fast Company

Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their undergraduate days at Stanford University, where, in 2015, they cofounded an incubator called Cardinal Ventures to help students pursue business ideas while still in school. Founding it also gave the Moores an entry point into the broader VC industry. "The thing about starting a startup incubator at Stanford is all the VCs want to meet you, even if you have no idea what you're doing, which we did not back then," Olivia says.

At the time, the app economy was booming, and services around things like food delivery and dating proliferated, recalls Justine. But that energy pales in comparison to the excitement around AI the sisters now experience at Andreessen Horowitz. "There's so many more opportunities in terms of what people are able to build than what we're able to invest in," she says.

To identify the right opportunities, the Moores track business data such as paid conversion rates and closely examine founders' backgrounds: whether they've worked at a cutting-edge AI lab or deeply studied the needs of a particular industry. They attend industry conferences, stay current on the latest AI research papers, and, perhaps most critically, spend significant time testing AI-powered products. That means going beyond staged demos to see what tools can actually do and spotting founders who quickly intuit user needs and add features accordingly. "From using the products, you get a pretty quick, intuitive sense of how much of something is marketing hype," says Olivia, whose portfolio includes supply chain and logistics operations company HappyRobot and creative platform Krea.

The sisters also value Andreessen Horowitz's scale, which allows the firm to stick to its convictions rather than chase trends, and its track record of supporting founders beyond simply investing. (Andreessen Horowitz is reportedly seeking to raise $20 billion to support its AI-focused investments.) "It's most fun to do this job when you can work with the best founders and when you can actually really help them with the core stuff that they're struggling with, they're working on, or striving to do in their business," says Justine, a key early investor in voice-synthesis technology company ElevenLabs.

Though the sisters live together and work at the same firm, where they frequently bounce ideas off each other, they've carved out their own lanes. Olivia focuses more on AI applications, while Justine spends more time on AI infrastructure and foundational models. At this point, they say, it's not unheard of for industry contacts to not even realize they're related. "If I see [her] on a pitch meeting in any given day, that's maybe more of the exception than the rule," Justine says.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.



 

2025-12-04 10:00:00 | Fast Company

What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities? Those are the questions Kyle Fish is wrestling with as Anthropic's first in-house AI welfare researcher. His mandate is both audacious and straightforward: Determine whether models like Claude can have conscious experiences, and, if so, how the company should respond. "We're not confident that there is anything concrete here to be worried about, especially at the moment," Fish says, "but it does seem possible."

Earlier this year, Anthropic ran its first predeployment welfare tests, which produced a bizarre result: Two Claude models, left to talk freely, drifted into Sanskrit and then meditative silence, as if caught in what Fish later dubbed a "spiritual bliss attractor."

Trained in neuroscience, Fish spent years in biotech, cofounding companies that used machine learning to design drugs and vaccines for pandemic preparedness. But he found himself drawn to what he calls pre-paradigmatic areas of potentially great importance: fields where the stakes are high but the boundaries are undefined. That curiosity led him to cofound a nonprofit focused on digital minds, before Anthropic recruited him last year.

Fish's role didn't exist anywhere else in Silicon Valley when he started at Anthropic. "To our knowledge, I'm the first one really focused on it in an exclusive, full-time way," he says. But his job reflects a growing, if still tentative, industry trend: Earlier this year, Google went about hiring post-AGI scientists tasked partly with exploring machine consciousness.

At Anthropic, Fish's work spans three fronts: running experiments to probe model welfare, designing practical safeguards, and helping shape company policy. One recent intervention gave Claude the ability to exit conversations it might find distressing, a small but symbolically significant step. Fish also spends time thinking about how to talk publicly about these issues, knowing that for many people the very premise sounds strange.

Perhaps most provocative is Fish's willingness to quantify uncertainty. He estimates a 20% chance that today's large language models have some form of conscious experience, though he stresses that consciousness should be seen as a spectrum, not binary. "It's a kind of fuzzy, multidimensional combination of factors," he says.

For now, Fish insists the field is only scratching the surface. "Hardly anybody is doing much at all, us included," he admits. His goal is less to settle the question of machine consciousness than to prove it can be studied responsibly and to sketch a road map others might follow.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.



 

2025-12-04 10:00:00 | Fast Company

Rachel Taylor began her career as a creative director in the advertising business, a job that gave her plenty of opportunity to micromanage the final product. "I had control of the script," she remembers. "I could think about the intonation, and I could give the actor notes." That was before she pivoted to helping AI companies shape the personality of their assistants. Rather than handing a digital helper a script, the best she can do is point it in the right direction: The technology sometimes feels like "a toddler that you give a permanent marker to and see what it writes on the wall," she says.

After joining DeepMind cofounder Mustafa Suleyman's startup Inflection AI in 2023, Taylor was one of dozens of staffers who followed Suleyman to Microsoft, where they worked on the consumer version of Copilot. In October, she returned to startup life, departing Microsoft for Sesame, whose CEO, Brendan Iribe, also cofounded VR pioneer Oculus. Sesame has built two talking assistants, Maya and Miles, that are powered by its own AI models. It's also developing a voice-AI-enabled pair of smart glasses. Taylor's arrival coincided with its announcement of a $250 million Series B funding round led by Sequoia.

Though the company isn't yet saying much about its long-term plans, Taylor's responsibilities once again involve keeping AI personas friendly and helpful. She's also steering them away from traits that can be dangerous if users take them too seriously, such as sycophancy. "It's weird how much the study of culture comes into play with thinking all that through," she says of her purview. "It's not simply tech."

Calling consumer AI's current incarnation both magical and primitive, Taylor muses about her grandchildren being impressed someday that she was there at the start. For now, she stresses, "We're just scratching the surface of this new mode of communication."

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.



 

