Bringing a new drug to market usually requires a decade-long, multibillion-dollar journey, with a high failure rate in the clinical trial phase. Nvidia's Kimberly Powell is at the center of a major industry effort to apply AI to the challenge.
"If you look at the history of drug discovery, we've been kind of circling around the same targets for a long time, and we've largely exhausted the drugs for those targets," she says. A target is a biological molecule, often a protein, that's causing a disease. But human biology is extraordinarily complex, and many diseases are likely caused by multiple targets. "That's why cancer is so hard," says Powell. "Because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently."
Nvidia, which in July became the first publicly traded company to cross $4 trillion in market capitalization, is the primary provider of the chips and infrastructure that power large AI models, both within the tech companies developing the models and the far larger number of businesses relying on them. New generative AI models are quite capable of encoding and generating words, numbers, images, and computer code. But much of the work in the healthcare space involves specialized data sets, including DNA and protein structures. The sheer number of molecule combinations is mind-bogglingly big, straining the capacity of language models. Nvidia is customizing its hardware and software to work in that world.
"[W]e have to do a bunch of really intricate data science work to . . . take this method and apply it to these crazy data domains," Powell says. "We're going from language and words that are just short little sequences to something that's 3 billion [characters] long."
Powell, who was recruited by Nvidia to jump-start its investment in healthcare 17 years ago, manages the company's relationships with healthcare giants and startups, trying to translate their business and research problems into computational solutions. Among those partners are 5,000 or so startups participating in Nvidia's Inception accelerator program. "I spend a ton of my time talking to the disrupters," she explains. "Because they're really thinking about what [AI computing] needs to be possible in two to three years' time."
This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Raquel Urtasun is the founder and CEO of self-driving truck startup Waabi as well as a computer science professor at the University of Toronto. Unlike some competitors, Waabi's AI technology is designed to drive goods all the way to their destinations, rather than merely to autonomous vehicle hubs near highways.
Urtasun, one of Fast Company's AI 20 honorees for 2025, spoke with us about the relationship between her academic and industry work, what sets Waabi apart from the competition, and the role augmented reality and simulation play in teaching computers to drive even in unusual road conditions.
This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
Can you tell me a bit about your background and how Waabi got started?
I've been working in AI for the last 25 years, and I started in academia, because AI systems weren't ready for the real world. There was a lot of innovation that needed to happen in order to enable the revolution that we see today.
For the last 15 years, I've been dedicated to building AI systems for self-driving. Eight years ago, I made a jump to industry: I was chief scientist and head of R&D for Uber's self-driving program, which gave me a lot of visibility in terms of what building a world-class program and bringing the technology to market would look like. One of the things that became clear was that there was a tremendous opportunity for a disrupter in the industry, because everybody was going with an approach that was extremely complex and brittle, where you needed to incorporate by hand all the knowledge that the system should have. It was not something that was going to provide a scalable solution.
So a little bit over four years ago, I left Uber to go all in on a different generation of technology. I had deep conviction that we should build a system designed with AI-first principles, where it's a single AI system end-to-end, but at the same time a system that is built for the physical world. It has to be verifiable and interpretable. It has to have the ability to prove the safety of the system, be very efficient, and run onboard the vehicle.
The second core pillar was that the data is as important as the model. You will never be able to observe everything and fully test the system by deploying fleets of vehicles. So we built a best-in-class simulator, where we can actually prove its realism.
And what differentiates your approach from the competition today?
The big difference is that other players have a black-box architecture, where they train the system basically with imitation learning to imitate what humans do. It's very hard to validate and verify and impossible to trace a decision. If the system does something wrong, you can't really explain why that is the case, and it's impossible to really have guarantees about the system.
That's okay for a level two system [where a human is expected to be able to take over], but when you want to deploy level four, without a human, that becomes a huge problem.
We built something very different, where the system is forced to interpret and explain at every fraction of a second all the things it could do, and how good or bad those decisions are, and then it chooses the best maneuver. And then through the simulator, we can learn much better how to handle safety-critical situations, and much faster as well.
How are you able to ensure the simulator works as well as real-world driving?
The goal of the simulator is to expose the self-driving vehicle's full stack to many different situations. You want to prove that under each specific situation, how the system drives is the same as if the situation happens in the real world. So we take all the situations where the Waabi Driver has driven in the real world, and clone them in simulation, and then we see: Did the truck do the same thing?
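To make that comparison concrete, here is a minimal illustrative sketch, not Waabi's actual code: the Scenario format, the simulate() entry point into the driving stack, and the half-meter tolerance are all hypothetical assumptions for the example.

```python
# Hypothetical sketch: replay logged real-world scenarios in simulation and flag
# any where the simulated drive diverges too far from what the truck actually did.
from dataclasses import dataclass


@dataclass
class Scenario:
    scenario_id: str
    real_trajectory: list[tuple[float, float]]  # logged (x, y) positions over time


def simulate(scenario: Scenario) -> list[tuple[float, float]]:
    """Placeholder for running the full driving stack on the cloned scenario."""
    raise NotImplementedError


def max_divergence(real: list[tuple[float, float]],
                   sim: list[tuple[float, float]]) -> float:
    """Largest positional gap, step by step, between the real and simulated drives."""
    return max(
        ((rx - sx) ** 2 + (ry - sy) ** 2) ** 0.5
        for (rx, ry), (sx, sy) in zip(real, sim)
    )


def failing_scenarios(scenarios: list[Scenario], tolerance_m: float = 0.5) -> list[str]:
    """Return IDs of scenarios where simulation and reality disagree beyond tolerance."""
    return [
        s.scenario_id
        for s in scenarios
        if max_divergence(s.real_trajectory, simulate(s)) > tolerance_m
    ]
```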
We also recently unveiled a really exciting breakthrough with mixed-reality testing. The way the industry does safety testing is they bring a self-driving vehicle to a closed course and they expose it to a dozen, maybe two dozen, scenarios that are very simple in order to say it has basic capabilities. It's very orchestrated, and they use dummies in order to test things that are safety critical. It's a very small number of non-repeatable tests.
But you can actually do safety testing in a much better way if you can do augmented reality on the self-driving vehicle. With our truck driving around in a closed course, we can intercept the live sensor data and create a view where there's a mix of reality and simulation, so in real time, as it's driving in the world, it's seeing all kinds of simulated situations as though they were real.
That way, you can have orders of magnitude more tests. You can test all kinds of things that are otherwise impossible, like accidents on the road, a traffic jam, construction, or motorbikes cutting in front of you. You can mix real vehicles with things that are not real, like an emergency vehicle in the opposite lane.
You're also a full professor. Are you still teaching and supervising graduate students?
I do not teach; I obviously do not have time to teach at all. I do have graduate students, but they do their studies at the company. We have this really interesting partnership with the University of Toronto.
If you want to really learn and do research in self-driving, it is a must that you get access to a full product. And that's impossible in academia. So a few years ago, we designed this program where students can do research within the company. It's one of a kind, and to me, this is the future of education for physical AI.
When did you realize the time was ripe for moving from academic research to industry work?
That was about eight and a half years ago. We were at the forefront of innovation, and I saw companies were using our technology, but it was hard for me to understand if we were working on the right things and if there was something that I hadn't thought of that is important when deploying a real product in the real world.
And I decided at the time to join Uber, and I had an amazing almost four years. It blew my mind in terms of how the problem of self-driving is much bigger than I thought. I thought, "Okay, autonomy is basically it," and then I learned about how you need to design the hardware, the software, the systems around safety, etc., in a way that everything is scalable and efficient.
It was very clear to me that end-to-end systems and foundational models would be the thing. And four and a half years in, our rate of hitting milestones really speaks to this technology. It's amazing; to give an example, the first time that we drove in rain, the system had never seen rain before. And it drove with no interventions in rain, even though it never saw the phenomenon before.
That for me was the “aha” moment. I was actually [in the vehicle] with some investors on the track, so it was kind of nerve-racking. But it was amazing to see. I always have very, very high expectations, but it blew my mind what it could do.
As gaming platforms Roblox and Fortnite have exploded in popularity with Gen Alpha, it's no surprise that more than half of children in the U.S. are putting video games high on their holiday wish lists.
The Entertainment Software Association (ESA) surveyed 700 children between the ages of 5 and 17 and found that three in five kids are asking for video games this holiday season. However, the most highly requested gift isn't a console or even a specific game: It's in-game currency.
The survey didn't dig into which currency is proving most popular, but the category as a whole tops the list with a 43% request rate, followed by 39% for a console, 37% for accessories, and 37% for physical games.
A study published by Circana this year revealed only 4% of video game players in the U.S. buy a new game more often than once per month, with a third of players not buying any games at all. Behind this shift is the immense popularity of live service games such as Fortnite and those offered on the Roblox platform.
Both are free to play, which means they have to generate money in other ways. Much of Roblox's $3.6 billion revenue in 2024 was made via in-game microtransactions, particularly through purchases of its virtual currency, Robux. Here, $5 will get you 400 Robux to spend in the game on emotes, character models, and skins, among other items.
Players can also earn currency just by playing, but as with any free-to-play game, the process of earning in-game points will be slow and tedious compared to purchasing them outright.
It's worth noting that while these games often seem innocent enough, about half of parents surveyed by Ygam, an independent U.K. charity dedicated to preventing gaming and gambling harms among young people, noted there are gambling-like mechanisms in the games their child plays, including mystery boxes and loot boxes, which may be harmful to children.
Still, the average parent intends to spend $737 on game-related gifts, ESA reported.
Parents who aren't able, or willing, to drop hundreds on Robux and V-Bucks this holiday may be pleased to learn that more than half of the kids surveyed said they would like to spend more time playing games with their parents, rising to 73% among those ages 5 through 7.
Turns out, the best gift you can give your child is quality time.
Most people say they want to live to a ripe old age. But that isn't really true. What people really want is to live to a ripe old age in good mental and physical health. Some of us actually get to live this dream. These folks are known as super-agers, and they make it well into their 80s not just in decent physical shape, but also with minds at least as sharp as people 30 years younger.
How do they manage it? That's the question Northwestern University researchers have been aiming to answer with a 25-year-long study. It examined the brains and lifestyles of almost 300 super-agers.
As you'd expect, a quarter century of data shows it really helps to be born with lucky biology. The neuroscientists found a number of physical differences between the brains of super-agers and the average person. There isn't much non-scientists can do with that information. We have to make the most of the brains bequeathed to us by our DNA.
Luckily, the researchers also discovered one big difference in behavior that sets apart super-agers who are still going strong into their 80s and beyond. It's something any of us can adopt in our own lives.
Super-agers' brains are different
When you scan or posthumously autopsy the brains of super-agers, they look different than average brains, according to Sandra Weintraub, a Northwestern psychology professor involved in the study. Normal brains generally show some accumulation of the plaques and protein tangles that are characteristic of Alzheimer's disease. Super-agers' brains are largely free of them.
The study also revealed that while the outer layer of the brain, known as the cortex, tends to thin out as we age, it stays thick in super-agers. They also have a different mix of cell types in their brain.
"Our findings show that exceptional memory in old age is not only possible but is linked to a distinct neurobiological profile. This opens the door to new interventions aimed at preserving brain health well into the later decades of life," Weintraub commented to Northwestern Now.
That's of huge interest to scientists looking for treatments that can help us stay healthier longer. Weintraub calls the findings "earth-shattering for us." But for those of us without medical degrees, there's little we can do with this information. You can't vacuum rogue proteins out of your brain or plump its cortex. (Though other studies do suggest sleep helps to wash proteins and other gunk out of your brain, so maybe don't skimp on shut-eye.)
And so are their social lives
Further complicating things for those looking for an easy takeaway from the research, the super-agers also didn't have a lot of lifestyle factors in common. Some were athletes, others confirmed loafers. Some drank. Others smoked. They ate different things and kept different habits. But there was one big exception. Super-agers, it turns out, tend to be incredibly social.
"The group was particularly sociable and relished extracurricular activities. Compared to their cognitively average, same-aged peers, they rated their relationships with others more positively. Similarly, on a self-reported questionnaire of personality traits they tended to endorse high levels of extraversion," the researchers reported in a recent paper published in Alzheimer's & Dementia.
Want to be a super-ager? Focus on your relationships
This might come as a surprise to laypeople who think aging well is all about HIIT workouts and plentiful kale. But it likely isn't a huge shock to other scientists. The Harvard Study of Adult Development has been minutely tracking the lives of some 724 original participants (and now some of their descendants) since 1938.
It discovered that the biggest predictor of a long, healthy life isn't biological. It's social. The better the quality of your relationships, the more likely you are to age well. And while you have only indirect influence on things like your cholesterol level and brain health, you are directly in control of your social life.
It's something we can and should prioritize, according to study director Robert Waldinger. "We think of physical fitness as a practice, as something we do to maintain our bodies. Our social life is a living system, and it needs maintenance too," he told the Harvard Gazette.
The effects of keeping up your social ties aren't minor. Neuroscientist Bryan James, author of another study on aging and social contact, summed up his findings this way: "Social activity is associated with a decreased risk of developing dementia and mild cognitive impairment [...] the least socially active older adults developed dementia an average of five years before the most socially active."
Keeping up with friends helps with healthy aging. But so does keeping up with learning. Research has shown a strong link between keeping your brain active and maintaining cognitive performance deep into your later years. One study found that just joining a class to learn a new skill or hobby improved brain performance as if subjects were 30 years younger. Another one, done at Stanford, found no cognitive decline at all until retirement and beyond if you stay mentally active.
Are you getting your 5-3-1?
All of which suggests that staying social and mentally engaged is one of the most impactful moves you can make if you dream of becoming a super-ager yourself. The basic takeaway when it comes to mental function and aging is: use it or lose it.
But experts have offered more detailed guidance too. Harvard-trained social scientist and author Kasley Killam, for instance, has suggested the 5-3-1 rule:
Spend time with five different people a week. This could be anyone from your gym buddy or book club bestie to the person the next pew over at church.
Nurture three close relationships. Equally important is maintaining tighter bonds with three of the people closest to you, usually family and dear friends.
Aim for one hour of social interaction a day. That doesn't have to be all at once. "It could be 10 minutes here, 10 minutes there," Killam explained to Business Insider. You can also combine social time with other activities, walking the dog with a neighbor, say.
Even just chatting on the phone can have more of an impact than many people suspect. "According to a recent study in the U.S., talking on the phone for 10 minutes two to five times a week significantly lowered people's levels of loneliness, depression, and anxiety," Killam reports in Psychology Today.
Change what you can influence
The bad news from science is that super-agers really are different physically. Their brains have biological quirks that help them stay sharp longer. There's no way, unfortunately, to borrow that magic. But there is something else that sets super-agers apart that you can steal.
It's not a diet or exercise plan. It's a love for getting out and seeing other people and learning new things. It turns out the more you maintain your social connections and mental stimulation, the more likely you are to get not just more years, but more healthy, active, and sharp years.
Jessica Stillman
This article originally appeared on Fast Company's sister publication, Inc.
Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.
Rachel Taylor began her career as a creative director in the advertising business, a job that gave her plenty of opportunity to micromanage the final product. "I had control of the script," she remembers. "I could think about the intonation, and I could give the actor notes."
That was before she pivoted to helping AI companies shape the personality of their assistants. Rather than handing a digital helper a script, the best she can do is point it in the right direction: "The technology sometimes feels like a toddler that you give a permanent marker to and see what it writes on the wall," she says.
After joining DeepMind cofounder Mustafa Suleymans startup Inflection AI in 2023, Taylor was one of dozens of staffers who followed Suleyman to Microsoft, where they worked on the consumer version of Copilot. In October, she returned to startup life, departing Microsoft for Sesame, whose CEO, Brendan Iribe, also cofounded VR pioneer Oculus.
Sesame has built two talking assistants, Maya and Miles, that are powered by its own AI models. It's also developing a voice-AI-enabled pair of smart glasses. Taylor's arrival coincided with its announcement of a $250 million Series B funding round led by Sequoia. Though the company isn't yet saying much about its long-term plans, Taylor's responsibilities once again involve keeping AI personas friendly and helpful. She's also steering them away from traits that can be dangerous if users take them too seriously, such as sycophancy. "It's weird how much the study of culture comes into play with thinking all that through," she says of her purview. "It's not simply tech."
Calling consumer AI's current incarnation both "magical" and "primitive," Taylor muses about her grandchildren being impressed someday that she was there at the start. For now, she stresses, "We're just scratching the surface of this new mode of communication."
This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities?
Those are the questions Kyle Fish is wrestling with as Anthropic's first in-house AI welfare researcher. His mandate is both audacious and straightforward: Determine whether models like Claude can have conscious experiences, and, if so, how the company should respond.
"We're not confident that there is anything concrete here to be worried about, especially at the moment," Fish says, "but it does seem possible." Earlier this year, Anthropic ran its first predeployment welfare tests, which produced a bizarre result: Two Claude models, left to talk freely, drifted into Sanskrit and then meditative silence, as if caught in what Fish later dubbed a "spiritual bliss attractor."
Trained in neuroscience, Fish spent years in biotech, cofounding companies that used machine learning to design drugs and vaccines for pandemic preparedness. But he found himself drawn to what he calls "pre-paradigmatic areas of potentially great importance," fields where the stakes are high but the boundaries are undefined. That curiosity led him to cofound a nonprofit focused on digital minds, before Anthropic recruited him last year.
Fish's role didn't exist anywhere else in Silicon Valley when he started at Anthropic. "To our knowledge, I'm the first one really focused on it in an exclusive, full-time way," he says. But his job reflects a growing, if still tentative, industry trend: Earlier this year, Google went about hiring post-AGI scientists tasked partly with exploring machine consciousness.
At Anthropic, Fish's work spans three fronts: running experiments to probe model welfare, designing practical safeguards, and helping shape company policy. One recent intervention gave Claude the ability to exit conversations it might find distressing, a small but symbolically significant step. Fish also spends time thinking about how to talk publicly about these issues, knowing that for many people the very premise sounds strange.
Perhaps most provocative is Fish's willingness to quantify uncertainty. He estimates a 20% chance that today's large language models have some form of conscious experience, though he stresses that consciousness should be seen as a spectrum, not binary. "It's a kind of fuzzy, multidimensional combination of factors," he says.
For now, Fish insists the field is only scratching the surface. "Hardly anybody is doing much at all, us included," he admits. His goal is less to settle the question of machine consciousness than to prove it can be studied responsibly and to sketch a road map others might follow.
This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their undergraduate days at Stanford University, where, in 2015, they cofounded an incubator called Cardinal Ventures to help students pursue business ideas while still in school. Founding it also gave the Moores an entry point into the broader VC industry.
"The thing about starting a startup incubator at Stanford is all the VCs want to meet you, even if you have no idea what you're doing, which we did not back then," Olivia says.
At the time, the app economy was booming, and services around things like food delivery and dating proliferated, recalls Justine. But that energy pales in comparison to the excitement around AI the sisters now experience at Andreessen Horowitz.
"There's so many more opportunities in terms of what people are able to build than what we're able to invest in," she says.
To identify the right opportunities, the Moores track business data such as paid conversion rates and closely examine founders' backgrounds, whether they've worked at a cutting-edge AI lab or deeply studied the needs of a particular industry. They attend industry conferences, stay current on the latest AI research papers, and, perhaps most critically, spend significant time testing AI-powered products. That means going beyond staged demos to see what tools can actually do and spotting founders who quickly intuit user needs and add features accordingly.
"From using the products, you get a pretty quick, intuitive sense of how much of something is marketing hype," says Olivia, whose portfolio includes supply chain and logistics operations company HappyRobot and creative platform Krea.
The sisters also value Andreessen Horowitz's scale, which allows the firm to stick to its convictions rather than chase trends, and its track record of supporting founders beyond simply investing. (Andreessen Horowitz is reportedly seeking to raise $20 billion to support its AI-focused investments.)
"It's most fun to do this job when you can work with the best founders and when you can actually really help them with the core stuff that they're struggling with, they're working on, or striving to do in their business," says Justine, a key early investor in voice-synthesis technology company ElevenLabs.
Though the sisters live together and work at the same firm, where they frequently bounce ideas off each other, they've carved out their own lanes. Olivia focuses more on AI applications, while Justine spends more time on AI infrastructure and foundational models. At this point, they say, it's not unheard of for industry contacts to not even realize they're related.
"If I see [her] on a pitch meeting in any given day, that's maybe more of the exception than the rule," Justine says.
This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI's API. And so the company turned to post-training research lead Michelle Pokrass to spin up a team to better understand them.
"The AI field is moving so quickly, the power-user use cases of today are really the median-user use cases a year from now, or two years from now," Pokrass says. "It's really important for us to stay on the leading edge and build to where capabilities are emerging, rather than just focusing on what people are using the models for now."
Pokrass, a former software engineer for Coinbase and Clubhouse, came to OpenAI in 2022, fully sold on AI after experiencing the magic of coding tools such as GitHub Copilot. She played key roles in developing OpenAI's GPT-4.1 and GPT-5, and now she focuses on testing and tweaking models based on users who are pushing AI to its limits.
Specifically, Pokrass's team works on post-training, a process that helps large language models understand the spirit of user requests. This refining allows ChatGPT to code, say, a fully polished to-do list app rather than just instructions on how to theoretically make one. "There's been lots of examples of GPT-5 helping with scientific breakthroughs, or being able to discover new mathematical proofs, or working on important biological problems in healthcare, saving doctors and specialists a lot of time," Pokrass says. "These are examples of exactly the kinds of capabilities we want to keep pushing."
Creating a team with this niche focus is unusual among Big Tech companies, which tend to target broad audiences they can monetize at scale through, say, targeted ads. "Catering to power users isn't a revenue play," Pokrass says, even if many pay $200 per month for ChatGPT Pro subscriptions.
Instead, it's a way to assess the why of AI, with power users pointing to unforeseen opportunities. "With traditional tech, it's usually clear how people will use a product a few years down the road," Pokrass says. "With AI, we're all discovering with our users, live, what exactly is highest utility, and how people can get value out of this."
Eventually, OpenAI figures those use cases will help inform the features that it builds for everyone else. Pokrass gives the example of medical professionals using AI in their decision-making, which in turn could help ChatGPT better understand the kind of medical questions people are asking it (for better or worse).
"There's always work for this team, because as we push boundaries for what our models can do, the frontier just gets moved out, and then we start to see an influx of new activity of people using these new capabilities," Pokrass says.
This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause.
Nvidia VP of healthcare Kimberly Powell, one of Fast Company's AI 20 honorees, has led the company's health efforts for 17 years, giving her a big head start on understanding how to turn AI's potential to improve our well-being into reality. Since it's likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she's in the right place to have an impact.
This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?
Drug discovery is an enormous problem. It's a 10-year journey at best. It costs several billions to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind's] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease.
So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. It allowed the world to see that it's possible to represent the world of drugs in a computer. And the world of drugs really starts with human biology. DNA is represented.
After you take a sample from a human, you put it through a sequencing machine and what comes out is a 3 billion character sequence of letters: A's, C's, T's, and G's. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.
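As a rough illustration of what it means to treat DNA as a sequence of characters a transformer can learn from, here is a minimal, hypothetical sketch; the four-letter vocabulary and the window size are assumptions for the example, not Nvidia's or DeepMind's actual tooling.

```python
# Hypothetical sketch: character-level tokenization of DNA, analogous to how a
# language model tokenizes text. A real genome (~3 billion bases) far exceeds any
# model's context window, so it has to be split into manageable chunks.
DNA_VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3}


def tokenize(sequence: str) -> list[int]:
    """Map each base to an integer ID that a transformer can embed."""
    return [DNA_VOCAB[base] for base in sequence.upper()]


def windows(token_ids: list[int], context_length: int = 8192):
    """Yield contiguous chunks no longer than the model's context length."""
    for start in range(0, len(token_ids), context_length):
        yield token_ids[start:start + context_length]


print(tokenize("ACGTGCA"))  # [0, 1, 2, 3, 2, 1, 0]
```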
So how will this new approach end up giving us breakthrough drugs?
If you look at the history of drug discovery, we've been kind of circling around the same targets (the target is the thing that causes the disease in the first place) for a very long time. And we've largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It's probably multiple targets. And that's why cancer is so hard, because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently.
Once we’ve cracked the biology, and we’ve understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist’s potential training or even their imagination. It’s a near infinite search space. These generative models can open our aperture.
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.
We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we're [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. So things like context length (how much information you can put into a prompt) have to be figured out for these long proteins and DNA strings.
We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That’s why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That’s in the area of really understanding biology better.
Can you say more about the company you're working with that is using digital twins to simulate an expensive clinical trial before the trial begins?
ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don’t have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol.
Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or enough capabilities to take off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors.
How might AI agents impact healthcare?
You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we’re going to need these physical agents, which I would call robots, that can actually perform physical tasks.
You can think about the deployment of robots, everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.
Nvidia sells chips, but I think what I’ve heard in your comments is a whole tech stack, including in healthcare. There are models, there are software layers, things like that.
I’ve been at the company 17 years working on healthcare, and it’s not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools.
And a model is never done; you have to be constantly improving it. Through every usage of that model, you're learning something, and you've got to make sure that that agent or model is continuously improving. We've got to create whole computing infrastructure systems to serve that.
A few years ago, Tara Feener's career took an unexpected pivot. She had spent nearly two decades working on creative tools for companies like Adobe, FiftyThree, WeTransfer, and Vimeo, and was content to keep working in that domain.
But then the Browser Company came along, and Feener saw an opportunity to build something even more ambitious. Feener, one of Fast Company's AI 20 honorees for 2025, is now the company's head of engineering, overseeing its AI-focused Dia browser and its earlier Arc browser.
The browser is suddenly an area of intense interest for AI companies, and Feener understands why: It's the first stop for looking up information, and it's already connected to the apps and services you use every day. OpenAI and Perplexity both offer their own browsers now, borrowing some Dia features like the ability to summarize across multiple tabs and interrogate your browser history. The Browser Company itself was acquired in September by Atlassian for $610 million, with the companies proclaiming that it would transform how work gets done in the AI era.
Feener says her team has never felt more creative. "We've never seen more prototypes flying around, and I think I'm doing my job successfully as a leader here if that motion is happening," she says.
This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.
How'd you end up at the Browser Company?
[The Browser Company CEO] Josh Miller started texting me. We were both in that 2013 early New York tech bubble, we had a couple conversations, and he pitched me on the Browser Company.
At first I couldn’t connect it to the arc of my career in creativity, but then it just became this infectious idea. I was like, “Wait a minute, I think the browser is actually the largest creative canvas of my entire career. It’s where you live your life and where you create within.”
Why does it feel like AI browsers are having a moment right now?
I really do believe that the browser is the most compelling, accessible AI layer. It's the number-one text box you use. And what we do is, as you're typing, we can distinguish a Google search from an assistant or a chat question. In the future, you can imagine other things like taking action or tapping into other search engines. It basically becomes an air traffic control center as you type, and that's going to help introduce folks to AI just so much faster because you don't have to go to ChatGPT to ask a question.
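As a loose illustration of that air-traffic-control idea, the sketch below routes whatever is typed into the address bar to navigation, web search, or an AI assistant. The heuristics are entirely hypothetical, not Dia's actual logic.

```python
# Hypothetical sketch only: a single text box triaging what the user types.
def route(query: str) -> str:
    q = query.strip()
    if not q:
        return "web_search"
    if " " not in q and "." in q:
        return "navigate"      # looks like a URL, e.g. "nytimes.com"
    question_starts = ("how", "why", "what", "can", "summarize", "compare", "explain")
    if q.endswith("?") or q.lower().startswith(question_starts):
        return "assistant"     # conversational request, route to the chat model
    return "web_search"        # default: ordinary keyword search


assert route("nytimes.com") == "navigate"
assert route("summarize my open tabs") == "assistant"
assert route("best pizza brooklyn") == "web_search"
```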
That's part one. Part two is just context. We have all of your stuff. We have all of your tabs. We have your cookies. With other AI tools, the barrier to connecting to your other web apps or tools is still high. We get around that with cookies within the browser, so we're able to just do things like draft your email, or create your calendar event, or tap into your Salesforce workflow.
How do you think about which AI features are worth doing?
I just see it as another bucket of Play-Doh. I never wanted to do AI for the sake of AI but for leveraging AI in the right moment to do things that would have been really hard for us to do before.
A great example is being able to tidy your tabs for you in Arc. There's a little broom you can click, and it starts sweeping, and it auto-renames, organizes, and tidies up your tabs. We always had ambitions and prototypes, but with large language models, we were able to just throw your tabs at it and say, "Tidy for me."
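As a purely illustrative sketch of what "throw your tabs at it and say 'Tidy for me'" could look like in code: call_llm below is a stand-in for whatever model API is actually used, and the prompt wording and JSON output format are assumptions, not the Browser Company's implementation.

```python
# Hypothetical sketch: ask a language model to group open tabs into named folders.
import json


def call_llm(prompt: str) -> str:
    """Stand-in for a real large-language-model call."""
    raise NotImplementedError


def tidy_tabs(tab_titles: list[str]) -> dict[str, list[str]]:
    """Return a mapping of suggested folder names to the tab titles they contain."""
    prompt = (
        "Group these browser tabs into folders and return JSON mapping "
        "each folder name to a list of tab titles:\n" + "\n".join(tab_titles)
    )
    return json.loads(call_llm(prompt))
```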
With Arc, it was a lot about tab management. With Dia, we have context, we have memory, we have your cookies, so it’s like we actually own the entire layer. We leverage that as a tool for things like helping you compare your tabs, or rewriting this tab in the voice of this other tab, which is something I do almost every day. Being able to do that all within the browser has just been a huge unlock.
Can you elaborate on how Dia taps into users' browser histories?
Browser history has always been that long laundry list of all the places you’ve been, but actually that long list is context, and nothing is more important in AI than context. Just like TikTok gets better with every swipe, every time you open something in Dia we learn something about you. It’s not in a creepy way, but it helps you tap into your browser history.
Just like you can @ mention a tab in Dia and ask a question, like "give me my unread emails," with your history you can do things like, "Break down my focus time over the past week," or "analyze my week and tell me something about myself given my history." We have a bunch of use cases like that in our skills gallery that you can check out, and those are pretty wild. In ChatGPT and other chat tools, it feels like you have to give a lot to build up that context body. We're able to tap into that as a tool in a very direct way.
Some AI browsers offer agent features that can navigate through web pages on your behalf. Will Dia ever browse the web for you?
We’ve done a bunch of prototypes and for us, the experience of just literally going off and browsing for you and clicking through web pages hasn’t felt yet fast enough or seamless enough. We’re all over it in terms of making sure we’re harnessing it at the right moment and the right way when we think it’s ready.
We don’t want to hide the web or replace the web. Something I like to say about Dia is that we want to be one arm around you and one arm around the internet. And it’s like, how can we make tapping into your context in your browser feel the same way it would feel to write a document, or even just to create something with plain, natural language? I think that’s like the most powerful thing.
It's like the same feeling I had when I was young and tapped into Flash, and that people had with HTML. With AI, literally my mom can write a sentence like, "turn this New York Times recipe into a salad," and in some way she's created an app that does some kind of transformation. And that just gets me really excited.