Last week, I published a deep exploration into Palantir and its founder factory, examining how the company's power and success can be explained by its ability to attract elite talent and by how it empowers those people to develop their skills and learn new ones in the projects they pursue. That talent then goes on to found startups, invariably seeking to address hard, intractable problems, much as they did in their work at Palantir.
(In the few days since I published my first story, I've found another 21 former Palantir employees turned founders, bringing what was already the largest public dataset of these people to 335. If you haven't already, check it out here.)
There are a number of high-profile companies founded by Palantir alums that many people have heard of. These include:
Anduril, the defense contractor ($30.5 billion valuation), cofounded by Brian Schimpf, Matt Grimm, and Trae Stephens;
Kalshi, the predictions market ($11 billion valuation), cofounded by Tarek Mansour;
Eleven Labs, the voice AI platform ($6.6 billion valuation), cofounded by Mati Staniszewski;
Handshake, the marketplace for early-career workers, colleges, and employers which has recently focused more on matching specialized talent with AI training opportunities ($3.3 billion valuation), cofounded by Garrett Lord; and
Partiful, the planning tool for IRL experiences ($400 million valuation), cofounded by Shreya Murthy and Joy Tao.
But there are so many more fascinating stories among the cadre of startups founded and led by Palantir alums. The nine companies showcased below, which span healthcare, government services, cybersecurity, law, clean energy, hardware development, and not-for-profits, exemplify the power of being acculturated to finding a big, hard problem, and having the skills to tackle it.
Angle Health
Palantir alum founders: Ty Wang (CEO), Anirban Gangopadhyay
Other founders: None
What Angle does: AI-native healthcare benefits platform, particularly serving employees of small and medium-size businesses
Employees: Approximately 100
Funding: $197 million raised to date, including a $134 million Series B in December 2025 led by Portage, with funding also from Blumberg Capital, Y Combinator, and others
Secret sauce: Creating a full-stack solution. To work toward achieving the company's goal of "changing how people approach and access healthcare," as Wang says, and "democratize access to the kinds of modern healthcare services . . . that are still not available to a lot of the people that really need them," Angle had to rebuild the technology infrastructure that powers the way the vast majority of Americans access healthcare today, which is through their health plan.
Angle has centralized data assets and focused on enabling AI-driven, human-in-the-loop workflows across its products and operations. That allows it to offer such things as digital behavioral health programs and digital pharmacies, newer services that have become more routine for large employers to include in their health plans, and that can reduce the overall cost of care for the thousands of small businesses that Angle serves.
One key learning from Palantir: Gangopadhyay explains that Angle's culture encourages "a lot of slow thinking" and discourse on a monthly basis to develop a clear plan, "and then we're very intentionally head-in-the-ground and hands-to-keyboards executing." Although Palantir itself is not structured this way in terms of a "monthly sync," he adds that "we're very light on meetings. We leave the individuals to execute in their own way. That is spiritually aligned with how Palantir operates."
Avandar Labs
Palantir alum founders: Pablo Sarmiento (CEO and CTO)
Other founders: None
What Avandar does: Software for social enterprises and nonprofits to manage their data
Employees: One
Funding: Bootstrapped. In fact, Sarmiento says he will not raise equity-based funding, choosing instead to pursue non-dilutive capital sources, including revenue-based financing to align better with the goals of socially focused companies.
Secret sauce: "Think about it," says Sarmiento. "We shouldn't be building the software we need to fight a crisis during the crisis. It should exist." He's referring to COVID, as well as to the work he did after he left Palantir, at Zenysis Technologies, helping create software so that the National Health Institute in Mozambique could successfully fight a cholera epidemic.
Avandar Labs lets not-for-profits and social enterprises build a unified data platform to integrate and analyze an organization’s program data. (It’s currently in beta but when complete, Sarmiento says it’ll be “customizable to any mission.”) The platform’s core technological difference is that it was built from the start for social sector use cases, such as “ensuring it can support epidemic response, humanitarian emergencies, and cross-sector coordination.” He promises it’ll be far cheaper than any alternatives, too.
One key learning from Palantir: Bias towards action.
Chapter
Palantir alum founders: Cobi Blumenfeld-Gantz
Other founders: Corey Metzman; and Vivek Ramaswamy, former presidential candidate (and current Ohio gubernatorial candidate)
What Chapter does: AI to help American seniors find the optimal Medicare plan at the lowest cost
Employees: Approximately 200
Funding: $186 million raised to date, most recently at a valuation of approximately $1.5 billion. Investors include Stripes, XYZ, and Susa Ventures, among others.
Secret sauce: Using AI to help seniors navigate Medicare. Chapter's recommendation engine identifies which of the 24,000 Medicare options that exist is right for an individual customer, taking into account their doctors, prescription regimen, usual pharmacies, the benefits that are most important to them, their ability and willingness to pay, and more. "Each one of those inputs is a huge data problem in and of itself," Blumenfeld-Gantz notes. The company also has an app that can determine, from a picture of a user's Medicare card, which plan they're on and then "curate every single item that's eligible for your plan and check them out" without a litany of phone calls, he adds. Speaking of calls, Chapter ingests every phone communication its brokers have to assess whether they're making high-quality, compliant recommendations and to offer real-time feedback.
One key learning from Palantir: Being relentless. Working in a regulated space, where you have to get federal and state licenses and get licensed by insurance carriers in every state you operate in, "it's just not accepting no," says Blumenfeld-Gantz. "[You have to be] really annoying to state departments of insurance until they take your call and move your paperwork forward. The way I think about it is that you have to make it less work for them to do what you want them to do. Status quo, the easier thing is for them to do nothing. So you have to change the status quo so it's easier for them to do something than nothing."
Draftwise
Palantir alum founders: James Ding (CEO), Emre Ozen
Other founders: Ozan Yalti (former senior associate at the global law firm Clifford Chance)
What Draftwise does: AI software for law firms and in-house legal teams to automate contract drafting, review, and negotiation
Employees: Approximately 60
Funding: $28 million raised to date, from Index Ventures, Y Combinator, and others
Secret sauce: "Every other well-funded legal tech company in the space is building an application layer tool trying to put LLMs inside of bespoke interfaces to try to increase productivity for lawyers," says Ding. "Draftwise is a data platform. We started by recognizing that the pain point we wanted to solve was one where the challenge is that big-ticket deals require data, and if you can't have the data, you can't make good decisions. We started from that foundation, integrating data across a variety of silos, bringing it together, and shaping it into an ontology. Then we also happen to have interfaces to serve that data to people inside their workflow."
For example, Ding cites an add-in for Microsoft Word that Draftwise made: "You're drafting a contract, you're negotiating financial covenants. Draftwise can pull together into a single view all the data you need to actually make the decision of what covenants to give."
One key learning from Palantir: "The thing I wanted to bring was immense agency, immense accountability, a sense of high integrity," says Ding, "but also high effort, where we're just getting things done, we're doing it right, and we're doing the best we can."
Fourth Age
Palantir alum founders: Zach Romanow, plus founding partners Jesse Rickard, Pete Mills, and Samuel Tarng
Other founders: None
What Fourth Age does: Specialized forward-deployed engineering for Palantir customers to build complex applications on top of Palantir's platforms
Employees: More than 50
Funding: Bootstrapped
Secret sauce: "My first customer is really the engineers," says Romanow, "the best and brightest FDEs, or the people that could become the best and brightest FDEs if they're in the right place and have the right teams around them. . . . hire the best possible people that at scale provide differentiated outcomes for customers, and the customers will pay you accordingly."
One key learning from Palantir: "If you have a very, very high bar for the people . . . then A players want to join the A team," Romanow says. "Let's really stay true to our principles of what we know great looks like."
Manifest
Palantir alum founders: Daniel Bardenstein (CEO); Marc Frankel (former CEO)
Other founders: N/A
What Manifest does: Software and AI bill of materials to protect everything from healthcare systems to military aircraft
Employees: Approximately 30
Funding: $21 million raised to date from such investors as AE Industrial Partners (Boeing’s venture arm), Palumni VC, XYZ, and others
Secret sauce: Provides both vendors and buyers with visibility into the provenance of the elements in the software and AI they depend on, to eliminate the risk of introducing a potentially calamitous vulnerability. "Software is the only thing that we buy that you don't get to know what's in it," says Frankel. "Everything else in our lives comes with an ingredients list."
One key learning from Palantir: "Low ego, high ops tempo."
Nira Energy
Palantir alum founders: Andy Chen (CTO)
Other founders: Chris Ariante (CEO, ex-Exxon Mobil), Andrew Martin
What Nira does: Software for clean energy developers, data centers, and utilities that helps them understand where there's available capacity on the electric grid for new projects
Employees: Approximately 30
Funding: $65.5 million from Energize Capital, Y Combinator, and others
Secret sauce: Focusing on one of the most painful roadblocks to building renewables, the hidden pain point impeding "the goal to accelerate America's power grid to be fossil free as quickly as possible," as Chen says. That is what's known in the energy business as interconnection: adding renewable projects to the grid. Nira has built mapping tools to help developers identify sites with capacity, and another tool to estimate costs while a project is in the queue to come online.
One key learning from Palantir: "Learning about transmission planning is a critical part of being successful at Nira," Chen says. "If you're not interested, you're not going to be able to learn it. One thing that's similar culturally between Palantir and the people we have here is this fundamental curiosity and willingness to learn about totally random stuff that will never help you in a future job, but you want to do it because you're fundamentally interested in it." Chen adds that he's now hiring for a forward-deployed engineering role.
Nominal
Palantir alum founders: Jason Hoch
Other founders: Cameron McCord (CEO, former Naval submarine officer, ex-Anduril), Bryce Strauss (ex-Lockheed Martin)
What Nominal does: Software to help hardware engineering teams, people who build such things as nuclear fusion reactors and satellites, test and deliver complex systems faster
Employees: Approximately 100
Funding: $102.5 million raised to date, from Sequoia Capital, Lightspeed, Lux Capital, Founders Fund, and others
Secret sauce: Speed and solving the data challenges that hardware manufacturers face. When mechanical and electrical engineers work on hard hardware problems, "they [also] have software problems, they have data infrastructure problems," Hoch says. "We're speeding up the workflows. We're increasing the maximum complexity of what the hardware engineers and/or customers can accomplish. When they finish a task or a simulation, they don't need to crack open Claude Code to start understanding their data. It's just right there in front of them. They're able to ask the hard physics and engineering questions of the data. That's the speed." As for the data issue: when you're building complex software systems, Hoch explains, "you have this incredible toolkit of SaaS companies that have been building ways to make your job better for 30 years." Hardware engineers, by contrast, face raw sensor streams: "You'll have 10,000 data points a second, a million data points a second coming off of a sensor," meaning that helping them process that data is not a solved problem.
One key learning from Palantir: Remaining customer obsessed and technically obsessed. "Our customers are wildly technical. The things that I would have to teach people 12 years ago when I was onsite with a customer, these people already know it. It's keeping us honest to making sure we're really staying at the cutting edge."
Sage
Palantir alum founders: Raj Mehra (CEO), Matt Lynch (CTO)
Other founders: Ellen Johnston (chief product officer)
What Sage does: A hardware and software platform to deliver better eldercare, particularly in assisted living facilities
Employees: More than 100
Funding: $59 million raised to date, from IVP, Friends & Family Capital, Maveron, and others
Secret sauce: Building hardware to collect the critical data to support its software. "How do we give caregivers better tools to care for residents? How do we give residents of these communities tools to call for help and get help when they need it?" asks Mehra. Realizing that existing systems weren't measuring relevant data, Sage built Core, which tracks nurse calls and helps caregivers manage tasks, and which operators can then "use to improve quality of care and caregiver performance," Lynch notes.
It also built Detect, AI-powered fall detection that enables care providers to respond proactively to those kinds of emergency events. "We can measure and pull in all of the telemetry from the physical devices that we're deploying," Mehra adds. "Based on all of that, you can then synthesize it and provide value to folks up the value chain."
One key learning from Palantir: "One of us is responsible for every single person we bring in," says Mehra. One of Palantir's founders interviewed every hire for a long time. "We haven't departed from that," he continues, "and I don't think we ever should, because it's how we keep the culture intact." Adds Lynch: "Then, if we make a mistake, we own it . . . when bets don't pay off, we can't sacrifice the culture for that."
A new book by a former Deloitte executive turned workplace well-being expert argues exactly that
In her new book Hope Is the Strategy, Jen Fisher, an expert on workplace well-being and human sustainability, makes a clear and timely case that hope isn’t a soft skill or a leadership afterthought; it’s a practical, learnable approach to navigating uncertainty and building healthier, more resilient organizations. In the following excerpt, Fisher draws on her personal experience grappling with burnout, as well as her research on well-being, leadership, and corporate culture, to reframe hope as something we can all learn and implement for ourselves and those we work with.
We’ve long misunderstood hope in the workplace. We’ve treated it as wishful thinking, a nice-to-have feeling that emerges when things are going well. But research from psychologist C.R. Snyder reveals something far more powerful. Hope is a cognitive process with three essential components: goals (what we want to achieve), pathways (our ability to identify routes to those goals), and agency (our belief that we can pursue those paths). This isn’t passive optimism; it’s an active strategy for navigating uncertainty and driving meaningful change.
After my own experience with burnout, I discovered that hope isn’t what you turn to after strength fails; hope is the strength we’ve been looking for all along. It’s not the light at the end of the tunnel; it’s the torch we need to lead others through it. And when organizations embed hope into their leadership practices and culture, they unlock something remarkable: the capacity to transform not just how people feel about work, but what they can actually accomplish together.
As more organizations prioritize helping their employees become healthier, more skilled for the future, and connected to a sense of purpose and belonging, they have an opportunity to instill hope in leadership and encourage it in workers.
A roadmap for the future
A leader who has hope can map out a path for an employee, offering a solid roadmap rather than an empty promise. They might say, “I can’t promise you complete job security, but I can provide you with the skills that will make you attractive in the job market.” That, in turn, helps foster hope in the worker, because they know that they’ll have more tools in their success toolkit, no matter what the future holds.
That’s not just a win for the individual, but for the group. An organization (of any typeit could also be a community, or a family) filled with people tapped into their meaning and purpose is stronger than one made up of disengaged, unhealthy, and unhappy people. In fact, hope is a strategy for a variety of prevalent workplace problems: It can improve mental well-being and stress management; it can drive action and reduce catastrophic thinking; and it can help overcome the disengagement crisis at work. What’s more, hope will support our transition to a more human-centered workplace as AI takes on the more mundane, tactical aspects of work.
Creating new ripples from leadership on down is possibleand as with the negative ones, it starts with modeling behaviors to set the tone for your team and your peers. That is, modeling the sustainable work behaviors and values that will drive purpose and well-being. Here are four examples:
1. Get clear on what your own boundaries are
If you’re following someone else’s vision of success instead of your own, you’re going to end up miserable and probably burned out. So take that PTO, really. The company will not crumble without you. And don’t answer that email at midnight; reply in the morning, during work hours. A leader who actually sets healthy boundaries and lives by them gives employees permission to do the same.
As I reevaluated the role that work played in my life, I set my own new boundaries. I got clear on what my definition of success was, instead of allowing the external world to define that for me. And I brought hope into my life: I started each day with a set of “what if” questions, looking at the day ahead through the lens of possibility: What if this goes right? What if I do things this way? Then I’d end each day with reflection: How did it go? It helped me to see challenges as an opportunity for change.
Here are some other daily practices I put in place, all of which I still follow today:
Treat sleep as a nonnegotiable. I protect my eight hours like the business asset it actually is, recognizing that sleep isn’t a luxury but the foundation that makes everything else possible.
Schedule humanity into the calendar. Not vague “personal time” but specific blocks for connections that make me human: dinner with my husband, phone calls with friends, reading fiction that has nothing to do with work.
Incorporate daily recovery rituals. Three-minute breathing breaks between meetings, a proper lunch away from my desk, a brief walk outside to reset my nervous systemthese small moments of renewal prevent depletion from accumulating.
Defend the calendar against the tyranny of urgency. Breakfast, lunch, dinner, exercise, and sleep aren’t just activities to fit around “real work”; they comprise the immovable infrastructure that sustains my performance. Everything else has to work around them, not the other way around.
2. Embrace the unknown
When we temporarily suspend our need for certainty, a different kind of productivity emerges. I call these my Possibility Days: Once a week, I grant myself permission to coexist with uncertainty. Instead of trying to control outcomes, I deliberately seek experiences with unknown results. I have conversations without preparing talking points. I explore ideas that seem impractical. I follow curiosity down rabbit holes without worrying where they lead. My most innovative solutions and deepest insights almost always trace back to these deliberate ventures into possibility thinking.
3. Walk the walk
The old ways of leading through power and control are giving way to something more human, more hopeful, and more whole. The future of leadership isn’t just about what we do; it’s about how we show up, how we hold space for both struggle and possibility, and how we cultivate well-being as a vital way of being.
There’s this old thinking that we should check our feelings or emotions at work. It’s basically telling people: Don’t show up as who you truly are. When leaders normalize having no energy, no life, no nothing beyond work, it becomes not just accepted but expected. Emotions, whether they’re positive or negative, are really a sign of the things we care about, and when we’re told not to bring emotions into the workplace, it stunts creativity, growth, innovation, connection, and understanding. The answer is simple: Show your emotions.
Your employees look to you to set the pace, tone, and stakes of the team and the work being done. Be vulnerable and authentic about when you’ve made a mistake, when you said one thing and you did another, when you screwed up. Your actions show them that decisions to support their own health, well-being, and career growth aren’t going to be viewed negatively or make it seem like they’re less committed to their work.
4. Build teams grounded in trust
True organizational and individual success depends on teams built on mutual trust: teams that prioritize deep relationships alongside personal well-being. Trust-based teams require leaders who actively invite people to show up authentically and provide genuine support when they do. This means fostering psychological safety where team members feel confident giving honest feedback, taking calculated risks, learning from missteps, and growing from challenges rather than facing punishment for them.
Organizations with the strongest well-being cultures maintain ongoing dialogue between leaders and team members. Within trust-based environments, people develop a growth-oriented perspective. Colleagues treat each other with genuine care and respect, creating workplaces rooted in kindness. This positive energy extends far beyond individual teams, helping organizations attract diverse talent, improve retention, spark innovation, and build lasting resilience.
When one of the founders of modern AI walks away from one of the world's most powerful tech companies to start something new, the industry should pay attention.
Yann LeCun's departure from Meta after more than a decade shaping its AI research is not just another leadership change. It highlights a deep intellectual rift about the future of artificial intelligence: whether we should continue scaling large language models (LLMs) or pursue systems that understand the world, not merely echo it.
Who Yann LeCun is, and why it matters
LeCun is a French American computer scientist widely acknowledged as one of the "Godfathers of AI." Alongside Geoffrey Hinton and Yoshua Bengio, he received the 2018 Association for Computing Machinery's A.M. Turing Award for foundational work in deep learning.
He joined Meta (then Facebook) in 2013 to build its AI research organization, eventually known as FAIR (originally Facebook AI Research, later Fundamental AI Research), a lab that advanced foundational tools such as PyTorch and contributed to early versions of Llama.
Over the years, LeCun became a global figure in AI research, frequently arguing that current generative models, powerful as they are, do not constitute true intelligence.
What led him to leave Meta
LeCun's decision to depart, confirmed in late 2025, was shaped by both strategic and philosophical differences with Meta's evolving AI focus.
In 2025, Meta reorganized its AI efforts under Meta Superintelligence Labs, a division emphasizing rapid product development and aggressive scaling of generative systems. This reorganization consolidated research, product, infrastructure, and LLM initiatives under leadership distinct from LeCun's traditional domain.
Within this new structure, LeCun reported not to a pure research leader but to a product- and commercialization-oriented chain of command, a sign of shifting priorities.
But more important than that, there's a deep philosophical divergence: LeCun has been increasingly vocal that LLMs, the backbone of generative AI, including Meta's Llama models, are limited. They predict text patterns, but they do not reason or understand the physical world in a meaningful way. Contemporary LLMs excel at surface-level mimicry but lack robust causal reasoning, planning, and grounding in sensory experience.
As he has said and written, LeCun believes LLMs are useful, but they are not a path to human-level intelligence.
This tension was compounded by strategic reorganizations inside Meta, including workforce changes, budget reallocations, and a cultural shift toward short-term product cycles at the expense of long-term exploratory research.
The big idea behind his new company
LeCun's new venture is centered on alternative AI architectures that prioritize grounded understanding over language mimicry.
While details remain scarce, some elements have emerged:
The company will develop AI systems capable of real-world perception and reasoning, not merely text prediction.
It will focus on world models, AI that understands environments through vision, causal interaction, and simulation rather than only statistical patterns in text.
LeCun has suggested the goal is systems that understand the physical world, have persistent memory, can reason, and can plan complex actions.
In LeCun's own framing, this is not a minor variation on today's AI: It's a fundamentally different learning paradigm that could unlock genuine machine reasoning.
Although the company's founders and other insiders have not released official fundraising figures, multiple reports indicate that LeCun is in early talks with investors and that the venture is attracting attention precisely because of his reputation and vision.
Why this matters for the future of AI
LeCun's break with Meta points to a larger debate unfolding across the AI industry.
LLMs versus world models: LLMs have dominated public attention and corporate strategy because they are powerful, commercially viable, and increasingly useful. But there is growing recognition, echoed by researchers like LeCun, that understanding, planning, and physical reasoning will require architectures that go beyond text.
Commercial urgency versus foundational science: Big Tech companies are understandably focused on shipping products and capturing market share. But foundational research, the kind that may not pay off for years, requires a different timeline and incentive structure. LeCun's exit underscores how those timelines can diverge.
A new wave of AI innovation: If LeCun's new company succeeds in advancing world models at scale, it could reshape the AI landscape. We may see AI systems that not only generate text but also predict outcomes, make decisions in complex environments, and reason about cause and effect.
This would have profound implications across industries, from robotics and autonomous systems to scientific research, climate modeling, and strategic decision-making.
What it means for Meta and the industry
Meta's AI strategy increasingly looks short-term, shallow, and opportunistic, shaped less by a coherent research vision than by Mark Zuckerberg's highly personalistic leadership style. Just as the metaverse pivot burned tens of billions of dollars chasing a narrative before the technology or market was ready, Meta's current AI push prioritizes speed, positioning, and headlines over deep, patient inquiry.
In contrast, organizations like OpenAI, Google DeepMind, and Anthropic, whatever their flaws, remain anchored in long-horizon research agendas that treat foundational understanding as a prerequisite for durable advantage. Meta's approach reflects a familiar pattern: abrupt strategic swings driven by executive conviction rather than epistemic rigor, where ambition substitutes for insight and scale is mistaken for progress. Yann LeCun's departure is less an anomaly than a predictable consequence of that model.
But LeCun's departure is also a reminder that the AI field is not monolithic. Different visions of intelligence, whether generative language, embodied reasoning, or something in between, are competing for dominance.
Corporations chasing short-term gains will always have a place in the ecosystem. But visionary research, the kind that might enable true understanding, may increasingly find its home in independent ventures, academic partnerships, and hybrid collaborations.
A turning point in AI
LeCun's decision to leave Meta and pursue his own vision is more than a career move. It is a signal: that the current generative AI paradigm, brilliant though it is, will not be the final word in artificial intelligence.
For leaders in business and technology, the question is no longer whether AI will transform industries; it's how AI will evolve next. LeCun's new line of research is not unique: Other companies are pursuing the same idea. And this idea might not just shape the future of AI research; it could define it.
TikTok's U.S. operations are now managed by a new American joint venture, ending a long-standing debate over whether the app would be permanently banned in the United States. The good news for TikTok users is that this deal guarantees that the app will continue to operate within America's borders.
But there's some bad news, too.
Successive U.S. administrations, both Biden's and Trump's, argued that TikTok posed a national security threat to America and its citizens, partly because of the data the app collected about them. While all social media apps collect data about their users, officials argued that TikTok's data collection was a danger (while, say, Facebook's was not) because the world's most popular short-form video app was owned by ByteDance, a Chinese company.
The ironic thing is that TikTok will actually collect more data about its American users now than it did under ByteDance ownership. The company's new mostly American owners, Larry Ellison's Oracle, private equity firm Silver Lake, and the Emirati investment company MGX, made this clear in a recent update to TikTok's privacy policy and its terms of service.
If this new data collection unnerves you, there are some things you can do to mitigate it.
How to stop TikTok's new U.S. owners from getting your precise location
When TikTok's U.S. operations were still owned by ByteDance, the app did not collect the GPS phone location data of users in the United States. TikTok's new U.S. owners have now changed that policy, stating, "if you choose to enable location services for the TikTok app within your device settings, we collect approximate or precise location information from your device."
While allowing TikTok, or any social media app, to access your location can mean you see more relevant content from events or creators in your area, there's no reason the app should need to know your precise GPS location, which reveals where in the world you are down to a few feet.
Thankfully, you can block TikTok's access to your GPS location data through your phone's settings.
On iPhone:
Open the Settings app.
Tap Apps.
Tap TikTok.
Tap Location.
Set location access to Never.
On Android:
Find the TikTok app on your home screen and tap and hold on its icon.
Tap the App information menu item from the pop-up.
Tap Permissions.
Tap Location.
Tap Don't Allow.
How to limit new targeted advertising
When TikTok's U.S. operations were owned by ByteDance, the company's terms of service informed users that it analyzed their content to provide tailored advertising to them. This was not surprising: TikTok's main way of generating revenue is showing ads in the app.
But in the updated terms of service posted by TikTok's U.S. owners, it now appears that TikTok will use the data it collects about you, as well as the data its third-party partners have on you, to target you with relevant ads both on and off the platform. As the new terms of service state, "You agree that we can customize ads and other sponsored content from creators, advertisers, and partners, that you see on and off the Platform based on, among other points, information we receive from third parties."
Unfortunately, as of this writing, TikTok's new U.S. owners don't seem to offer a way for U.S. users to disable personalized ads (users in some regions may see the option under Settings and privacy > Ads in the TikTok app).
Still, if you have an iPhone, you can at least stop TikTok from tracking your activity across other apps and websites using iOS's App Tracking Transparency feature, which lets users quickly block an app from tracking what they do on their iPhone outside of that app.
Open the Settings app on your iPhone.
Tap Privacy & Security.
Tap Tracking.
In the list of apps that appears, make sure the toggle next to TikTok is set to off (white).
Currently, Android does not offer a feature like Apples App Tracking Transparency.
TikTok's U.S. owners track your AI interactions
Like most social media apps, TikTok has been slowly adding more AI features. (One, called AI Self, lets users upload a picture of themselves and have TikTok turn it into an AI avatar.)
As Wired previously noted, TikTok's new U.S. owners have inserted a new section into the privacy policy informing users that the company may collect and store "any data surrounding your AI interactions, including prompts, questions, files, and other types of information that you submit to our AI-powered interfaces," as well as the responses they generate.
That means anything you upload to TikTok's AI features, or any prompts you write, could be retained by the company. Unfortunately, there is no setting inside the TikTok app, nor on iPhone or Android, that lets you opt out of this AI data collection.
That leaves TikTok's U.S. users with only one choice if they don't want the app's new U.S. owners to collect AI data about them: don't use TikTok's AI features.
EVs hit a new milestone: In December, buyers in Europe registered more electric cars than gas cars for the first time.
EV registrations hit 217,898 in the EU last month, up 50% from the year before. Sales of gas cars, on the other hand, dropped nearly 20% to 216,492. The same trend played out in the larger region, which includes the U.K. and other non-EU countries like Iceland.
Car buyers have more electric options in Europe than in the U.S., from tiny urban EVs like the $10,000 Fiat Topolino to Chinese cars like the BYD Dolphin.
“We’re actually seeing this trend globally, although the U.S. is a different story: as the availability and quality of EVs goes up, sales have been going up as well,” says Ilaria Mazzocco, who studies electric vehicle markets at the Center for Strategic & International Studies. “There’s a story that some of the major OEMs have been pushing that there’s no demand for EVs. But when you look at the numbers…it turns out there’s a lot of latent demand.”
Some automakers are doing better than others. Tesla's market share dropped around 38% last year in Europe as buyers reacted to Elon Musk's politics. BYD tripled its market share over the same period.
EVs made up 17.4% of car sales in the EU last year, around twice the rate in the U.S. That’s still well behind Norway (not part of the EU), where a staggering 96% of all registrations were fully battery-electric in 2025. Hybrid cars are still more popular than pure electric vehicles in the EU, with 34.5% of market share. Diesel cars, which used to dominate in Europe, now only have around 9% of market share.
It’s not clear exactly what will happen next as the EU may weaken its EV policy. The bloc had targeted an end to new fossil-fueled cars by 2035; in a proposal in December, it suggested cutting vehicle emissions by 90% instead, leaving more room for hybrid cars. Some of the growth also will depend on how willing European countries are to continue letting cheap Chinese EVs on the market. Still, steep growth in EVs is likely to continue.
Generative artificial intelligence is rapidly reshaping education in unprecedented ways. Weighing its potential benefits against its risks, K-12 schools are actively trying to adapt teaching and learning.
But as schools seek to navigate the age of generative AI, there's a challenge: they are operating in a policy vacuum. While a number of states offer guidance on AI, only a couple require local schools to form specific policies, even as teachers, students, and school leaders continue to use generative AI in countless new ways. As one policymaker noted in a survey, "You have policy and what's actually happening in the classrooms; those are two very different things."
As part of my labs research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the associations members reflects how education policy is typically formed through dynamic interactions across national, state, and local levels, rather than being dictated by a single source.
But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technologys spread, including student safety, data privacy, and negative impacts on student learning.
They also expressed concerns about industry influence, and that schools will later be charged by technology providers for large language model-based tools that are currently free. Others report that administrators in their state are very concerned about deepfakes: "What happens when a student deepfakes my voice and sends it out to cancel school or report a bomb threat?"
At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.
Local actions dominate
Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are providing guidance or tool kits, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans.
When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. "We are a local control state, so some school districts have banned [generative AI]," wrote one respondent. "Our [state] department of education has an AI tool kit, but policies are all local," wrote another. One shared that their state has "a basic requirement" that districts adopt a local policy about AI.
Like other education policies, generative AI adoption occurs within the existing state education governance structures, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role.
Yet there is generally a lack of evidence about how AI will affect learners and teachers, and it will take years for that picture to become clearer. The lag adds to the challenges of formulating policies.
States as a lighthouse
However, state policy can provide vital guidance by prioritizing ethics, equity, and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and to construct guardrails.
As it stands, AI usage and policy development are uneven, depending on how well resourced a school is. Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools were about half as likely to report that AI guidance was provided. The poorest schools are also less likely to use AI tools.
When asked about foundational generative AI policies in education, policymakers focused on privacy, safety, and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators.
And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, "What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids' first teachers."
Introducing new technology
According to a Feb. 24, 2025, Gallup poll, 60% of teachers report using some AI for their work in a range of ways. Our survey also found there is "shadow use" of AI, as one policymaker put it, in which employees implement generative AI without explicit school or district IT or security approval.
Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing, as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority.
In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students' writing. "Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids. This is why I became a teacher: to spend time with the kids." The teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools.
One initiative from the Netherlands offers a different approach than finding ways to implement products developed by technology companies. Instead, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.
Core principles
One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI: when and how to leverage these tools effectively, how to critically evaluate their output, and how to ethically disclose their use.
Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context, or what my colleagues and I called "dilemma analysis" in a recent report, is an approach schools, districts, and states can take to navigate the myriad ethical and societal impacts of generative AI.
Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district, and state to engage their communities and families to co-create a path forward.
As one policymaker put it: "Knowing the horse has already left the barn [and that AI use] is already prevalent among students and faculty . . . [on] AI-human collaboration versus an outright ban, where on the spectrum do you want to be?"
Janice Mak is an assistant director and clinical assistant professor at Arizona State University.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
“Snow Will Fall Too Fast for Plows,” “ICE STORM APOCALYPSE,” and “Another Big Storm May Be Coming . . .” were all headlines posted on YouTube this past weekend as the biggest snowstorm in years hit New York City.
These videos, each with tens or hundreds of thousands of views, are part of an increasingly popular genre of “weather influencers,” as Americans turn more and more to social media for news and weather updates.
People pay more attention to influencers on YouTube, Instagram, and TikTok than to journalists or mainstream media, a study by the Reuters Institute and the University of Oxford found in 2024. In the U.S., social media is how 20% of adults get their news or weather updates, according to the Pew Research Center.
It's no surprise, then, that a number of online weather accounts have cropped up to cover the increasing number of extreme weather events in the U.S.
While some of these influencers have no science background, many of the most popular are accredited meteorologists. One of the most viewed digital meteorologists, or weather influencers, is Ryan Hall, who calls himself “The Internet’s Weather Man” on his social media platforms. His YouTube channel, Ryan Hall, Y'all, has more than 3 million subscribers.
Max Velocity is another; he's a degreed meteorologist, according to his YouTube bio, with 1.66 million followers. Reed Timmer, an extreme meteorologist and storm chaser, posts to 1.46 million subscribers on YouTube. "While most prefer to avoid the bad news that comes with bad weather, I charge towards it," Timmer writes in the description section on his channel.
The rising popularity of weather influencers stems not just from mistrust of mainstream media, with trust lingering at an all-time low, but also from an appetite for real-time updates delivered in an engaging way to the social-first generation.
YouTube accounts like Hall's will often livestream during extreme weather events, with his comments section hosting a flurry of activity. There's even merch.
Of course, influencers are not required to uphold the same reporting standards as network weathercasters. There's also the incentive, in terms of likes and engagement, to sensationalize events with clickbait titles and exaggerated claims, or sometimes even misinformation, as witnessed during the L.A. wildfires last year.
Still, as meteorologists navigate the new media landscape, the American Meteorological Society now offers a certification program in digital meteorology for those meteorologists who meet established criteria for scientific competence and effective communication skills in their weather presentations on all forms of digital media.
While we wait to see whether another winter storm will hit the Northeast this weekend, rest assured, the weather influencers will be tracking the latest updates.
You know the ancient proverb: Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime.
For leaders, first-generation AI tools are like giving employees fish. Agentic AI, on the other hand, teaches them how to fish: truly empowering, and that empowerment lifts the entire organization. According to recent findings from McKinsey, nearly eight in ten companies report using gen AI, yet about the same share report no bottom-line impact. Agentic AI can help organizations achieve meaningful results.
AI agents are highly capable assistants with the ability to execute tasks independently. Equipped with artificial intelligence that simulates human reasoning, they can recognize problems, remember past interactions, and proactively take steps to get things done, whether that means knocking out tedious manual tasks or helping to generate innovative solutions. For CEOs juggling numerous responsibilities, agentic AI can be a powerful ally in simplifying decision-making and scaling impact. That's why I believe it belongs on every CEO's roadmap for 2026.
As CEO of a SaaS company grounded in automation, I've made it a priority to incorporate agentic AI into our everyday workflows. Here are three ways you can put it to work in your organization.
1. Take the effort out of scheduling
Scheduling, one of the most basic functions of any organization and one that can easily become a time and energy vacuum, is perfect fodder for AI agents. And they go well beyond your typical AI-powered scheduling tool.
For starters, they're adaptable. AI agents can monitor incoming data and requests, proactively adjust schedules, and notify the relevant parties when issues arise. Let's say your team has a standing brainstorming session every Wednesday, and a new client reaches out to request an intro meeting at the same time. Your agent can automatically respond with alternative time slots. On the other hand, if a client needs to connect on a time-sensitive issue, your agent can elevate the request to a human employee to decide whether rescheduling makes sense.
You can also personalize AI agents based on your unique needs and priorities, including past interactions. If, for example, your agent learns that you religiously protect time for deep-focus work first thing in the morning, it won't keep proposing meetings then.
By delegating scheduling tasks, everyone in the organization, from the CEO to interns, frees up time for higher-level priorities and more meaningful work. You can build your own agent, or get started with a ready-to-use scheduling assistant that offers agentic capabilities, like Reclaim.ai.
2. Facilitate idea generation and innovation
When we talk about AI and creativity, the conversation often stirs anxiety about artificial intelligence replacing human creativity. But agentic AI can help spark ideas for engagement, leadership development, and strategic initiatives. The goal is to cultivate the conditions in which these initiatives can thrive, not to replace the actual brainstorming or strategic thinking.
For example, you can create an ideation-focused AI agent and train it on relevant organizational context: performance data, KPIs, meeting notes, employee engagement data, culture touch points, and more. Your agent can continuously gather new information and update its internal knowledge.
When the time comes for a brainstorming or strategy session (which the agent can also proactively prompt), it can draw on this working organizational memory plus any other resources it can access, and tap generative AI tools like ChatGPT or Gemini to generate themes, propose topics, and help guide the discussion. Meanwhile, leaders remain focused on evaluating ideas, decision-making, and execution.
3. Generate error-free progress updates and year-end recaps
While generative AI can be incredibly powerful, it remains largely reactive, not proactive. When it comes to tracking performance, team KPIs, and organizational progress, manual check-ins are still required. As I've written before, manual tasks are subject to human error: calendar alerts go unnoticed, things slip through the cracks, and minor problems become big issues.
One solution is to design an AI agent that can autonomously monitor your organization's performance. Continuous, real-time oversight helps ensure processes run smoothly and that issues are flagged as soon as they arise. For example, if your company sells workout gear and sees a post-New Year surge in fitness resolutions, and demand for a specific product, an agent can track sales patterns and alert the team to inventory shortages. An AI agent can also independently generate reports, including year-end recaps that are critical for continued growth.
Rather than waiting to be prompted by a human, agents can do the work alone and elevate only the issues that require human judgment.
Agents have the potential to create real value for organizations. Importantly, leaders have to rethink workflows so AI agents are meaningfully integrated, fully liberating employees from rote, manual tasks and freeing them to focus on more consequential, inspiring work like strategy and critical thinking. I've found this leaves employees more energized, and the benefits continue to compound.
When people complain about a lack of work-life balance, they're typically feeling that they are spending too much time working. They may be spending a lot of combined time at the office and commuting, or just putting in a lot of hours both at work and at home.
Fixing that problem can't be done abstractly, though. If you're going to address the balance of work and life activities, you have to start getting specific about where your time is going and where you really want it to go.
Think about how you're spending your time. At work, you're spending time in meetings, writing documents, engaging with clients, or doing particular technical tasks like coding. Similarly, your non-work life consists of other activities like going to the gym, spending time with family, going to concerts, or reading a novel for pleasure.
Start by taking a look at where your time is going right now. If you keep a good work calendar, flip through a few weeks and tally the hours you're spending on different tasks. If you don't have a good record of your time at work, start logging the time you spend on different work tasks.
How much of the time you're spending on work tasks is really necessary? Are there discretionary activities you could replace with something else (potentially a non-work something else)? Are you wasting time shifting among tasks or doing other things inefficiently?
Perhaps more importantly, you also need to think more clearly about what activities should go in your life bin. What are the activities or hobbies you wish you had more time for? Who are the people you want to spend more time with? You spend time on specific work tasks because those end up on your calendar. You have to define life specifically enough that it ends up on your calendar as well.
Then create a calendar that includes both work events and life events. Don't just log your meetings, tasks, and commute time, but also time for helping your kids with their homework, going on a date with your partner, hanging out with friends, going to the gym, or reading a book. It may seem like micromanaging your life to start scheduling these personal events, but if you don't start doing things differently, the balance of how you spend your time is not going to change.
This approach also helps you to recognize when your work responsibilities have become overwhelming. If you truly don't have the time to do any of your life activities, then your job may be asking too much of you. Sit down with your supervisor or a mentor and talk through what you're currently doing at work. Ask for help prioritizing tasks so that you have more opportunities to do other things that are important to you. Your supervisor might even change some of your responsibilities to make the load more manageable in a reasonable amount of time.
Ultimately, by scheduling the time for these life activities (and actually doing them), you are shifting your habits to include more regular life activities. You wont necessarily have to create a specific calendar for your life forever. As you start engaging in more non-work activities, that will shift the nature of your daily and weekly routine in ways that are likely to become self-sustaining.
TikTok agreed to settle a landmark social media addiction lawsuit just before the trial kicked off, the plaintiffs' attorneys confirmed.
The social video platform was one of three companies, along with Meta's Instagram and Google's YouTube, facing claims that their platforms deliberately addict and harm children. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum.
Details of the settlement with TikTok were not disclosed, and the company did not immediately respond to a request for comment.
At the core of the case is a 19-year-old identified only by the initials KGM, whose case could determine how thousands of similar lawsuits against social media companies will play out. She and two other plaintiffs were selected for bellwether trials, essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.
A lawyer for the plaintiff said in a statement Tuesday that TikTok remains a defendant in the other personal injury cases, and that the trial will proceed as scheduled against Meta and YouTube.
Jury selection starts this week in the Los Angeles County Superior Court. It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms. The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday.
KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.
"Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue," the lawsuit says.
Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.
"Plaintiffs are not merely the collateral damage of Defendants' products," the lawsuit says. "They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops."
The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.
"Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies," Meta said in a recent blog post. "But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens’ well-being aren’t clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse."
A Meta spokesperson said in a statement Monday that the company "strongly disagrees" with the allegations outlined in the lawsuit and that it's confident the evidence will show "our longstanding commitment to supporting young people."
José Castañeda, a Google spokesperson, said Monday that the allegations against YouTube are "simply not true." "Providing young people with a safer, healthier experience has always been core to our work," he said in a statement.
TikTok did not immediately respond to a request for comment Monday.
The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children’s mental well-being. A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.
In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. The majority of those states filed their lawsuits in federal court, but some sued in their respective state courts.
TikTok also faces similar lawsuits in more than a dozen states.
By Kaitlyn Huamani and Barbara Ortutay, AP technology writers