The companies racing to own the AI future have built their technologies and businesses by carefully keeping track of you and your data. But they would prefer you didn't keep track of them, and specifically of the ways in which they've been adjusting their voluntary ethical and privacy commitments, some of the few safeguards meant to keep that AI future safe. As the Trump administration actively dismantles safety guardrails to promote "American dominance" in AI and companies disband their safety teams, it's fallen to a tiny nonprofit with limited resources to track how these trillion-dollar companies are adjusting their policies and honoring their own ethical commitments.

Tyler Johnston and his group, the Midas Project, have become the digital world's equivalent of a one-person fire department, trying to monitor a forest of potential blazes. Launched in mid-2024, the nonprofit's "AI Safety Watchtower" project now tracks 16 companies, including OpenAI, Google, and Anthropic, monitoring hundreds of policy documents and web pages for changes.

"If every AI company had a change log, this work would be unnecessary," says Johnston. "That would be the ultimate transparency. Instead, it's up to nonprofits and journalists to monitor this, and nobody's well-equipped enough to catch all of it."

Johnston's concerns about abandoned safety commitments come as the Trump administration actively dismantles AI safety guardrails. On his second day in office this term, Trump signed an executive order revoking Biden's 2023 AI safety order, replacing it with one focused on "American dominance" in AI. In March, the National Institute of Standards and Technology issued new directives to scientists at the Artificial Intelligence Safety Institute that eliminated mentions of "AI safety," "responsible AI," and "AI fairness." While various states have taken steps to pass AI regulation and bills have been proposed on Capitol Hill, there are as yet no federal rules specifically governing the use of the technology. In recent weeks, Trump's Office of Science and Technology Policy solicited public comments from companies, academics, and others for a forthcoming "AI action plan"; Silicon Valley, not surprisingly, has urged a light regulatory touch.

Johnston came to AI ethics from animal welfare advocacy, where targeted campaigns successfully pushed food companies to adopt cage-free-egg practices. He hoped to replicate that success by becoming the "bad cop" willing to pressure tech giants. With about 1,500 followers across two X accounts, Johnston runs the Midas Project full time, with Safety Watchtower taking up about 5% of his time. The group is run on a shoestring budget, so he's DIYing a lot for now, with some help from volunteers. Johnston isn't backed by billions in venture capital or government funding, just determination and a basic web-scraping tool that detects when companies quietly delete promises about not building killer robots or enabling bioweapons development.

So far, the Watchtower has documented about 30 significant changes, categorizing them with the tags "major," "slight," and "unannounced." The first was OpenAI's slight modification of its core values in October 2023: OpenAI removed values such as "impact-driven," which emphasized that employees care deeply about real-world implications, replacing them with values such as "AGI focus." Another slight policy change caught by the Watchtower came from Meta in June 2024, when it made explicit that it can use data from Facebook, WhatsApp, and Instagram to train its AI models.
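The mechanics behind this kind of monitoring are simple in principle: fetch each tracked page on a schedule, fingerprint its contents, and flag anything whose fingerprint has changed since the last run. The sketch below is a minimal, hypothetical illustration of that idea in Python; the URLs, the state file, and the use of the requests library are assumptions made for the example, not details of the Midas Project's actual tool.

```python
# Illustrative sketch only: a minimal page-change watcher in the spirit of
# Watchtower-style tracking. URLs and file paths are hypothetical.
import hashlib
import json
import pathlib

import requests

STATE_FILE = pathlib.Path("watchtower_state.json")  # hypothetical local cache of last-seen hashes
PAGES = [
    "https://example.com/safety-policy",  # placeholder URL, not a real tracked page
]


def fingerprint(text: str) -> str:
    """Return a stable hash of the page body so edits can be detected."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def check_pages() -> None:
    # Load the hashes recorded on the previous run, if any.
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    for url in PAGES:
        body = requests.get(url, timeout=30).text
        digest = fingerprint(body)
        if url in state and state[url] != digest:
            # A real tool would archive both versions and produce a diff for review.
            print(f"CHANGE DETECTED: {url}")
        state[url] = digest
    STATE_FILE.write_text(json.dumps(state, indent=2))


if __name__ == "__main__":
    check_pages()
```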
The Watchtower also flagged a major change by Google last month when the company released a new version of its Frontier Safety Framework. Johnston's analysis revealed concerning modifications: model autonomy risks were removed and replaced with vaguely defined "alignment risks," and notably, the company added language suggesting it would only follow its framework if competitors adopted similar measures.

At times, companies have responded to Johnston's alerts. Earlier this month, Watchtower's web scrapers noticed that Anthropic removed references to the "White House's Voluntary Commitments for Safe, Secure, and Trustworthy AI" from its Transparency Hub webpage. But Anthropic cofounder Jack Clark clarified on X: "This isn't a change in substance and has caused some confusion; we're working on a fix. We continue to follow the White House Voluntary Commitments."

The status of these commitments under the Trump administration remains unclear. The commitments were independent promises companies made to the Biden White House and the public about managing AI risks, meaning they shouldn't be affected by Trump's executive order rolling back Biden's AI policies. Several companies, including Nvidia, Inflection, and Scale AI, confirmed they're still adhering to the commitments post-election, according to FedScoop. Anthropic eventually restored the reference to its website but added a curious disclaimer: "Though these specific commitments are no longer formally maintained under the Trump administration, our organization continues to uphold all these principles." The White House did not respond to a request for clarification.

In another case, a commitment flagged as removed from Anthropic's website had simply been relocated to a different page. For Johnston, this highlights a broader issue with transparency in the industry: the companies, not journalists, should be clear about how and when their policies are changing.

The most consequential shift Johnston has documented is AI companies reversing their military stances. According to Johnston, OpenAI's reversal was particularly calculating: it was initially framed as helping prevent veteran suicide and supporting Pentagon cybersecurity. Critics were painted as heartless for questioning this work, but by November 2024, OpenAI was developing autonomous drones in what Johnston described as a classic foot-in-the-door strategy. Google followed suit earlier this year, revoking its own military restrictions.

"A lot of them are starting to really feel the global [AI] race dynamic," Johnston says. "They're like, 'Well, we have to do this because if we don't work with militaries, less scrupulous actors will.'"

The military pivot is just one example of how AI companies are reframing their ethical stances. OpenAI recently published a document outlining its philosophy on AI safety, claiming it has moved beyond the more cautious "staged deployment" approach it took with GPT-2 in 2019, when it initially withheld release citing safety concerns. "In a discontinuous world, practicing for the AGI moment is the only thing we can do, and safety lessons come from treating the systems of today with outsize caution relative to their apparent power. This is the approach we took for GPT-2," OpenAI wrote. But Miles Brundage, OpenAI's former head of policy research, publicly challenged this characterization, saying the company was rewriting the history of GPT-2 in a concerning way.
"OpenAI's release of GPT-2, which I was involved in, was 100% consistent with OpenAI's current philosophy of iterative deployment," Brundage wrote on X. "The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution." Brundage fears OpenAI is now setting up a framework in which "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them," a mentality he calls "very dangerous" for advanced AI systems.

The pattern of changes extends beyond the companies' own rules and policies. In February, Johnston's team launched "Seoul Tracker" to evaluate whether companies were honoring promises made at the 2024 AI Safety Summit in Seoul. The results were damning: many simply ignored the February deadline for adopting responsible scaling policies, while others implemented hollow versions that barely resembled what they'd promised. Using a letter-grade scoring system based on public evidence of implementation across five key commitment areas, the Seoul Tracker gave Anthropic the highest score, a B-, while companies including IBM, Inflection AI, and Mistral AI received failing grades of F for showing no public evidence that they had fulfilled their commitments.

"It's wild to me," Johnston says. "These were promises they made not just on some webpage, but to the governments of the United Kingdom and South Korea."

Perhaps what's most telling about the impact of Johnston's work is who's paying attention. While the Midas Project struggles to get 500 signatures on petitions asking AI companies to take security seriously, and its follower count is still relatively modest, those followers include plenty of the who's who of AI luminaries, watchdogs, and whistleblowers. A Trump White House advisor on AI recently followed the account, too. That got Johnston wondering whether government officials view these ethical reversals as progress rather than problems. "I'm so worried that he's following it like cheering it on," he says, seeing the changes as wins as these companies abandon their commitments.
People with a healthy limit on their screen time probably haven't noticed, but there's been a meme shortage this March. On TikTok, some have declared a full-blown "Meme Drought," dubbing it the Great Meme Depression of 2025.

The panic began on March 10, when user @goofangel posted a video titled "TikTok Great Depression March 2025." He says, "Nine days into March and we haven't had a single original meme." The post quickly racked up nearly a million views and clearly struck a chord, if the comments are any indication.

"October to February was an insane run," one commenter reminisced, recalling a time when everyone was holding space for "Defying Gravity," and who remembers when everyone collectively joined Red Note for a minute? "Does the millennial burger restaurant count?" another asked. "Subaru's kinda funny, but not laughing funny, yk?" someone else added. But as @goofangel pointed out, the "I Call Patrick Subaru" meme actually originated in 2021.

The Great Meme Depression soon became a meme itself, as TikTokers flooded the platform with meta-commentary. "How the Great March Meme Drought will be described in history books," one user posted, alongside a slideshow of images from the Great Depression circa 1929. Another creator shared a video featuring TikTok influencers' faces captioned: "When mfs say they grew up poor but never had to live through the Great Meme Depression."

Others joked about the surreal nature of it all. "How it feels to realize The Great Meme Drought of March is actually a meme itself," one added.

With the trend cycle running faster than ever, meme culture may simply be unable to keep pace. The insatiable demand for viral content has left us trapped in an algorithmic loop, now recycling the same tired material we've already scrolled past. Rather than forcing it, maybe this temporary drought is a chance to pause. Set some limits on screen time, and actually stick to them. Read a book or finally watch Severance. At least until the next viral moment comes along.
Caroline Fleck, PhD, is a licensed psychologist, corporate consultant, and adjunct clinical instructor at Stanford University. She received a BA in psychology and English from the University of Michigan and an MA and PhD from the Department of Psychology and Neuroscience at Duke. Fleck has served as a supervisor and consultant for some of the most rigorous clinical training programs in the country and has been featured in national media outlets, including The New York Times, Good Morning America, and HuffPost. In her private practice, Fleck specializes in dialectical behavior therapy (DBT) and other cognitive behavioral treatments for mood, anxiety, and personality disorders. Fleck's corporate work focuses on strengthening company cultures and individual performance. She implements custom training programs for Fortune 500 companies and provides executive coaching to industry leaders worldwide.

What's the big idea?

The secret to influencing others isn't about persuasion; it's about validation. In Validation: How the Skill Set That Revolutionized Psychology Will Transform Your Relationships, Increase Your Influence, and Change Your Life, Fleck reveals how acknowledging and accepting others' experiences can strengthen relationships, defuse conflicts, and even increase self-compassion. Through captivating stories and actionable techniques, she introduces eight powerful skills to harness validation's transformative impact. Validation uncovers how truly seeing and being seen is the key to lasting change.

Below, Fleck shares five key insights from her new book. Listen to the audio version, read by Fleck herself, in the Next Big Idea App.

1. Validation is not what you think it is.

My technical definition of validation is that it communicates mindfulness, understanding, and empathy in ways that convey acceptance. If I were to translate that into a mantra, it would be, "Validation shows that you're there, you get it, and you care."

Validation is not praise: Praise is a judgment. It says, "I like the way you look or perform." Validation demonstrates acceptance. It says, "I accept who you are, independent of how you look or perform." When people claim that we shouldn't rely on external validation, they are confusing validation with praise.

Validation is not problem-solving: Problem-solving focuses on changing someone's reaction by suggesting solutions to their problem, e.g., "I know you didn't do well on that spelling test; why don't we try reviewing your words on the way to school next time?" Validation, on the other hand, focuses on acknowledging the situation and the validity of someone's response to it: "You studied so hard; I can understand why you are upset."

Validation is not agreement: I can validate why someone would have concerns about protecting an unborn fetus, even if I am pro-choice. If the idea of validating an opinion you disagree with makes you nervous, rest assured that validating another person's perspective does not necessarily function to reinforce it. On the contrary, people tend to get entrenched in their views when they feel like they have to defend their own position or attack yours. A validating response from you leaves nothing to attack, much less anything to defend against.

So again, validation shows that you're there, you get it, and you care. It is not praise, problem-solving, or agreement.

2. Validation is like MDMA for your relationships.

Validation improves relationships by transforming how they feel, increasing trust, intimacy, and psychological safety.
Research has consistently shown validation to be among the strongest predictors of relational outcomes, ranging from commitment to quality, across various types of relationships. This is really important given the effect relationships have on our health and life expectancy. Having poor social relationships is associated with the same death rate as smoking 15 cigarettes a day. Data show that the quality of a person's relationships can increase their probability of surviving by 50%.

Importantly, validation is critical to all our relationships, including the one we have with ourselves. Knowing how to validate your own emotions is essential to developing self-compassion and improving how you relate to yourself. I have many more tips on how to cultivate self-validation in the book.

Validation is also particularly helpful in the context of conflict. It's basically like adding an adorable cat filter to yourself during a videoconferencing meeting: it makes you immediately less threatening and infinitely harder to argue with. Why? The answer appears to be in how it affects the validated person's physiology. As someone becomes more upset, their ability to reason, recall, and focus sharply decreases. Their sympathetic nervous system takes over, reducing their response options to fight, flight, or freeze. Validation tempers this response: it reduces sympathetic arousal and enhances a person's ability to reason and engage in perspective-taking. Validating individuals in highly stressful situations has been shown to lower their heart rate, galvanic skin response (sweating), and negative emotions. Unsurprisingly, invalidation has demonstrated the opposite effect, increasing distress and conflict.

3. Research suggests that validation is a catalyst for change.

I made this point earlier when discussing how validation is used in DBT. However, neuroimaging research can help us understand what's happening here. The question of whether validation can drive people to change their behavior hinges on the degree to which it is perceived as rewarding. Anything that is rewarding has the potential to serve as positive reinforcement: a reward given after a behavior that increases the likelihood that the behavior will be repeated. For instance, if a dog that has been rewarded with a treat for sitting on command is more likely to sit on command in the future, we know that the treat functioned to positively reinforce her behavior. Positive reinforcement activates the reward center of our brain, releasing neurotransmitters like dopamine that create feelings of pleasure. Opioids, orgasms, and cash giveaways all produce this effect. Neuroimaging studies have demonstrated that feeling understood stimulates these same reward centers, as well as areas linked to social connectedness. Returning to our question of whether validation is enjoyable enough to prompt behavioral change, the answer is a resounding yes.

4. Validation is a skill set anyone can master.

Therapists are trained in specific skills to help them reliably and authentically communicate validation. In Validation, I describe how I've adapted these therapist skills so they can be used by anyone in any relationship. The model I developed is called the Validation Ladder. It includes three subsets of skills that map onto each of the three main qualities of validation: you've got two skills for conveying mindfulness, three for understanding, and three for empathy.
Validation only works if it's authentic, so if you don't understand or empathize with someone's experience, the Mindfulness skills might be all you can use. An example of a Mindfulness skill is Attending, which requires you to focus on answering this two-part question: 1) What's a better way to make this person's point? 2) Why does it matter to them? You don't need to communicate your insights. As a mindfulness skill, these questions are designed to inform how you listen. By focusing on these questions, you're more likely to signal engagement and naturally ask more targeted questions, rather than concentrating on your rebuttal or allowing your mind to wander.

To apply Understanding skills, you need to genuinely see the logic in someone's response. An example of an Understanding skill is Equalizing, or normalizing. If you can imagine that you would react similarly to whatever the other person is experiencing, you simply communicate that. For instance, you might say, "Anyone in your shoes would want a second opinion" or "I would have done the same thing." By indicating that someone's reaction is consistent with what you would think, feel, or do in that situation, you convey that it's understandable.

Finally, the Empathy skills are the most validating, as they convey mindfulness, understanding, and empathy in one fell swoop. An example of an Empathy skill is Emoting. You might tear up if someone is relaying a sad story or jump up and down when they share good news. Emoting allows you to enter into the other person's experience, not as a spectator but as an active participant.

When I first learned validation skills as a therapist, I wasn't blown away by their novelty. Many of the skills in the Validation Ladder will be things you've heard of or practiced before. Their transformative power only becomes apparent once you've honed your ability to know when to use them. Validation is much like baking; the steps involved seem deceptively straightforward, but if a novice and a master baker follow the exact same recipe, the outcome will be noticeably different. Timing, technique, and understanding how to pivot when needed: these minor adjustments determine whether or not someone will appreciate or be reinforced by the treat you provide them.

5. Find the kernel of truth.

You should only validate a person's experience to the extent that you actually consider it to be valid. The aim is to find the kernel of truth in someone's experience and validate that. Generally speaking, a person's experience is composed of their thoughts, emotions, and behavior. Psychologists consider thoughts to be valid if they are logical or reasonable based on the facts of a situation. Behaviors are considered valid if they are effective given one's long-term goals. As for emotions, well, you can presume that emotions are always valid. Trust me, you don't want to get in the business of arguing with people about how they feel.

A person's behavior and emotions may be valid even if the thoughts that gave rise to them are not, and vice versa. For example, if someone believes there is an imminent threat of an alien invasion, they would understandably feel anxious and fearful. Anxiety and fear are reasonable reactions to an impending danger. It also makes sense that this individual would vote for a politician with a plan to address the alien invasion. Their thoughts in this scenario are invalid, as they are based on misinformation, but their emotions and behavior are understandable given the misinformation they are operating under.
Recognizing the valid doesn't mean you can't work on changing what's invalid or problematic. On the contrary, if the last 30 years have taught us anything, it's that people are much more receptive to collaborating, receiving feedback, and even changing when they feel seen in their experience.

This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.