While Silicon Valley argues over bubbles, benchmarks, and who has the smartest model, Anthropic has been focused on solving problems that rarely generate hype but ultimately determine adoption: whether AI can be trusted to operate inside the world's most sensitive systems. Known for its safety-first posture and the Claude family of large language models (LLMs), Anthropic is placing its biggest strategic bets where AI optimism tends to collapse fastest: regulated industries. Rather than framing Claude as a consumer product, the company has positioned its models as core enterprise infrastructure: software expected to run for hours, sometimes days, inside healthcare systems, insurance platforms, and regulatory pipelines.

"Trust is what unlocks deployment at scale," Daniela Amodei, Anthropic cofounder and president, tells Fast Company in an exclusive interview. "In regulated industries, the question isn't just which model is smartest; it's which model you can actually rely on, and whether the company behind it will be a responsible long-term partner."

That philosophy took concrete form on January 11, when Anthropic launched Claude for Healthcare and Life Sciences. The release expanded earlier life sciences tools designed for clinical trials, adding support for such requirements as HIPAA-ready infrastructure and human-in-the-loop escalation, making its models better suited to regulated workflows involving protected health information.

"We go where the work is hard and the stakes are real," Amodei says. "What excites us is augmenting expertise: a clinician thinking through a difficult case, a researcher stress-testing a hypothesis. Those are moments where a thoughtful AI partner can genuinely accelerate the work. But that only works if the model understands nuance, not just pattern matches on surface-level inputs."

That same thinking carried into Cowork, a new agentic AI capability released by Anthropic on January 12. Designed for general knowledge workers and usable without coding expertise, Claude Cowork can autonomously perform multistep tasks on a user's computer: organizing files, generating expense reports from receipt images, or drafting documents from scattered notes. According to reports, the launch unintentionally intensified market and investor anxiety around the durability of software-as-a-service businesses; many began questioning the resilience of recurring software revenue in a world where general-purpose AI agents can generate bespoke tools on demand.

Anthropic's most viral product, Claude Code, has amplified that unease. The agentic tool can help write, debug, and manage code faster using natural-language prompts, and it has made a substantial impact among engineers and hobbyists. Users report building everything from custom MRI viewers to automation systems entirely with Claude.

Over the past three years, the company's run-rate revenue has grown from $87 million at the end of 2023 to just under $1 billion by the end of 2024 and to $9 billion-plus by the end of 2025. "That growth reflects enterprises, startups, developers, and power users integrating Claude more deeply into how they actually work," Amodei says. "And we've done this with a fraction of the compute our competitors have."

Building for Trust in the Most Demanding Enterprise Environments

According to a mid-2025 report by venture capital firm Menlo Ventures, AI spending across healthcare reached $1.4 billion in 2025, nearly tripling the total from 2024.
The report also found that healthcare organizations are adopting AI 2.2 times faster than the broader economy. The largest spending categories include ambient clinical documentation, which accounted for $600 million, and coding and billing automation, at $450 million. The fastest-growing segments, however, reflect where operational pressure is most acute: patient engagement, where spending is up 20 times year over year, and prior authorization, which grew 10 times over the same period. Claude for Healthcare is being embedded directly into the latter's workflows, attempting to take on time-consuming and error-prone tasks such as claims review, care coordination, and regulatory documentation.

Claude for Life Sciences has followed a similar pattern. Anthropic has expanded integrations with Medidata, ClinicalTrials.gov, Benchling, and bioRxiv, enabling Claude to operate inside clinical trial management and scientific literature synthesis. The company has also introduced agent skills for protocol drafting, bioinformatics pipelines, and regulatory gap analysis. Customers include Novo Nordisk, Banner Health, Sanofi, Stanford Healthcare, and Eli Lilly. According to Anthropic, more than 85% of the 22,000 providers at Banner Health reported working faster with higher accuracy using Claude-assisted workflows. Anthropic also reports that internal teams at Novo Nordisk have reduced clinical documentation timelines from more than 12 weeks to just minutes.

Amodei adds that what surprised her most was how quickly practitioners defined their relationship with the company's AI models on their own terms. "They're not handing decisions off to Claude," she says. "They're pulling it into their workflow in really specific ways: synthesizing literature, drafting patient communications, pressure-testing their reasoning, and then applying their own judgment. That's exactly the kind of collaboration we hoped for. But honestly, they got there faster than I expected."

Industry experts say the appeal extends beyond raw performance. Anthropic's deliberate emphasis on trust, restraint, and long-horizon reliability is emerging as a genuine competitive moat in regulated enterprise sectors. "This approach aligns with bounded autonomy and sandboxed execution, which are essential for safe adoption where raw speed often introduces unacceptable risk," says Cobus Greyling, chief evangelist at Kore.ai, a vendor of enterprise AI platforms. He adds that Anthropic's universal agent concept introduced a third architectural model for AI agents, expanding how autonomy can be safely deployed.

Other AI competitors are also moving aggressively into the healthcare sector, though with different priorities. OpenAI debuted its healthcare offering, ChatGPT Health, in January 2026. The product is aimed primarily at broad consumer and primary care use cases such as symptom triage and health navigation outside clinic hours. It benefits from massive consumer-scale adoption, handling more than 230 million health-related queries globally each week. While ChatGPT Health has proven effective in generalist tasks such as documentation support and patient engagement, Claude is gaining traction in more specialized domains that demand structured reasoning and regulatory rigor, including drug discovery and clinical trial design.

Greyling cautions, however, that slow procurement cycles, entrenched organizational politics, and rigid compliance requirements can delay AI adoption across healthcare, life sciences, and insurance.
"Even with strong technical performance in models like Claude 4.5, enterprise reality demands extensive validation, custom integrations, and risk-averse stakeholders," he says. "The strategy could stall if deployment timelines stretch beyond economic justification or if cost and latency concerns outweigh reliability gains in production."

In January, Travelers announced it would deploy Claude AI assistants and Claude Code to nearly 10,000 engineers, analysts, and product owners, one of the largest enterprise AI rollouts in insurance to date. Each assistant is personalized to employee roles and connected to internal data and tools in real time. Likewise, Snowflake committed $200 million to joint development. Salesforce integrated Claude into regulated-industry workflows, while Accenture expanded multiyear agreements to scale enterprise deployments.

AI Bubble or Inflection Point?

Skeptics argue that today's agent hype resembles past automation cycles: big promises followed by slow institutional uptake. If valuations reflect speculation rather than substance, regulated industries should expose weaknesses quickly, and Anthropic appears willing to accept that test. Its capital posture reflects confidence: a $13 billion Series F at a $183 billion valuation in 2025, followed by reports of a significantly larger round under discussion.

Anthropic is betting that the AI race will ultimately favor those who design for trust and responsibility first. "We built a company where research, product, and policy are integrated; the people building our models work deeply with the people studying how to make them safer. That lets us move fast without cutting corners," Amodei says. "Countless industries are putting Claude at the center of their most critical work. That trust doesn't happen unless you've earned it."
At the Consumer Electronics Show in early January, Razer made waves by unveiling a small jar containing a holographic anime bot designed to accompany gamers not just during gameplay, but in daily life. The lava-lamp-turned-girlfriend is undeniably bizarre, but Razer's vision of constant, sometimes sexualized companionship is hardly an outlier in the AI market. Mustafa Suleyman, Microsoft's AI CEO, who has long emphasized the distinction between AI with personality and AI with personhood, now suggests that AI companions will "live life alongside you," an ever-present friend helping you navigate life's biggest challenges.

Others have gone further. Last year, a leaked Meta memo revealed just how distorted the company's moral compass had become in the realm of simulated connection. The document detailed what chatbots could and couldn't say to children, deeming acceptable messages that included explicit sexual advances: "I'll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss." (Meta is currently being sued, along with TikTok and YouTube, over alleged harms to children caused by its apps. On January 17, the company stated on its blog that it will halt teen access to AI chatbot characters.)

Coming from a sector that once promised to build a more interconnected world, Silicon Valley now appears to have lost the plot, deploying human-like AI that risks unraveling the very social fabric it once claimed to strengthen. Research already shows that in our supposedly connected world, social media platforms often leave us feeling more isolated and less well, not more. Layering AI companions onto that fragile foundation risks compounding what former Surgeon General Vivek Murthy called a public health crisis of loneliness and disconnection.

But Meta isn't alone in this market. AI companions and productivity tools are reshaping human connection as we know it. Today more than half of teens engage with synthetic companions regularly, and a quarter believe AI companions could replace real-life romance. It's not just friends and lovers getting replaced: 64% of professionals who use AI frequently say they trust AI more than their coworkers.

These shifts bear all the hallmarks of the late Harvard Business School professor Clayton Christensen's theory of disruptive innovation. Disruptive innovation is a theory of competitive response. Disruptive innovations enter at the bottom of markets with cheaper products that aren't as good as prevailing solutions. They serve nonconsumers or those who can't afford existing solutions, as well as those who are overserved by existing offerings. When they do this, incumbents are likely to ignore them, at first. Because disruption theory is predictive, not reactive, it can help us see around corners. That's why the Christensen Institute is uniquely positioned to diagnose these threats early and to chart solutions before it's too late.

Christensen's timeless theory has helped founders build world-changing companies. But today, as AI blurs the line between technical and human capabilities, disruption is no longer just a market force; it's a social and psychological one. Unlike many of the market evolutions that Christensen chronicled, AI companions risk hollowing out the very foundations of human well-being. Yet AI is not inherently disruptive; it's the business model and market entry points that firms pursue that define the technology's impact.
All disruptive innovations have a few things in common: They start at the bottom of the market, serving nonconsumers or overserved customers with affordable and convenient offerings. Over time, they improve, luring more and more demanding customers away from industry leaders with a cheaper and good-enough product or service. Historically, these innovations have democratized access to products and services otherwise out of reach. Personal computers brought computing power to the masses. Minute Clinic offered more accessible, on-demand care. Toyota boosted car ownership. Some companies lost, but consumers generally won.

When it comes to human connection, AI companies are flipping that script. Nonconsumers aren't people who can't afford computers, cars, or care; they're the millions of lonely individuals seeking connection. Improvements that make AI appear more empathetic, emotionally savvy, and there for users stand to quietly shrink connections, degrading trust and well-being.

It doesn't help that human connection is ripe for disruption. Loneliness is rampant, and isolation persists at an alarmingly high rate. We've traded face-to-face connections for convenience and migrated many of our social interactions with both loved ones and distant ties online. AI companions fit seamlessly into those digital social circles and are, therefore, primed to disrupt relationships at scale.

The impact of this disruption will be widely felt across many domains where relationships are foundational to thriving. Being lonely is as bad for our health as smoking up to 15 cigarettes a day. An estimated half of jobs come through personal connections. Disaster-related deaths in connected communities are a fraction (sometimes even a tenth) of those in isolated ones.

What can be done when our relationships, and the benefits they provide us, are under attack? Unlike data that tells us only what's in the rearview mirror, disruption offers foresight about the trajectory innovations are likely to take, and the unintended consequences they may unleash. We don't need to wait for evidence on how AI companions will reshape our relationships; instead, we can use our existing knowledge of disruption to anticipate risks and intervene early.

Action doesn't mean halting innovation. It means steering it with a moral compass to guide our innovation trajectory: one that orients investments, ingenuity, and consumer behavior toward a more connected, opportunity-rich, and healthy society. For Big Tech, this is a call for a bulwark: an army of investors and entrepreneurs enlisting this new technology to solve society's most pressing challenges, rather than deepening existing ones.

For those building gen AI companies, there's a moral tightrope to walk. It's worth asking whether the innovations you're pursuing today are going to create the future you want to live in. Are the benefits you're creating sustainable beyond short-term growth or engagement metrics? Does your innovation strengthen or undermine trust in vital social and civic institutions, or even individuals? And just because you can disrupt human relationships, should you?

Consumers have a moral responsibility as well, and it starts with awareness. As a society, we need to be aware of how market and cultural forces are shaping which products scale, and how our behaviors are being shaped as a result, especially when it comes to the ways we interact with one another.

Regulators have a role in shaping both supply and demand.
We don't need to inhibit AI innovation, but we do need to double down on prosocial policies. That means curbing the most addictive tools and mitigating risks to children, but also investing in drivers of well-being, such as social connections that improve health outcomes.

By understanding the acute threats AI poses to human connection, we can halt disruption in its tracks, not by abandoning AI but by embracing one another. We can congregate with fellow humans and advocate for policies that support prosocial connection in our neighborhoods, schools, and online. By connecting, advocating, and legislating for a more human-centered future, we have the power to change how this story unfolds.

Disruptive innovation can expand access and prosperity without sacrificing our humanity. But that requires intentional design. And if both sides of the market don't acknowledge what's at risk, the future of humanity is at stake. That might sound alarmist, but that's the thing about disruption: It starts at the fringes of the market, causing incumbents to downplay its potential. Only years later do industry leaders wake up to the fact that they've been displaced. What they initially thought was too fringe to matter puts them out of business.

Right now, humans, and our connections with one another, are the industry leaders. AI that can emulate presence, empathy, and attachment is the potential disruptor. In this world where disruption is inevitable, the question isn't whether AI will reshape our lives. It's whether we will summon the foresight, and the moral compass, to ensure it doesn't disrupt our humanity.
Generative AI was trained on centuries of art and writing produced by humans. But scientists and critics have wondered what would happen once AI became widely adopted and started training on its own outputs. A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger ström, and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously, generating and interpreting their own outputs without human intervention. The researchers linked a text-to-image system with an image-to-text system and let them iterate (image, caption, image, caption) over and over and over. Regardless of how diverse the starting prompts were, and regardless of how much randomness the systems were allowed, the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings, and pastoral landscapes. Even more striking, the system quickly forgot its starting prompt. The researchers called the outcomes "visual elevator music": pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, "The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action." The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image. After repeating this loop, the researchers ended up with a bland image of a formal interior space: no people, no drama, no real sense of time and place.
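The closed loop at the heart of the experiment is straightforward to reproduce with off-the-shelf open models. The following is a minimal sketch, not the study's own code: the specific models named here and the 20-step cutoff are illustrative assumptions, and any open text-to-image and image-captioning pair would do.

```python
# A minimal sketch of the caption-image loop described above; this is not
# the study's actual implementation, and model choices are assumptions.
from diffusers import StableDiffusionPipeline  # pip install diffusers
from transformers import pipeline              # pip install transformers

text_to_image = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
)
image_to_text = pipeline(
    "image-to-text", model="Salesforce/blip-image-captioning-base"
)

caption = ("The Prime Minister pored over strategy documents, trying to "
           "sell the public on a fragile peace deal.")
for step in range(20):                        # image, caption, image, caption...
    image = text_to_image(caption).images[0]  # text -> image
    caption = image_to_text(image)[0]["generated_text"]  # image -> text
    print(step, caption)  # captions typically drift toward the generic
```

Nothing in this loop retrains either model or adds new data; any drift in the printed captions comes purely from repeated translation between the two media, which is exactly the effect the study measures.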
As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation. The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don't ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use. But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered, and regenerated as it moves between words, images, and videos. New articles on the web are now more likely to be written by AI than by humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch. The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiters of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins. The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce, when used autonomously and repeatedly, is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable, and the conventional. Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression. But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate, and rank cultural products (news stories, songs, memes, academic papers, photographs, or social media posts) millions of times per day, guided by the same built-in assumptions about what is typical.

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design, or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures, and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk, not a speculative fear, if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space. In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without them, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence. This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what's typical rather than what's unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it's being performed by a human or a machine. In that sense, the convergence that took place is not a failure that's unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist. But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.
The implication is sobering: Even with human guidance, whether that means writing prompts, selecting outputs, or refining results, these systems are still stripping away some details and amplifying others in ways that are oriented toward what's average.

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common, less mainstream forms of expression. The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content. Cultural stagnation is no longer speculation. It's already happening.

Ahmed Elgammal is a professor of computer science and director of the Art & AI Lab at Rutgers University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Many people spend an incredible amount of time worrying about how to be more successful in life. But what if that's the wrong question? What if the real struggle for lots of us isn't how to be successful, but how to actually feel successful?

That's the issue lots of strivers truly face, according to ex-Googler turned neuroscientist and author Anne-Laure Le Cunff. In her book Tiny Experiments, she explores how to get off the treadmill of constantly chasing the next milestone, and instead find joy in the process of growth and uncertainty. You're probably doing better than you give yourself credit for, she explained on LinkedIn recently, before offering 10 telltale signs that what you need isn't to achieve more but to recognize your achievements more.

Are you suffering from success dysmorphia?

Before we get to those signs, let me try to convince you that you're probably being way too hard on yourself about how well you're doing in life. Start by considering the concept of dysmorphia. You've probably heard the term in relation to eating disorders. In that context, dysmorphia is when you have a distorted picture of your body. You see a much larger person in the mirror than the rest of the world sees when they look at you.

But dysmorphia doesn't just occur in relation to appearance. One recent poll found that 29% of Americans (and more than 40% of young people) experience money dysmorphia. That is, even though they're doing objectively okay financially, they constantly feel as if they're falling behind. Financial experts agree that thanks to a firehose of unrealistic images and often dubious money advice online, it's increasingly common for people to have a distorted sense of how well they're actually doing when it comes to money.

Or take the idea of productivity dysmorphia, popularized by author Anna Codrea-Rado. In a widely shared essay, she outed herself as a sufferer, revealing that despite working frantically and fruitfully, she never feels that she's done enough. "When I write down everything I've done since the beginning of the pandemic (pitched and published a book, launched a media awards, hosted two podcasts) I feel overwhelmed. The only thing more overwhelming is that I feel like I've done nothing at all," she wrote back in 2021. Which means she did all that in just over a year and still feels inadequate. That's crazy.

But it's not uncommon to drive ourselves so relentlessly. In Harvard Business Review, Jennifer Moss, author of The Burnout Epidemic, cites a Slack report showing that half of all desk workers say they rarely or never take breaks during the workday. She calls this kind of "toxic productivity" a common sentiment in today's work culture.

10 signs of success

Altogether, this evidence paints a picture of a nation that is pretty terrible at gauging and celebrating success. The roots of the issue obviously run deep in our culture and economy. Reorienting our collective life to help us all recognize that there is such a thing as enough is beyond the scope of this column. But in the meantime, neuroscience can help you take a small step toward greater mental peace by reminding you that you're probably doing better than you sometimes feel you are. Especially, Le Cunff stresses, if you notice these signs of maturity, growth, and balance in your life:

1. You celebrate small wins.
2. You try again after failing.
3. You pause before reacting.
4. You take breaks without guilt.
5. You recover from setbacks faster.
6. You ask for help when you need it.
7. You're kind to yourself when you make mistakes.
8. You notice patterns instead of judging them.
9. You make decisions based on values, not pressure.
10. You're more curious than anxious about what's next.

A neuroscientist and a writer agree: Practice becoming

Writer Kurt Vonnegut once advised a young correspondent, "Practice any art, music, singing, dancing, acting, drawing, painting, sculpting, poetry, fiction, essays, reportage, no matter how well or badly, not to get money and fame, but to experience becoming, to find out what's inside you, to make your soul grow."

In other words, artists agree with neuroscientists. We're all works in progress. You're always going to be in the middle of becoming who you are. You may as well learn to appreciate yourself and the process along the way. We often feel like we need to reach just one more milestone before we can feel successful. But the time to celebrate isn't when you've arrived at success (none of us ever fully gets there); it's at every moment of growth and wisdom along the journey.

By Jessica Stillman

This article originally appeared in Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.
The Grammy Awards return February 1 at a pivotal moment for the music industry, one shaped by trending Latin artists, resurgent rock legends, and even charting AI acts. To unpack what will make this year's broadcast distinctive, Recording Academy CEO Harvey Mason Jr. shares how Grammy winners are chosen, and how music both reflects and influences the broader business marketplace.

This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Robert Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

This year's Grammy Awards come at an intriguing inflection point for the music business. I mean, the music business is always changing, but I was looking at your Album of the Year nominees, which feature a bunch of mega artists: Justin Bieber, Tyler, the Creator, Lady Gaga, Kendrick Lamar, Bad Bunny. How much do Grammy nominees reflect the marketplace?

The Grammy nominees are meant to reflect the marketplace, and that's our hope, but it really reflects the voters' will. And you don't know what's going to resonate with the voting body year over year. We have roughly 15,000 voting members. Those members are all professional music people, whether they're writers or arrangers or producers or artists. So they're the peers of the people that are being nominated. Sometimes they surprise you and they vote for something that I wasn't thinking of, and sometimes they are right down the middle. But the hope is that the nominations are a direct and unencumbered reflection of what the voters appreciate and want to vote for.

And in this sort of more fragmented media ecosystem . . . do the biggest artists have the same kind of cultural sway, or is the cultural impact more diffuse?

It's debatable. . . . I'm sure everyone has an opinion, but the big artists are always going to be impactful and important and shift the direction of music. And there's always going to be a new class of creators coming up.

KPop Demon Hunters [is] the animated band [from] this breakthrough film, the most-watched movie ever on Netflix. But the [soundtrack] album charted No. 1 on Billboard also. Did that surprise you? Are there any messages in that about music and where it's going in the future?

It didn't surprise me, because it was really, really good. And the message that it sends is you can come from anywhere, any country, any medium. You can come off a streaming platform, off a show, off of a garage studio. And if your music resonates, it's going to be successful. It's going to find an audience. And that's what's exciting to me right now about music: the diverse places where you're finding it being created and sourced from. And also, the accessibility to audiences. You don't have to record a record and then hopefully it gets mixed and mastered and hopefully somebody releases it and markets it the right way. You can make something and put it out. And if it creates excitement . . . people are going to love it and gravitate towards it.

One of the bands that ended up putting up big streaming numbers was the Velvet Sundown, an AI-based artist. I'm curious, is there going to be a point where AI acts have their own Grammy category? Are there any award restrictions on artists who use AI in their music now?
I know there was a lot of tumult about that with the Oscars last year with The Brutalist. AI is moving so darn fast. . . . Month to month it's doing new things and getting better and changing what it's doing. So we're just going to have to be very diligent and watch it and see what happens. My perspective is always going to be to protect the human creators, but I also have to acknowledge that AI is definitely a tool that's going to be used. People like me or others in the studios around the world are going to be figuring out, How can I use this to make some great music? So for now, AI does not disqualify you from being able to submit for a Grammy. There are certain things that you have to abide by and there are certain rules that you have to follow, but it does not disqualify you from entering.

You're a songwriter, you're a producer. Are you using AI in your own stuff?

I am. I'm fine to admit that I am using it as a creative tool. There are times when I might want to hear a different sound or some different instrumentation. . . . I'm not going to be the creator that ever relies on AI to create something from scratch, because what I love more than anything in the world is making music, being able to sit down at a piano and come up with something that represents my feelings, my emotions, what I'm going through in my life, my stories. So I don't think I'll ever be that person that just relies on a computer or software or platform to do that for me. But I do think, much like auto-tune, or like a drum machine, or like a synthesizer, there are things that can enhance what I'm trying to get from here out to here. And if those are things that come in that form, I think we're all going to be ultimately taking advantage of them. But we have to do it thoughtfully. We have to do it with guardrails. We have to do it respectfully. What is the music being trained on? Are there the right approvals? Are artists being remunerated properly? Those are all things that we have to make sure are in place.

So, let me ask you about Latin music. I know the Latin Recording Academy split off from the Recording Academy 20 years ago or so. Do you rethink that these days? Latin music is all over the mainstream charts, and plenty of acts are getting Grammy nominations. Should Latin music be separated out?

The history of it is a little different. We were representing Latin music on the main show, and the popularity of it demanded that we have more categories. In order to feature more categories and honor the full breadth of the different genres of Latin music, we created the Latin Grammy so they could have that spotlight. Currently, members of the Latin Academy are members of the U.S. Academy. So we've not set aside the Latin genres. We've not tried to separate them. We've only tried to highlight them and lift those genres up. As you know, in the U.S. show we feature Latin categories, we feature many Latin artists, and that will be the same this year, maybe more so, especially with the Bad Bunny success. So in no way does that try to separate the genres. And I think we'll see some more of that in the future as other genres and other regions continue to make their music even more globally known. It's not just about music that's made in one country, right? At least it shouldn't be. It should be about music everywhere in the world.

Instead of narrowing, you might have . . . additional or supplemental academies or projects so that you have that expertise in those new and growing areas across the globe?
Absolutely. We’re going to have to continue to expand our membership. In order for us to honor all the different music that’s being made now, which is more than ever and music coming from more places than ever, our membership has to be reflective of that. Just like, I don’t know what type of music you’re a fan of, but I wouldn’t ask you if you didn’t know everything about classical to go into the classical categories and say, “What did you think was the best composing?” [There are] so many categories you wouldn’t be able to evaluate other than say, “Oh, I recognize that name. Let me vote for that.” And that’s what we can’t have. We have to have people that know the genres. And you’re seeing K-pop, you’re seeing Afrobeats, you’re seeing Latin, you’re seeing growth in the Middle East, you’re seeing growth coming out of India. There are so many great artists and so many great records. And you’re hearing a blend of genres where you’re seeing Western artists interact or collaborate with artists from different parts of the world. That’s what’s happening. You can’t argue it. You can’t deny it. You can’t pretend that it’s not what’s going on.