In recent weeks, OpenAI has faced seven lawsuits alleging that ChatGPT contributed to suicides or mental health breakdowns. In a recent conversation at the Innovation@Brown Showcase, Brown University’s Ellie Pavlick, director of a new institute dedicated to exploring AI and mental health, and Soraya Darabi of VC firm TMV, an early investor in mental health AI startups, discussed the controversial relationship between AI and mental health. Pavlick and Darabi weigh the pros and cons of applying AI to emotional well-being, from chatbot therapy to AI friends and romantic partners. This is an abridged transcript of an interview from Rapid Response, hosted by former Fast Company editor-in-chief Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

A recent study showed that one of the major uses of ChatGPT is mental health, which makes a lot of people uneasy. Ellie, I want to start with you and the new institute that you direct, known as ARIA, which stands for AI Research Institute on Interaction for AI Assistance. It’s a consortium of experts from a bunch of universities backed by $20 million in National Science Foundation funding. So what is the goal of ARIA? What are you hoping it delivers? Why is it here?

Pavlick: Mental health is something that is very, I would say, I don’t even know if it’s polarizing. I think many people’s first reaction is negative, the concept of AI mental health. So as you can tell from the name, we didn’t actually start as a group that was trying to work on mental health. We were a group of researchers who were interested in the biggest, hardest problems with current AI technologies. What are the hardest things that people are trying to apply AI to that we don’t think the current technology is quite up for? And mental health came up, and it was actually originally taken off our list of things that we wanted to work on because it is so scary to think about how big the risks are if you get it wrong.

And then we came back to it exactly because of this. We basically realized that this is happening; people are already using it. There are companies, startups, some of them probably doing a great job, some of them not. The truth is we actually have a hard time even being able to differentiate those right now. And then there are a ton of people just going to chatbots and using them as therapists. And so we’re like, the worst thing that could happen is we don’t actually have good scientific leadership around this. How do we decide what this technology can and can’t do? How do we evaluate these kinds of things? How do we build it safely in a way that we can trust? There are questions like this. There’s a demand for answers, and the reality is most of them we just can’t answer right now. They depend on an understanding of the AI that we don’t yet have. An understanding of humans and mental health that we don’t yet have. A level of discourse that society isn’t up for. We don’t have the vocabulary, we don’t have the terms. There’s just a lot that we can’t do yet to make this happen the right way. So that’s what ARIA is trying to provide: this public sector, academic kind of voice to help lead this discussion.

That’s right. You’re not waiting for this data to come out or for the final word, whatever academia or this consortium might say.
You’re already investing in companies that do this. I know you’re an early-stage investor in Slingshot AI, which delivers mental health support via the app Ash. Is Ash the kind of service that Ellie and her group should be wary about? What were you thinking about when you decided to make this investment?

Darabi: Well, actually, I’m not hearing that Ellie’s wary. I think she’s being really pragmatic and realistic. In broad brushstrokes, zooming back and talking about the sobering facts and the scale of this problem: one billion out of eight billion people struggle with some sort of mental health issue. Fewer than 50% of people seek out treatment, and then the people who do find the cost to be prohibitive. That recent study you cited is probably the one from the Harvard Business Review, which came out in March of this year and studied use cases of ChatGPT. Its analysis showed that the number one, four, and seven use cases out of the top 10 for foundational models broadly are therapy or mental health related. I mean, we’re talking about something that touches half of the planet. If you’re looking at investing with an ethical lens, there’s no greater TAM [total addressable market] than people who have a mental health disorder of some sort.

We’ve known the Slingshot AI team, which is building the largest foundational model for psychology, for over a decade. We’ve followed their careers. We think exceptionally highly of the advisory board and panel they put together. But I think what really led us down the rabbit hole of caring deeply enough about mental health and AI to frankly start a fund dedicated to it, which we did in December of last year, was really going back to the fact that AI therapy is so stigmatized, and people hear it and they immediately jump to the wrong conclusions. They jump to the hyperbolic examples of suicide. And yes, it’s terrible. There have been incidents of deep codependence upon ChatGPT or otherwise whereby young people in particular are susceptible to very scary things, and yet those salacious headlines don’t represent the vast number of folks who we think will be well served by these technologies.

You said this phrase, we kind of stumbled on [these] uses for ChatGPT. It’s not what it was created for, and yet people love it for that.

Darabi: It makes me think about 20 years ago when everybody was freaking out about the fact that kids were on video games all day, and now because of that we have Khan Academy and Duolingo. Fearmongering is good, actually, because it creates a precedent for the guardrails that I think are absolutely necessary for us to safeguard our children from anything that could be disastrous. But at the same time, if we run in fear, we’re just repeating history, and it’s probably time to just embrace the snowball, which will become an avalanche in mere seconds. AI is going to be omnipresent. Everything that we see and touch will be in some way supercharged by AI. So if we’re not understanding it to our deepest capabilities, then we’re actually doing ourselves a great disservice.

Pavlick: To this point of, yeah, people are drawn to AI for this particular use case: on our team in ARIA, we have a lot of computer scientists who build AI systems, but actually a lot of our team does developmental psychology, core cognitive science, neuroscience. There are questions to ask: Why? The whys and the hows. What are people getting out of this? What need is it filling? I think this is a really important question to be asking soon.
I think you’re completely right. Fearmongering has a positive role to play. You don’t want to get too caught up in it, and you can point historically to examples where people freaked out and it turned out okay. There are also cases like social media, where maybe people didn’t freak out enough, and I would not say it turned out okay. People can agree to disagree, and there are pluses and minuses, but the point is that these are questions we’re now in a position to start asking. You can’t do things perfectly, but you can run studies. You can say, “What is the process that’s happening? What is it like when someone’s talking to a chatbot? Is it similar to talking to a human? What is missing there? Is this going to be okay long-term? What about young people who are doing this in core developmental stages? What about somebody who’s in a state of acute psychological distress, as opposed to using it as a general maintenance thing? What about somebody who’s struggling with substance abuse?” These are all different questions, and they’re going to have different answers. Again, I feel very strongly about the idea of one LLM that’s just one interface for everything: a lot is unknown, but I would bet that’s not going to be the final thing that we’re going to want.
If you've chosen a target asset allocation (the mix of stocks, bonds, and cash in your portfolio), you're probably ahead of many investors. But unless you're investing in a set-and-forget investment option like a target-date fund, your portfolio's asset mix will shift as the market fluctuates. In a bull market you might get more equity exposure than you planned, or the reverse if the market declines. Rebalancing involves selling assets that have appreciated the most and using the proceeds to shore up assets that have lagged. This brings your portfolio's asset mix back into balance and enforces the discipline of selling high/buying low. Rebalancing doesn't necessarily improve your portfolio's returns, especially if it means selling asset classes that continue to perform well. But it can be an essential way to keep your portfolio's risk profile from climbing too high.

Where and how to rebalance

If it's been a while since your last rebalance, your portfolio might be heavy on stocks and light on bonds. A portfolio that started at 60% stocks and 40% bonds 10 years ago could now hold more than 80% stocks. Another area to check is the mix of international versus U.S. stocks. International stocks have led in 2025, but that followed a long run of outperformance for U.S. stocks, so your portfolio might lack international exposure. (Keeping about a third of your equity exposure outside the U.S. is reasonable if you want to align with Morningstar's global market portfolio.) Other imbalances might exist. Growth stocks have gained nearly twice as much as value stocks over the past three years. You might also be overweight in specialized assets such as gold and bitcoin thanks to their recent run-ups.

After assessing your allocations, decide where to make adjustments. You don't need to rebalance every account; what matters is the overall portfolio's asset mix, which determines your risk and return profile. It's usually most tax-efficient to adjust within a tax-deferred account such as an IRA or 401(k), where trades won't trigger realized capital gains. For example, if you're overweight on U.S. stocks and light on international stocks, you could sell U.S. stocks and buy an international-stock fund in your 401(k). If you need to make changes in a taxable account, you can attempt to offset any realized capital gains by selling holdings with unrealized losses. That might be difficult, as the strong market environment has lifted nearly every type of asset over the past 12 months. Only a few Morningstar Categories (including India equity, real estate, consumer defensive, and health care) posted losses over the trailing 12-month period ended Oct. 30, 2025. The average long-term government-bond fund lost about 8% per year for the trailing five-year period as of the same date, so those could offer opportunities for harvesting losses.

Required minimum distributions can also be used in tandem with rebalancing. Account owners have flexibility in which assets to sell to meet RMDs. If you own several different traditional IRAs, you could take the full RMD amount from any of them. Selling off holdings that appreciated the most can bring the portfolio's asset mix back in line with your original targets. Another option is funneling new contributions into underweight asset classes. Depending on the size of additional investments, this approach might take time, but it's better than not rebalancing at all. This might also appeal if you've built up capital gains you don't want to realize.
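To make the arithmetic concrete, here is a minimal sketch in Python (not from Morningstar; the holdings, target weights, and function name are hypothetical) showing how the trades needed to bring a drifted portfolio back to its targets can be computed:

```python
# Minimal illustrative sketch: compute the trades needed to bring a drifted
# portfolio back to its target asset mix. All figures are hypothetical.

def rebalancing_trades(holdings: dict[str, float], targets: dict[str, float]) -> dict[str, float]:
    """Return the dollar amount to buy (+) or sell (-) for each asset class."""
    total = sum(holdings.values())
    return {asset: targets[asset] * total - holdings[asset] for asset in targets}

# A 60/40 portfolio that has drifted toward stocks after a long bull market,
# with roughly a third of equity exposure held outside the U.S.
holdings = {"us_stocks": 52_000, "intl_stocks": 28_000, "bonds": 20_000}  # $100,000 total
targets = {"us_stocks": 0.40, "intl_stocks": 0.20, "bonds": 0.40}         # desired weights

for asset, dollars in rebalancing_trades(holdings, targets).items():
    action = "buy" if dollars > 0 else "sell"
    print(f"{action} ${abs(dollars):,.0f} of {asset}")
# Expected output: sell $12,000 of us_stocks; sell $8,000 of intl_stocks; buy $20,000 of bonds
```

In practice, as noted above, you would try to place trades like these in a tax-deferred account first so they don't trigger realized capital gains.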
Final thoughts

Rebalancing is especially important in extremely volatile times. But even in a more gradual bull market like the one in recent years, it's important for keeping a portfolio's risk level in check, especially for investors as they approach retirement and start spending their portfolios.

___

This article was provided to The Associated Press by Morningstar. For more personal finance content, go to https://www.morningstar.com/personal-finance

Amy C. Arnott, CFA is a portfolio strategist for Morningstar.
The Trump administration left nursing off a list of “professional” degrees in a move that could directly limit how future nurses will finance their education. Removing the profession from the list will have a major impact following the passage of President Trump’s One Big Beautiful Bill Act, which introduced a cap on borrowing. As of July 1, 2026, students who are not enrolled in professional degree programs will be subject to a borrowing cap of $20,500 per year and a lifetime cap of $100,000. Professional degrees, however, offer higher loan limits: $50,000 per year and a $200,000 lifetime cap.

‘A backhanded slap’

Nursing is the largest healthcare profession in the United States, with about 4.5 million registered nurses. And given that most nurses (76%) rely on financial aid to pay for their education, the move has drawn immense backlash, as it’s being widely viewed as a slight against the profession. That’s especially true because nurses, who have a lengthy list of responsibilities, including providing frontline patient care, running lab work, assisting in procedures, and more, are often seen as one of the most essential pieces of the healthcare system.

Bassey Etim-Edet, a high-risk labor and delivery nurse in Baltimore who was on the front lines of care during the COVID pandemic, told Fast Company that the Trump administration’s move sets the wrong precedent and that the impact can’t be overstated. “To go from ‘healthcare hero’ to not being recognized as a professional is such a backhanded slap,” Etim-Edet says, “especially at a time when legal precedent has made it clear that nurses are as responsible for provider mistakes as the providers themselves.” “We are disrespected, underpaid, and under-resourced,” she added. “Still, we serve.”

Etim-Edet, who graduated with $150,000 in student loans, says her career wouldn’t have been possible without the HRSA Nurse Loan Repayment Program. “In exchange for working 2-3 years at a critical access hospital, the government paid back a massive percentage of my loans,” Etim-Edet explained. “At the end of my service commitment, my loan balance was down to about $60,000. I was able to buy a home, start a family, and live” because of the program.

Fever pitch

In response to the move, the American Nurses Association (ANA) launched a petition aimed at fighting the lower classification. It warned, “This move stems from an effort to rein in student loan debt and tuition costs as part of the One Big Beautiful Bill Act; however, it means that postbaccalaureate nursing students would only be eligible for half the amount of federal loans as graduate medical students.” The petition continued, “We call on the Department of Education to revise the proposed definition of ‘professional degrees’ to explicitly include nursing.”

Amid the backlash, the Department of Education called concerns around the move “fear-mongering” by “certain progressive voices” in a lengthy statement released on Monday, November 24. “The definition of a ‘professional degree’ is an internal definition used by the Department to distinguish among programs that qualify for higher loan limits, not a value judgement about the importance of programs,” the statement reads.
“It has no bearing on whether a program is professional in nature or not.” It also noted that “95% of nursing students borrow below the annual loan limit and therefore are not affected by the new caps.” A spokesperson for the Department of Education referred Fast Company to the statement when reached for additional comment.

Still, nurses seem to disagree. “At a time when healthcare in our country faces a historic nurse shortage and rising demands, limiting nurses’ access to funding for graduate education threatens the very foundation of patient care,” Jennifer Mensik Kennedy, president of the American Nurses Association, said in a statement. “In many communities across the country, particularly in rural and underserved areas, advanced practice registered nurses ensure access to essential, high-quality care that would otherwise be unavailable.”

The Trump administration’s move comes as the nationwide nursing shortage is expected to continue to worsen. Etim-Edet adds that, as the system is already collapsing, younger people who greatly value work-life balance won’t want to work in a career that isn’t financially accessible or good for their emotional health.
The line between human and machine authorship is blurring, particularly as it's become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence. As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it's affecting culture, I've thought a lot about what this technology can do and where it falls short. If you're more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn't all or nothing

Thinking about these questions reminded me of Umberto Eco's essay "Apocalyptic and Integrated," which was originally written in the early 1960s. Parts of it were later included in an anthology titled Apocalypse Postponed, which I first read as a college student in Italy. In it, Eco draws a contrast between two attitudes toward mass media. There are the apocalyptics, who fear cultural degradation and moral collapse. Then there are the integrated, who champion new media technologies as a democratizing force for culture. Back then, Eco was writing about the proliferation of TV and radio. Today, you'll often see similar reactions to AI. Yet Eco argued that both positions were too extreme. It isn't helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape, and sometimes reinforce, power structures.

While I was teaching a course on deepfakes during the 2024 election, Eco's lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent deepfake apocalypse. Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home? Those fears weren't groundless: Research shows that people aren't especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so. In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there's no evidence that they affected the final outcome of the election.

Listicles, news updates, and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists. For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning? It's important to clarify what's meant by "online content," the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements.
A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews, and product explainers. The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business. A whole industry of writers, mostly freelance and including many translators, has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text, and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter? Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them, and then reshape that output into the final text. This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times, the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It's worth noting that even before AI, human writing has never been entirely human, either. Every technology, from parchment and stylus to paper, the typewriter, and now AI, has shaped how people write and how readers make sense of it. Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human-AI coproduced text. This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.

Human voices may matter even more

But what happens when people become overly reliant on AI in their writing? Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western, and especially English-speaking, norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism. In this context, texts that display originality, voice, and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models.
If you set aside the more apocalyptic scenarios and assume that AI will continue to advance, perhaps at a slower pace than in the recent past, it's quite possible that thoughtful, original, human-generated writing will become even more valuable. Put another way: The work of writers, journalists, and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini is a lecturer in digital and data studies at Binghamton University, State University of New York. This article is republished from The Conversation under a Creative Commons license. Read the original article.
When a company with tens of thousands of software engineers found that uptake of a new AI-powered tool was lagging well below 50%, it wanted to know why. It turned out that the problem wasn't the technology itself. What was holding the company back was a mindset that saw AI use as akin to cheating. Those who used the tool were perceived as less skilled than their colleagues, even when their work output was identical. Not surprisingly, most of the engineers chose not to risk their reputations and carried on working in the traditional way.

These kinds of self-defeating attitudes aren't limited to one company; they are endemic across the business world. Organizations are being held back because they are importing negative ideas about AI from contexts where they make sense into corporate settings where they don't. The result is a toxic combination of stigma, unhelpful policies, and a fundamental misunderstanding of what actually matters in business. The path forward involves setting aside these confusions and embracing a simpler principle: Artificial intelligence should be treated like any other powerful business tool. This article shares what I have learned over the past six months while revising the AI use policies for my own companies, drawing on the research and insights of my internal working group (Paul Scade, Pranay Sanklecha, and Rian Hoque).

Confusing Contexts

In educational contexts, it is entirely appropriate to be suspicious about generative AI. School and college assessments exist for a specific purpose: to demonstrate that students have acquired the skills and the knowledge they are studying. Feeding a prompt into ChatGPT and then handing in the essay it generates undermines the reason for writing the essay in the first place. When it comes to artistic outputs, like works of fiction or paintings, there are legitimate philosophical debates about whether AI-generated work can ever possess creative authenticity and artistic value. And there are tough questions about where the line might lie when it comes to using AI tools for assistance. But issues like these are almost entirely irrelevant to business operations. In business, success is measured by results and results alone. Does your marketing copy persuade customers to buy? Yes or no? Does your report clarify complex issues for stakeholders? Does your presentation convince the board to approve your proposal? The only metrics that matter in these cases are accuracy, coherence, and effectiveness, not the content's origin story.
When we import the principles that govern legitimate AI use in other areas into our discussion of its use in business, we undermine our ability to take full advantage of this powerful technology.

The Disclosure Distraction

Public discussions about AI often focus on the dangers that follow from allowing generative AI outputs into public spaces. From the dead internet theory to arguments about whether it should be a legal requirement to label AI outputs on social media, policymakers and commentators are rightly concerned about malicious AI use infiltrating and undermining the public discourse. Concerns like these have made rules about disclosure of AI use central to many corporate AI use policies. But there's a problem here. While these discussions and concerns are perfectly legitimate when it comes to AI agents shaping debates around social and political issues, importing these suspicions into business contexts can be damaging.

Studies consistently show that disclosed AI use triggers negative bias within companies, even when that use is explicitly encouraged and when the output quality is identical to human-created content. The study mentioned at the start of this article found that internal reviewers assessed the same work output to be less competent when they were told that AI had been used in its production than when they were told it had not been, even when the AI tools in question were known to increase productivity and when their use was encouraged by the employer. Similarly, a meta-analysis of 13 experiments published this year identified a consistent loss of trust in those who disclose their AI use. Even respondents who felt positively about AI use themselves tended to feel higher distrust toward colleagues who used it.

This kind of irrational prejudice creates a chilling effect on the innovative use of AI within businesses. Disclosure mandates for the use of AI tools reflect organizational immaturity and fear-based policymaking. They treat AI as a kind of contagion and create stigma around a tool that should be as uncontroversial as using spell-check or design templates, or having the communications team prepare a statement for the CEO to sign off on. Companies that focus on disclosure are missing the forest for the trees. They have become so worried about the process that they're ignoring what actually matters: the quality of the output.

The Ownership Imperative

The solution to both context confusion and the distracting push for disclosure is simple: Treat AI like a perfectly normal, albeit powerful, technological tool, and insist that the humans who use it take full ownership of whatever they produce. This shift in mindset cuts through the confused thinking that plagues current AI policies. When you stop treating AI as something exotic that requires special labels and start treating it as you would any other business tool, the path forward becomes clear. You wouldn't disclose that you used Excel to create a budget or used PowerPoint to design a presentation. What matters isn't the tool; it is whether you stand behind the work. But here's the crucial part: Treating artificial intelligence as normal technology doesn't mean you can play fast and loose with it. Quite the opposite. Once we put aside concepts that are irrelevant in a business context, like creative authenticity and cheating, we are left with something more fundamental: accountability. When AI is just another tool in your tool kit, you own the output completely, whether you like it or not.
Every mistake, every inadequacy, every breach of the rules belongs to the human who sends the content out into the world. If the AI plagiarizes and you use that text, you've plagiarized. If the AI gets facts wrong and you share them, they're your factual errors. If the AI produces generic, weak, unconvincing language and you choose to use it, you've communicated poorly. No client, regulator, or stakeholder will accept "the AI did it" as an excuse. This reality demands rigorous verification, editing, and fact-checking as nonnegotiable components of the AI-use workflow.

A large consulting company recently learned this lesson when it submitted an error-ridden AI-generated report to the Australian government. The mistakes slipped through because humans in the chain of responsibility treated AI output as finished work rather than as raw material requiring human oversight and ownership. The firm couldn't shift blame to the tool; they owned the embarrassment, the reputational damage, and the client relationship fallout entirely. Taking ownership isn't just about accepting responsibility for errors. It is also about recognizing that once you have reviewed, edited, and approved AI-assisted work, it ceases to be AI output and becomes your human output, produced with AI assistance. This is the mature approach that moves us past disclosure theater and toward genuine accountability.

Making the Shift: Owning AI Use

Here are four steps your business can take to move from confusion about contexts to the clarity of an ownership mindset.

1. Replace disclosure requirements with ownership confirmation. Stop asking "Did you use AI?" and start requiring clear accountability statements: "I take full responsibility for this content and verify its accuracy." Every piece of work should have a human who explicitly stands behind it, regardless of how it was created.

2. Establish output-focused quality standards. Define success criteria that ignore creation method entirely: Is it accurate? Is it effective? Does it achieve its business objective? Create verification workflows and fact-checking protocols that apply equally to all content. When something fails these standards, the conversation should be about improving the output, not about which tools were used.

3. Normalize AI use through success stories, not policies. Share internal case studies of teams using AI to deliver exceptional results. Celebrate the business outcomes (faster delivery, higher quality, breakthrough insights) without dwelling on the methodology. Make AI proficiency a valued skill on par with Excel expertise or presentation design, not something requiring special permission or disclosure.

4. Train for ownership, not just usage. Develop training that goes beyond prompting techniques to focus on verification, fact-checking, and quality assessment. Teach employees to treat AI output as raw material that requires their expertise to shape and validate, not as finished work. Include modules on identifying AI hallucinations, verifying claims, and maintaining brand voice.

The companies that will thrive in the next year won't be those that unconsciously disincentivize the use of AI through the stigma of disclosure policies. They will be those that see AI for what it is: a powerful tool for achieving business results. While your competitors tie themselves in knots over process documentation and disclosure theater, you can leapfrog past them with a simple principle: Own your output, regardless of how you created it.
The question that will separate winners from losers isn’t “Did you use AI?” but “Is this excellent?” If you’re still asking the first question, you are already falling behind.