You know the ancient proverb: "Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime." For leaders, first-generation AI tools are like giving employees fish. Agentic AI, on the other hand, teaches them how to fish. That is truly empowering, and that empowerment lifts the entire organization.

According to recent findings from McKinsey, nearly eight in ten companies report using gen AI, yet about the same number report no bottom-line impact. Agentic AI can help organizations achieve meaningful results. AI agents are highly capable assistants with the ability to execute tasks independently. Equipped with artificial intelligence that simulates human reasoning, they can recognize problems, remember past interactions, and proactively take steps to get things done, whether that means knocking out tedious manual tasks or helping to generate innovative solutions.

For CEOs juggling numerous responsibilities, agentic AI can be a powerful ally in simplifying decision-making and scaling impact. That's why I believe it belongs on every CEO's roadmap for 2026. As CEO of a SaaS company grounded in automation, I've made it a priority to incorporate agentic AI into our everyday workflows. Here are three ways you can put it to work in your organization.

1. Take the effort out of scheduling

Scheduling is one of the most basic functions of any organization, and one that can easily become a time and energy vacuum, which makes it perfect fodder for AI agents. And agents go well beyond your typical AI-powered scheduling tool. For starters, they're adaptable. AI agents can monitor incoming data and requests, proactively adjust schedules, and notify the relevant parties when issues arise. Let's say your team has a standing brainstorming session every Wednesday, and a new client reaches out to request an intro meeting at the same time. Your agent can automatically respond with alternative time slots. On the other hand, if a client needs to connect on a time-sensitive issue, your agent can elevate the request to a human employee to decide whether rescheduling makes sense.

You can also personalize AI agents based on your unique needs and priorities, including past interactions. If, for example, your agent learns that you religiously protect time for deep-focus work first thing in the morning, it won't keep proposing meetings then. By delegating scheduling tasks, everyone in the organization, from the CEO to interns, frees up time for higher-level priorities and more meaningful work. You can build your own agent, or get started with a ready-to-use scheduling assistant that offers agentic capabilities, like Reclaim.ai.
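To make that triage logic concrete, here is a minimal sketch in plain Python. Everything in it, the protected calendar blocks, the escalation rule, and the one-hour scan for alternative slots, is an illustrative assumption of mine, not the API of any real scheduling product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Request:
    client: str
    start: datetime
    duration: timedelta
    urgent: bool = False  # time-sensitive requests get escalated, not auto-declined

# Hypothetical protected blocks: deep-focus mornings and the standing
# Wednesday brainstorm the agent has learned to defend.
PROTECTED = [
    (datetime(2026, 1, 7, 8, 0), timedelta(hours=2)),   # deep-focus work
    (datetime(2026, 1, 7, 10, 0), timedelta(hours=1)),  # Wednesday brainstorm
]

def overlaps(start, duration, block_start, block_duration):
    """Standard interval-overlap test."""
    return start < block_start + block_duration and block_start < start + duration

def handle(req: Request) -> str:
    conflict = any(overlaps(req.start, req.duration, s, d) for s, d in PROTECTED)
    if not conflict:
        return f"Accepted: {req.client} at {req.start:%a %H:%M}"
    if req.urgent:
        # Don't decide unilaterally; a human judges whether to reshuffle.
        return f"Escalated: time-sensitive request from {req.client} needs a human call"
    # Scan forward in one-hour steps and offer the next two free slots.
    slots, candidate = [], req.start
    while len(slots) < 2:
        candidate += timedelta(hours=1)
        if not any(overlaps(candidate, req.duration, s, d) for s, d in PROTECTED):
            slots.append(f"{candidate:%a %H:%M}")
    return f"Declined; offered {req.client}: {', '.join(slots)}"

print(handle(Request("Acme", datetime(2026, 1, 7, 10, 0), timedelta(hours=1))))
print(handle(Request("Acme", datetime(2026, 1, 7, 10, 0), timedelta(hours=1), urgent=True)))
```

A production agent would read these blocks from your calendar and learn the protected times from your behavior, but the decision rule, offer alternatives automatically and escalate only what is time-sensitive, stays the same.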
2. Facilitate idea generation and innovation

When we talk about AI and creativity, the conversation often stirs anxiety about artificial intelligence replacing human creativity. But agentic AI can help spark ideas for engagement, leadership development, and strategic initiatives. The goal is to cultivate the conditions in which these initiatives can thrive, not to replace the actual brainstorming or strategic thinking.

For example, you can create an ideation-focused AI agent and train it on relevant organizational context: performance data, KPIs, meeting notes, employee engagement data, culture touch points, and more. Your agent can continuously gather new information and update its internal knowledge. When the time comes for a brainstorming or strategy session (which the agent can also proactively prompt), it can draw on this working organizational memory, plus any other resources it can access, and tap generative AI tools like ChatGPT or Gemini to generate themes, propose topics, and help guide the discussion. Meanwhile, leaders remain focused on evaluating ideas, decision-making, and execution.

3. Error-free progress updates and year-end recaps

While generative AI can be incredibly powerful, it remains largely reactive, not proactive. When it comes to tracking performance, team KPIs, and organizational progress, manual check-ins are still required. And as I've written before, manual tasks are subject to human error. Calendar alerts go unnoticed. Things slip through the cracks. Minor problems become big issues.

One solution is to design an AI agent that can autonomously monitor your organization's performance. Continuous, real-time oversight helps ensure that processes run smoothly and that issues are flagged as soon as they arise. For example, if your company sells workout gear and sees a post-New Year surge in fitness resolutions, and with it demand for a specific product, an agent can track sales patterns and alert the team to inventory shortages. An AI agent can also independently generate reports, including the year-end recaps that are critical for continued growth. Rather than waiting to be prompted by a human, it can do the work on its own and elevate only the issues that require human judgment.
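Here is a similarly minimal sketch of that monitoring loop, again in plain Python with made-up numbers: the sales feed, the stock levels, and the seven-day alert horizon are all illustrative assumptions. The point is the shape of the logic: the agent watches continuously and surfaces only what needs human judgment.

```python
from statistics import mean

# Hypothetical feeds; a real agent would poll the sales and inventory systems.
daily_sales = {"resistance-bands": [120, 135, 150, 310, 340, 365]}  # post-New Year surge
inventory = {"resistance-bands": 900}

def check_inventory(sku: str, lookback: int = 3, horizon_days: float = 7) -> str | None:
    """Return an alert if recent demand would exhaust stock within the horizon."""
    recent_daily = mean(daily_sales[sku][-lookback:])
    days_left = inventory[sku] / recent_daily
    if days_left < horizon_days:
        return f"ALERT {sku}: ~{days_left:.1f} days of stock at current demand"
    return None  # nothing worth a human's attention

def year_end_recap() -> str:
    """Draft a recap from data the agent already holds; a human reviews and edits."""
    lines = ["Year-end recap (draft):"]
    for sku, sales in daily_sales.items():
        lines.append(f"- {sku}: {sum(sales)} units sold, {inventory[sku]} units in stock")
    return "\n".join(lines)

for sku in daily_sales:
    if (alert := check_inventory(sku)):
        print(alert)  # e.g. "ALERT resistance-bands: ~2.7 days of stock at current demand"
print(year_end_recap())
```

Swap the dictionaries for live queries and run the check on a schedule, and you have the proactive, always-on check-in this section describes.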
Agents have the potential to create real value for organizations. Importantly, leaders have to rethink workflows so AI agents are meaningfully integrated, fully liberating employees from rote, manual tasks and freeing them to focus on more consequential, inspiring work like strategy and critical thinking. I've found this leaves employees more energized, and the benefits continue to compound.
TikTok agreed to settle a landmark social media addiction lawsuit just before the trial kicked off, the plaintiffs' attorneys confirmed. The social video platform was one of three companies, along with Meta's Instagram and Google's YouTube, facing claims that their platforms deliberately addict and harm children. A fourth company named in the lawsuit, Snapchat parent company Snap Inc., settled the case last week for an undisclosed sum. Details of the settlement with TikTok were not disclosed, and the company did not immediately respond to a request for comment.

At the core of the case is a 19-year-old identified only by the initials KGM, whose case could determine how thousands of other similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials, essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.

A lawyer for the plaintiff said in a statement Tuesday that TikTok remains a defendant in the other personal injury cases, and that the trial will proceed as scheduled against Meta and YouTube. Jury selection starts this week in the Los Angeles County Superior Court. It's the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms. The selection process is expected to take at least a few days, with 75 potential jurors questioned each day through at least Thursday.

KGM claims that her use of social media from an early age addicted her to the technology and exacerbated depression and suicidal thoughts. Importantly, the lawsuit claims that this was done through deliberate design choices made by companies that sought to make their platforms more addictive to children to boost profits. This argument, if successful, could sidestep the companies' First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms.

"Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue," the lawsuit says.

Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the trial, which will last six to eight weeks. Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

"Plaintiffs are not merely the collateral damage of Defendants' products," the lawsuit says. "They are the direct victims of the intentional product design choices made by each Defendant. They are the intended targets of the harmful features that pushed them into self-destructive feedback loops."

The tech companies dispute the claims that their products deliberately harm children, citing a bevy of safeguards they have added over the years and arguing that they are not liable for content posted on their sites by third parties.
"Recently, a number of lawsuits have attempted to place the blame for teen mental health struggles squarely on social media companies," Meta said in a recent blog post. "But this oversimplifies a serious issue. Clinicians and researchers find that mental health is a deeply complex and multifaceted issue, and trends regarding teens' well-being aren't clear-cut or universal. Narrowing the challenges faced by teens to a single factor ignores the scientific research and the many stressors impacting young people today, like academic pressure, school safety, socio-economic challenges and substance abuse."

A Meta spokesperson said in a statement Monday that the company strongly disagrees with the allegations outlined in the lawsuit and that it's "confident the evidence will show our longstanding commitment to supporting young people."

José Castañeda, a Google spokesperson, said Monday that the allegations against YouTube are "simply not true." In a statement, he said, "Providing young people with a safer, healthier experience has always been core to our work."

TikTok did not immediately respond to a request for comment Monday.

The case will be the first in a slew of cases beginning this year that seek to hold social media companies responsible for harming children's mental well-being. A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children. In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of those plaintiffs filed in federal court, but some sued in their respective states. TikTok also faces similar lawsuits in more than a dozen states.

By Kaitlyn Huamani and Barbara Ortutay, AP technology writers
The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels. But an edited, realistic image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.

Homeland Security Secretary Kristi Noem's account posted the original image from Levy Armstrong's arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis. However, the White House's use of artificial intelligence has troubled misinformation experts, who fear the spread of AI-generated or edited images erodes public perception of the truth and sows distrust.

In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with deputy communications director Kaelan Dorr writing on X that "the memes will continue." White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.

David Rand, a professor of information science at Cornell University, says calling the altered image a meme "certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media." He said the purpose of sharing the altered arrest image seems much more ambiguous than the cartoonish images the administration has shared in the past.

Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or edited imagery is just the latest tool the White House uses to engage the segment of Trump's base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm. "People who are terminally online will see it and instantly recognize it as a meme," he said. "Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it." All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House's social media team.

The creation and dissemination of altered images, especially when they are shared by credible sources, "crystallizes an idea of what's happening, instead of showing what is actually happening," said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.

"The government should be a place where you can trust the information, where you can say it's accurate, because they have a responsibility to do so," he said. "By sharing this kind of content, and creating this kind of content, it is eroding the trust, even though I'm always kind of skeptical of the term 'trust,' but the trust we should have in our federal government to give us accurate, verified information. It's a real loss, and it really worries me a lot."

Spikes said he already sees institutional crises around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.
Ramesh Srinivasan, a professor at UCLA and the host of the Utopias podcast, said many people are now questioning where they can turn for trustworthy information. "AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence," he said.

Srinivasan said the White House and other officials sharing AI-generated content not only invites everyday people to continue posting similar content but also grants permission to others in positions of credibility and power, like policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to algorithmically privilege extreme and conspiratorial content, which AI generation tools can create with ease, "we've got a big, big set of challenges on our hands."

An influx of AI-generated videos related to Immigration and Customs Enforcement actions, protests, and interactions with citizens has already been proliferating on social media. After Renee Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.

Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are "engagement farming," or looking to capitalize on clicks by generating content with popular keywords and search terms like ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as fan fiction, or engaging in wishful thinking, hoping that they're seeing real pushback against the organizations and their officers.

Still, Carrasco believes that most viewers can't tell if what they're watching is fake, and questions whether they would know "what's real or not when it actually matters, like when the stakes are a lot higher." Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the best-case scenario would a viewer be savvy enough, or paying close enough attention, to register the use of AI.

The issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.

Carrasco believes that the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn't think it will be extensively adopted for at least another year.

"It's going to be an issue forever now," he said. "I don't think people understand how bad this is."

Kaitlyn Huamani, AP technology writer. Associated Press writers Jonathan J. Cooper and Barbara Ortutay contributed to this report.