Adam Mosseri, the head of Meta’s Instagram, testified Wednesday during a landmark social media trial in Los Angeles that he disagrees with the idea that people can be clinically addicted to social media platforms.

The question of addiction is a key pillar of the case, in which plaintiffs seek to hold social media companies responsible for harms to children who use their platforms. Meta Platforms and Google’s YouTube are the two remaining defendants; TikTok and Snap have already settled.

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose lawsuit could determine how thousands of similar lawsuits against social media companies play out. She and two other plaintiffs have been selected for bellwether trials, essentially test cases that let both sides see how their arguments fare before a jury.

Mosseri, who has headed Instagram since 2018, said it’s important to differentiate between clinical addiction and what he called problematic use. The plaintiff’s lawyer, however, presented quotes from a podcast interview a few years ago in which Mosseri used the term addiction in relation to social media use. Mosseri responded that he was probably using the term “too casually,” as people tend to do.

Mosseri said he was not claiming to be a medical expert when questioned about his qualifications to comment on the legitimacy of social media addiction, but said someone “very close” to him has experienced serious clinical addiction, which is why he said he was “being careful with my words.”

He said he and his colleagues use the term “problematic use” to refer to “someone spending more time on Instagram than they feel good about, and that definitely happens.”

It’s “not good for the company, over the long run, to make decisions that profit for us but are poor for people’s well-being,” Mosseri said.

Mosseri and the plaintiff’s lawyer, Mark Lanier, engaged in a lengthy back-and-forth about cosmetic filters on Instagram that changed people’s appearance in a way that seemed to promote plastic surgery.

“We are trying to be as safe as possible but also censor as little as possible,” Mosseri said.

In the courtroom, bereaved parents of children who have had social media struggles appeared visibly upset during a discussion of body dysmorphia and cosmetic filters. Meta shut down all third-party augmented reality filters in January 2025. After the displays of emotion, the judge reminded members of the public on Wednesday not to indicate agreement or disagreement with testimony, saying it would be “improper to indicate some position.”

During cross-examination, Mosseri and Meta lawyer Phyllis Jones pushed back on the idea, suggested in Lanier’s questioning, that the company is looking to profit off of teens specifically.

Mosseri said Instagram makes “less money from teens than from any other demographic on the app,” noting that teens don’t tend to click on ads and many don’t have disposable income to spend on products from the ads they see. When Lanier had the chance to question Mosseri a second time, he was quick to point to research showing that people who join social media platforms at a young age are more likely to stay on them longer, which he said makes teen users prime for meaningful long-term profit.

“Often people try to frame things as you either prioritize safety or you prioritize revenue,” Mosseri said. “It’s really hard to imagine any instance where prioritizing safety isn’t good for revenue.”

Meta CEO Mark Zuckerberg is expected to take the stand next week.

In recent years, Instagram has added a slew of features and tools it says have made the platform safer for young people. Those measures do not always work. A report last year, for instance, found that teen accounts created by researchers were recommended age-inappropriate sexual content, including “graphic sexual descriptions, the use of cartoons to describe demeaning sexual acts, and brief displays of nudity.”

Instagram also recommended a “range of self-harm, self-injury, and body image content” on teen accounts that the report says “would be reasonably likely to result in adverse impacts for young people, including teenagers experiencing poor mental health, or self-harm and suicidal ideation and behaviors.” Meta called the report “misleading, dangerously speculative” and said it misrepresents the company’s efforts on teen safety.

Meta is also facing a separate trial in New Mexico that began this week.

By Kaitlyn Huamani and Barbara Ortutay, AP Technology Writers
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Is AI slop code here to stay?

A few months ago I wrote about the dark side of vibe coding tools: they often generate code that introduces bugs or security vulnerabilities that surface later. They can solve an immediate problem while making a codebase harder to maintain over time. It's true that more developers are using AI coding assistants, and using them more frequently and for more tasks. But many seem to be weighing the time saved today against the cleanup they may face tomorrow.

When human engineers build projects with lots of moving parts and dependencies, they have to hold a vast amount of information in their heads and then find the simplest, most elegant way to execute their plan. AI models face a similar challenge. Developers have told me candidly that AI coding tools, including Claude Code and Codex, still struggle when they need to account for large amounts of context in complex projects. The models can lose track of key details, misinterpret the meaning or implications of project data, or make planning mistakes that lead to inconsistencies in the code, all things that an experienced software engineer would catch.

The most advanced AI coding tools are only now beginning to add testing and validation features that can proactively surface problematic code. When I asked OpenAI CEO Sam Altman during a recent press call whether Codex is improving at testing and validating generated code, he became visibly excited. Altman said OpenAI likes the idea of deploying agents to work behind developers, validating code and sniffing out potential problems. Indeed, Codex can run tests on code it generates or modifies, executing test suites in a sandboxed environment and iterating until the code passes or meets acceptance criteria defined by the developer. Claude Code, meanwhile, has its own set of validation and security features, with testing and validation routines built into the product. Some developers say Claude is stronger at higher-level planning and understanding intent, while Codex is better at following specific instructions and matching an existing codebase.

The real question may be what developers should expect from these AI coding tools. Should they be held to the standard of a junior engineer whose work may contain errors and requires careful review? Or should the bar be higher? Perhaps the goal should be not only to avoid generating slop code but also to act as a kind of internal auditor, catching and fixing bad code written by humans. Altman likes that idea. But judging by comments from another OpenAI executive, Greg Brockman, it's not clear the company believes that standard is fully attainable. Brockman, OpenAI's president, suggests in a recently posted set of AI coding guidelines that AI slop code isn't something to eliminate so much as a reality to manage. "Managing AI generated code at scale is an emerging problem, and will require new processes and conventions to keep code quality high," Brockman wrote on X.
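To make the generate-then-validate loop described above concrete, here is a minimal sketch in Python of the general pattern: propose a change, run the project's test suite in an isolated working directory, and feed the failures back to the model until the tests pass or an attempt budget runs out. This is an illustration of the technique, not OpenAI's or Anthropic's actual implementation; generate_patch and apply_patch are hypothetical callables standing in for the model call and the editing step, and pytest is simply an assumed test runner.

```python
import subprocess

MAX_ATTEMPTS = 5  # fixed retry budget before handing the problem back to a human

def run_tests(workdir):
    # Run the project's test suite in the given working directory.
    # A pytest-based project is assumed here; real agents use sandboxed runners.
    result = subprocess.run(
        ["pytest", "-q"], cwd=workdir, capture_output=True, text=True
    )
    return result.returncode == 0, result.stdout + result.stderr

def generate_and_validate(task, workdir, generate_patch, apply_patch):
    """Propose code for `task`, then iterate until tests pass or the budget runs out.

    generate_patch(task, feedback) and apply_patch(workdir, patch) are
    hypothetical callables standing in for the model call and the edit step.
    """
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        patch = generate_patch(task, feedback)  # model proposes a change
        apply_patch(workdir, patch)             # write it into the sandbox
        passed, output = run_tests(workdir)     # validate against the suite
        if passed:
            return patch                        # acceptance criteria met
        feedback = output                       # feed failures back to the model
    raise RuntimeError("tests still failing after retry budget; needs human review")
```

The attempt budget and the definition of "passing" (which tests, linters, or coverage thresholds count) are the levers a developer would tune; the point of the loop is to catch slop before it lands in the codebase rather than after.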
SaaS stocks still smarting from last week's SaaSpocalypse

Last week, shares of several major software companies tumbled amid growing anxiety about AI. The share prices of ServiceNow, Oracle, Salesforce, AppLovin, Workday, Intuit, CrowdStrike, FactSet Research, and Thomson Reuters fell so sharply that Wall Street types began to refer to the event as the SaaSpocalypse.

The stocks fell sharply on two pieces of news. First, late in the day on Friday, January 30, Anthropic announced a slate of new AI plugins for its Cowork AI tool aimed at information workers, including capabilities for legal, product management, marketing, and other functions. Then, on February 4, the company unveiled its most powerful model yet, Claude Opus 4.6, which now powers the Claude chatbot, Claude Code, and Cowork. For investors, Anthropic's releases raised a scary question: How will old-school SaaS companies survive when their products are already being challenged by AI-native tools?

Although software shares rebounded somewhat later in the week, as analysts circulated reassurances that many of these companies are integrating new AI capabilities into their products, the unease lingers. In fact, many of the stocks mentioned above have yet to recover to their late-January levels. (Some SaaS players, like ServiceNow, are now even using Anthropic's models to power their AI features.) But it's a sign of the times, and investors will continue to watch carefully for signs that enterprises are moving on from traditional SaaS solutions to newer AI apps or autonomous agents.

China is flexing its video models

This week, some new entrants in the race for best model are very hard to miss. X is awash with posts showcasing video generated by new Chinese video generation models: Seedance 2.0 from ByteDance and Kling 3.0 from Kuaishou. The video is impressive. Many of the clips are difficult to distinguish from traditionally shot footage, and both tools make it easier to edit and steer the look and feel of a scene. AI-generated video is getting scary-good, its main limitation being that the generated videos are still pretty short.

Sample videos from Kling 3.0, which range from 3 seconds to 15 seconds, feature smooth scene transitions and a variety of camera angles. The characters and objects look consistent from scene to scene, a quality that video models have struggled with. The improvements are owed in part to the model's ability to glean the creator's intent from prompts, which can include reference images and videos. Kling also includes native audio generation, meaning it can generate speech, sound effects, ambient audio, lip-sync, and multi-character dialogue in a number of languages, dialects, and accents.

ByteDance's Seedance 2.0, like Kling 3.0, generates video with multiple scenes and multiple camera angles, even from a single prompt. One video cut from a shot inside a Learjet in flight to a shot from outside the aircraft. The motion looks smooth and realistic, with good character consistency across frames and scenes, so the model can handle complex, high-motion scenes like fights, dances, and action sequences. Seedance can be prompted with text, images, reference videos, and audio. And like Kling, Seedance can generate synchronized audio, including voices, sound effects, and lip-sync in multiple languages.

More AI coverage from Fast Company:
We're entering the era of AI unless proven otherwise
A Palantir cofounder is backing a group attacking Alex Bores over his work with . . . Palantir
Why a Korean film exec is betting big on AI
Mozilla's new AI strategy marks a return to its rebel alliance roots

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
Russia has attempted to fully block WhatsApp in the country, the company said, the latest move in an ongoing government effort to tighten control over the internet.

A WhatsApp spokesperson said late Wednesday that the Russian authorities’ action was intended to “drive users to a state-owned surveillance app,” a reference to Russia’s own state-supported MAX messaging app that’s seen by critics as a surveillance tool.

“Trying to isolate over 100 million people from private and secure communication is a backwards step and can only lead to less safety for people in Russia,” the WhatsApp spokesperson said. “We continue to do everything we can to keep people connected.”

Russia’s government has already blocked major social media platforms like Twitter, Facebook, and Instagram, and ramped up other online restrictions since Russia’s full-scale invasion of Ukraine in 2022.

Kremlin spokesman Dmitry Peskov said WhatsApp owner Meta Platforms should comply with Russian law to see the app unblocked, according to the state Tass news agency.

Earlier this week, Russian communications watchdog Roskomnadzor said it will introduce new restrictions on the Telegram messaging app after accusing it of refusing to abide by the law. The move triggered widespread criticism from military bloggers, who warned that Telegram is widely used by Russian troops fighting in Ukraine and that throttling it would derail military communications.

Despite the announcement, Telegram has largely been working normally. Some experts say it’s a more difficult target than WhatsApp. Some Russian experts said that blocking WhatsApp would free up technological resources and allow authorities to fully focus on Telegram, their priority target.

Authorities had previously restricted access to WhatsApp before moving to fully ban it Wednesday.

Under President Vladimir Putin, authorities have engaged in deliberate and multipronged efforts to rein in the internet. They have adopted restrictive laws, banned websites and platforms that don’t comply, and focused on improving technology to monitor and manipulate online traffic.

Russian authorities have throttled YouTube and methodically ramped up restrictions against popular messaging platforms, blocking Signal and Viber and banning online calls on WhatsApp and Telegram. In December, they imposed restrictions on Apple’s video calling service FaceTime.

While it’s still possible to circumvent some of the restrictions by using virtual private network services, many of those are routinely blocked, too.

At the same time, authorities have actively promoted the “national” messaging app called MAX, which critics say could be used for surveillance. The platform, touted by developers and officials as a one-stop shop for messaging, online government services, payments, and more, openly declares it will share user data with authorities upon request. Experts also say it doesn’t use end-to-end encryption.

Associated Press
Daniel Kokotajlo predicted the end of the world would happen in April 2027. In AI 2027, a document outlining the impending impacts of AI published in April 2025, the former OpenAI employee and several peers argued that by April 2027, unchecked AI development would lead to superintelligence and, consequently, destroy humanity. The authors, however, are walking back their predictions. Now, Kokotajlo forecasts that superintelligence will land in 2034, but he doesn't know if, or when, AI will destroy humanity.

In AI 2027, Kokotajlo argued that superintelligence would emerge through fully autonomous coding, enabling AI systems to drive their own development. The release of ChatGPT in 2022 accelerated predictions around artificial general intelligence, with some forecasting its arrival within years rather than decades.

These predictions attracted widespread attention. Notably, U.S. Vice President JD Vance reportedly read AI 2027 and later urged Pope Leo XIV, who has underscored AI as a main challenge facing humanity, to provide international leadership to avoid the outcomes listed in the document. On the other hand, people like Gary Marcus, emeritus professor of neuroscience at New York University, dismissed AI 2027 as a work of fiction, even calling various predictions "pure science fiction mumbo jumbo."

As researchers and the public alike begin to reckon with how jagged AI performance is, AGI timelines are starting to stretch again, according to Malcolm Murray, an AI risk management expert and one of the authors of the International AI Safety Report. "For a scenario like AI 2027 to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities," Murray said.

Still, developing AI models that can train themselves remains a steady goal for leading AI companies. OpenAI CEO Sam Altman has set internal goals for a true automated AI researcher by March of 2028. However, he's not entirely confident in the company's capabilities to develop superintelligence. "We may totally fail at this goal," he admitted on X, "but given the extraordinary potential impacts we think it is in the public interest to be transparent about this."

And so, superintelligence may still be possible, but when it arrives and what it will be capable of remains far murkier than AI 2027 once suggested.

Leila Sheridan

This article originally appeared on Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.
For most of modern finance, one number has quietly dictated who gets ahead and who gets left out: the credit score. It was a breakthrough when it arrived in the 1950s, becoming an elegant shortcut for a complex decision. But shortcuts age. And in a world driven by data, digital behavior, and real-time signals, the score is increasingly misaligned with how people actually live and manage money.

We're now at a turning point. A foundational system, long considered untouchable, is finally being reconstructed by using AI, specifically advanced machine learning models built for risk prediction, to extract more intelligence from existing data. These are rigorously tested, well-governed systems that help lenders see risk with greater nuance and clarity. And the results are reshaping core economics for lenders.

THE CREDIT SCORE WASN'T BUILT FOR MODERN CONSUMERS

Legacy credit scores rely on a narrow slice of information updated at a pace that reflects the black-and-white television era. A single late payment can overshadow years of financial discipline. Data updates lag behind real behavior. And lenders are forced to make million-dollar decisions using a tool that can't see volatility, nuance, or context.

A single, generic credit score is a compromise by design. National credit scores are designed to work reasonably well across thousands of institutions, but not optimally for any specific one. That becomes clear when you compare regional differences. A lender in an agricultural region may see very different income seasonality and cash-flow patterns than a lender in a major metro area, differences that a universal score was never designed to capture. Financial institutions need models built around their actual membership that can adjust to different financial histories and behaviors.

That rigidity has created the gap we're now seeing across the economy. Consumers feel squeezed, lenders feel exposed, and businesses struggle to grow in a risk environment that looks nothing like the one their scoring tools were built for. Modern machine-learning models give lenders something the score never could: a panoramic view instead of a narrow window.

HOW AI CHANGES THE GAME

The data in credit files has long been there. What's changed is the modeling: modern machine learning systems that can finally make full use of those signals. These models can evaluate thousands of factors inside bureau files, not just the static inputs but the patterns behind them: how payment behavior changes over time, which fluctuations are warning signs versus temporary noise, and how multiple variables interact in ways a traditional score can't measure.

This lets lenders differentiate between someone who is truly risky and someone who is momentarily out of rhythm. The impact is profound: more approvals without more losses, stronger compliance without more overhead, and decisions that align with how people actually manage their finances today.

For leadership teams, this also means making intentional choices about who to serve and how to allocate capital. Tailored models let institutions focus their resources on the customers they actually want to reach, rather than relying on a one-size-fits-all score.
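To ground the idea of modeling patterns rather than a single static number, here is a small, self-contained sketch in Python. It trains a gradient-boosted classifier on synthetic, invented bureau-style features, including a simple trend feature, so the model can pick up interactions that one summary score would flatten out. The feature names, data, and label logic are all fabricated for illustration; a production credit model would use real bureau data plus the fairness testing, validation, and governance discussed in the next section.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic, illustrative bureau-style features (all invented for this sketch).
utilization = rng.uniform(0, 1, n)          # current revolving utilization
late_payments_12m = rng.poisson(0.3, n)     # late payments in the last 12 months
utilization_trend = rng.normal(0, 0.1, n)   # change in utilization over 6 months
income_volatility = rng.gamma(2.0, 0.1, n)  # proxy for cash-flow seasonality
X = np.column_stack([utilization, late_payments_12m, utilization_trend, income_volatility])

# Synthetic default label driven partly by an interaction term, the kind of
# pattern a single static score tends to miss.
logit = -3 + 2 * utilization + 0.8 * late_payments_12m + 4 * utilization_trend * utilization
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Evaluate ranking quality on held-out accounts.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the toy example is only that trend and interaction features carry signal a single aggregate score cannot; everything about deployment, monitoring, and explainability sits on top of a model like this.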
AI FIXES SOMETHING WE DON'T TALK ABOUT ENOUGH

There's widespread concern about AI bias, and rightly so. When algorithms aren't trained on a representative set of data or aren't monitored after deployment, the results can be biased.

In lending, these models aren't deployed on faith; they're validated, back-tested, and monitored over time, with clear documentation of the factors driving each decision. Modern explainability techniques, now well established in credit risk, can give regulators and consumers a clearer view into how and why decisions are made.

Business leaders should also consider that bias is embedded in manual underwriting. Human decisions, especially in high-volume, time-pressured environments, vary from reviewer to reviewer, case to case, hour to hour. Machine learning models that use representative data, are regularly monitored, and make explainable, transparent decisions give humans a dependable baseline. That frees them to focus on exceptions, tough cases, and strategy.

THE NEW ADVANTAGE FOR BUSINESS LEADERS

The next era of lending will be defined by companies that operationalize AI with discipline, building in strong governance, clear guardrails, and transparency. Those that do will see higher approval rates, lower losses, faster decisions with fewer manual bottlenecks, and fairer outcomes that reflect real behavior, not outdated shortcuts. For the first time in 70 years, we're able to bring real, impactful change to one of the most influential drivers in the economy.

THE FUTURE ISN'T A SCORE, IT'S UNDERSTANDING

If the last century of lending was defined by a single, blunt number, the next century will be defined by intelligence: by the ability to interpret risk with nuance, adapt to fast-moving economic signals, and extend opportunity to people who have long been underestimated by the system.

AI won't make lending flawless. But it gives us the clearest path we've ever had toward a credit ecosystem that is more accurate, more resilient, and far fairer than the one we inherited. And for leaders focused on growth, innovation, and long-term competitiveness, that shift is transformational.

Sean Kamkar is CTO of Zest AI.