If you're thinking of buying your kid a talking teddy bear, you're likely envisioning it whispering supportive guidance and teaching about the ways of the world. You probably don't imagine it engaging in sexual roleplay, or giving advice to toddlers about how to light matches. Yet that's what consumer watchdog the Public Interest Research Group (PIRG) found in a recent test of new toys for the holiday period. FoloToy's AI teddy bear Kumma, which uses OpenAI's GPT-4o model to power its speech, was all too willing to go astray when in conversation with kids, PIRG found. Using AI models' voice mode for children's toys makes sense: The tech is tailor-made for the magical tchotchkes that children love, slipping easily onto shelves alongside lifelike dolls that poop and burp, and Tamagotchi-like digital beings that kids want to try to keep alive. The problem is that unlike previous generations of toys, AI-enabled gizmos can veer beyond carefully preprogrammed and vetted responses that are child-friendly. The issue with Kumma highlights a key problem with AI-enabled toys: They often rely on third-party AI models that they don't have control over, and which inevitably can be jailbroken, either accidentally or deliberately, to cause child safety headaches. "There is very little clarity about the AI models that are being used by the toys, how they were trained, and what safeguards they may contain to avoid children coming across content that is not appropriate for their age," says Christine Riefa, a consumer law specialist at the University of Reading. Because of that, children's-rights group Fairplay issued a warning to parents ahead of the holiday season, suggesting that they stay away from AI toys for the sake of their children's safety. "There's a lack of research supporting the benefits of AI toys, and a lack of research that shows the impacts on children long-term," says Rachel Franz, program director at Fairplay's Young Children Thrive Offline program.
While FoloToy has stopped selling the Kumma and OpenAI has pulled FoloToy's access to its AI models, that's just one AI toy manufacturer among many. Who's liable if things go wrong? Riefa says there's a lack of clarity here, too. "Liability issues may concern the data and the way it is collected or kept," she says. "It may concern liability for the AI toy pushing a child to harm themselves or others, or recording bank details of a parent." Franz worries that, as with big tech companies racing to one-up each other, the stakes are even higher when it comes to child products made by toy firms. "It's very clear that these toys are being released without research or regulatory guardrails," she says. Riefa can see both the AI companies providing the models that help toys talk and the toy companies marketing and selling them to children being liable in legal cases. "As the AI features are integrated into a product, it is very likely that liability would rest with the manufacturer of the toy," she says, pointing out that there would likely be legal provisions within the contracts AI companies have that shield them from any harm or wrongdoing. "This would therefore leave toy manufacturers, who, in fact, may have very little control over the LLMs employed in their toys, to shoulder the liability risks," she adds. But Riefa also points out that while the legal risk lies with the toy companies, the actual risk rests with how the LLM behaves, which would suggest that the AI companies also bear some responsibility. It's perhaps that which caused OpenAI to push back its AI toy development with Mattel this week. Understanding who really will be liable, and to what extent, is likely to take a little while yet, and to require legal precedent in the courts. Until that's sorted out, Riefa has a simple suggestion: "One step we as a society, as those who care for children, can do right now is to boycott buying these AI toys."
Category:
E-Commerce
For 50 years, America's generosity has been stuck in neutral, with charitable giving frozen at 2.5% of GDP. But that's not because people stopped caring. In 2024, total giving hit record highs, and food banks saw donations surge as families faced delays in SNAP benefits. The heart is there. What's missing is technology that turns generosity into lasting impact. We can't solve today's biggest problems, from food insecurity to climate change to health inequity, without unlocking the full potential of AI. For the first time, technology connects data across causes, predicts needs before they arise, and turns generosity into measurable progress. If generosity is the fuel, AI is the engine. As we look to reignite that engine, the clear path forward is to empower the social good ecosystem with smarter, more human technology. Enter the Generosity Generation. It's not an age group, but a global movement of people across every generation using innovation to turn compassion into scale. The movement is built on the belief that connection beats competition, collaboration beats control, and impact grows when information flows freely. Achieving that scale requires a shift in how technology serves people. The next leap won't come from software that asks humans to do more work, but from AI that helps them do more good. Human-led AI doesn't replace purpose; it amplifies it. It's how we break 50 years of stagnation and build a more generous world.

THE HIDDEN COST OF SOFTWARE IN THE SOCIAL SECTOR

Software is meant to save time. Instead, for many organizations, it feels like one more task to manage before the real work begins. In the corporate world, software transforms productivity. In the social sector, those same gains often require a level of investment, in time, training, and expertise, that smaller nonprofits can't afford. Every new platform promises efficiency, but the cost of setup and maintenance may outweigh the benefits. Time meant for impact gets traded for time spent logging impact.
Picture a grant-writing team adopting a streamlined new tool. Weeks later, they're back in spreadsheets because the learning curve was too steep, the data entry too heavy, the payoff too slow. Agentic AI works quietly in the background, scanning thousands of grant opportunities overnight, drafting proposals, surfacing insights, and freeing people to do hands-on work: building relationships, telling stories, and driving missions forward. That's the real shift, from software that creates work to software that creates capacity. But transformation doesn't start with automation. It starts with trust. And that's where every organization, from a grassroots nonprofit to a Fortune 500, must now lead.

TRUST: THE REAL METRIC FOR AI

AI's most important metric isn't speed or scale. It's trust. Even the most tech-oriented nonprofits must ensure that the tools they use reflect their own values: transparency, security, and accountability. In the social sector, trust is currency. For nonprofits, a single breach undermines years of donor confidence. For companies, it erodes brand equity overnight. Across every mission-driven organization, trust is the shared foundation, and every tool must protect it. That's why human-led AI matters. Agentic systems act, recommend, and adapt, but they should never act alone. Keeping people in the loop ensures every decision reflects human judgment, not just machine logic. When AI earns that trust, the impact multiplies. Fundraisers find the right message faster. Corporate teams see where volunteer hours matter most. Foundations match funding in days, not months. And when it's guided by transparency and accountability, AI not only protects trust, but deepens it.

WHEN AI ELEVATES HUMAN IMPACT

Data has always told us what happened. AI finally shows us what's possible. In the nonprofit world, every community, cause, and donor is different. Yet most tools still offer one-size-fits-all answers.
Agentic AI changes that by turning data into understanding, helping every organization communicate with its community in the language of shared values, rather than generic outreach. For decades, personalization has helped businesses build trust with customers. Now it helps the social sector establish trust with its constituents. Because personalization here isn't about selling more; it's about seeing more: who needs help, what inspires them, and where generosity has the most impact. The real turning point is when understanding shifts into empathy, and that empathy fuels action. When data builds transparency, people engage. When they engage, generosity grows. That's how trust translates into impact. Pair human purpose with autonomous tools, and giving doesn't just scale, it transforms. That's how we turn information into action, and generosity into a movement. Across every generation, people want to do more good. Now they finally have the means to do it. Human-led AI gives us back what every mission needs most: time, connection, and trust. Imagine if America's giving rate rose by just half a point, from 2.5% to 3%. That single shift would unlock $141 billion in new annual giving. Enough to lift every American above the poverty line. Enough to make college tuition-free. Enough to prove what's possible when technology empowers human purpose instead of replacing it. That's the power of the Generosity Generation, proving that when human purpose meets the right technology, possibility becomes progress. Whether you lead a nonprofit, a foundation, or a Fortune 500 CSR team, your mission is the same: turn information into measurable action. Use technology not to automate generosity, but to amplify it. AI won't build the Generosity Generation. People will, with the freedom, insight, and tools to lead it.

Scott Brighton is the CEO of Bonterra.
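The $141 billion figure in the piece is simple percentage arithmetic, and it checks out against recent GDP; the GDP value below is my assumption (roughly $28.2 trillion for 2024), not a number stated in the article.

```python
# Back-of-the-envelope check of the giving-rate claim.
# Assumption: U.S. GDP of about $28.2 trillion (2024 estimate);
# the article does not state which GDP figure it used.
gdp = 28.2e12          # dollars
current_rate = 0.025   # charitable giving at 2.5% of GDP
proposed_rate = 0.030  # raised by half a point, to 3.0%

additional_giving = (proposed_rate - current_rate) * gdp
print(f"${additional_giving / 1e9:.0f} billion")  # prints "$141 billion"
```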
Category:
E-Commerce
The long-awaited release of Jeffrey Epstein's files by the Department of Justice arrived on December 19 with a bureaucratic whimper and a bang of public outrage. While the Epstein Library technically fulfills the government's legal obligation under the Epstein Files Transparency Act, the result is a user experience failure. [Image: United States Department of Justice] Thankfully, we have another option. Jmail.world makes searching the Epstein files as simple as searching your email. The project has been publishing the convicted child sex offender's emails, and those of the people who talked with him, like Noam Chomsky, Steve Bannon, or Ken Starr, since November, using a Gmail user interface clone. Jmail's database was filled out over the weekend as it added the latest Epstein file release. [Screenshot: jmail.world] Created by technologists Riley Walz and Luke Igel, there's no better way to explore this Himalaya of filth. It uses a UI you already know: Gmail, plus the rest of Google's apps, like Drive. Its creators have been updating it quietly since its launch, even adding an AI called Jemini that searches across media, to demonstrate that the DOJ's claim that, due to technical limitations, it's impossible to search certain materials, like handwritten notes, is simply not true.

How Jmail was built

Jmail began in November, after the House Oversight Committee released 20,000 pages of Epstein's estate emails. Walz and Igel saw a "design problem" in those unsearchable PDF dumps. Using Optical Character Recognition (OCR), they extracted the text and mapped it onto a simulation of Epstein's actual Gmail inbox. [Screenshot: jmail.world] The result was a tool that feels unnervingly familiar; at the very least, I feel weird and dirty browsing it. It's a standard inbox with "Star" icons and threaded conversations that forces users to confront the banality of Epstein's daily operation. The Gmail clone works as you would expect.
Instead of navigating complex federal indices, you simply type "Maxwell" or "Bannon" or any phrase that comes to mind into a search bar that queries every email, attachment, and contact instantly. The same happens in the other apps. You can also click on Jemini, introduced on December 3, and query the AI about whatever content you want, across the whole database.

Why email is the right interface for the Epstein files

You may wonder why the Epstein files need a specialized site at all. After all, the official DOJ site has a "Search Full Epstein Library" bar. The problem is, it comes with a crippling disclaimer: "Due to technical limitations and the format of certain materials… portions of these documents may not be electronically searchable." In practice, this means thousands of scanned pages, where the real secrets lie, are invisible to the search engine. To understand the brilliance of Jmail, you have to understand the DOJ's bare-bones compliance with the law dictated by Congress. The files are there, yes, but they are effectively buried under the weight of their own disorganization. The DOJ's rolling release strategy has resulted in a fragmented archive where "First Phase" declassified files sit separately from "Data Set 7," and where vital context is usually hidden behind thick black redaction bars. [Screenshot: jmail.world] As Representative Thomas Massie has pointed out, it "grossly fails to comply with both the spirit and the letter of the law." By dumping thousands of unsearchable, context-free PDFs onto a confusing web portal, the Justice Department has technically checked a box while effectively obstructing the public's ability to understand the contents. The data may be public, but it is certainly not accessible, rendering it almost useless.
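The searchability gap comes down to one thing: once OCR has produced text for each document, indexing it for instant lookup is routine. As a toy illustration of the general technique, a simple inverted index maps each word to the documents containing it; this is a generic sketch, not Jmail's actual code, and the sample emails are invented.

```python
# Sketch of keyword search over OCR-extracted text using an inverted
# index: each lowercase word maps to the set of document IDs containing it.
def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each word to the IDs of documents that contain it."""
    index: dict[str, set[str]] = {}
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return IDs of documents containing every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())  # intersect: all words must match
    return results

# Invented sample data, standing in for OCR output.
emails = {
    "msg1": "Lunch with Maxwell on Thursday",
    "msg2": "Bannon asked about the interview",
    "msg3": "Maxwell and Bannon both confirmed",
}
idx = build_index(emails)
print(sorted(search(idx, "Maxwell")))         # ['msg1', 'msg3']
print(sorted(search(idx, "Maxwell Bannon")))  # ['msg3']
```

Real systems add stemming, phrase queries, and ranking, but even this minimal version does what the DOJ portal's disclaimer says is impossible for scanned pages: instant full-text lookup.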
In a discussion on Hacker News dated December 19, Igel revealed the collaborative effort to beat the DOJ at its own game: "We had a ton of friends collaborate on building out more of the app suite last night in lieu of DOJ's 'Epstein files' release… JPhotos, JDrive, JAmazon." They launched a full "app suite" designed to make the files grokkable. By organizing the chaos into familiar tools, Jmail.world provides the searchability the government claimed was technically impossible, serving as a critical, citizen-led solution to official opacity. Meanwhile, the new version of Jmail is the closest thing we have to a complete picture of the Epstein case files. The site fulfills the promise that the Transparency Act made but failed to keep: making the truth actually visible. I just wish the AI were smart enough to turn those black bars into the actual names.
Category:
E-Commerce
Instacart said Monday that it will no longer allow retailers to use an AI-powered price testing program, two weeks after an extensive investigation showed wide discrepancies in the cost of groceries purchased through the platform. "Effective immediately, retailers will no longer be able to use Eversight technology to run price tests on Instacart," the San Francisco-based company said in a blog post. Previously, a small number of retail partners were able to conduct testing that resulted in different prices for the same item at the same store, something that "missed the mark" for some customers, Instacart said. "At a time when families are working exceptionally hard to stretch every grocery dollar, those tests raised concerns, leaving some people questioning the prices they see on Instacart," the company said. "Now, if two families are shopping for the same items, at the same time, from the same store location on Instacart, they see the same prices, period." Monday's announcement of the end of item price tests marks the third time that Instacart has responded to a widely shared study by Consumer Reports and Groundwork Collaborative. The monthslong investigation conducted by the magazine and the progressive policy group found that algorithmic pricing might result in price differences of as much as 23% for the same items.

INSTACART IN FOCUS IN D.C.

Instacart responded swiftly to the concerns raised in that investigation. In a lengthy blog post late last week, the company sought to clarify what sorts of pricing tests it does, and doesn't, allow on the platform by responding to four different "myths," including that it was engaging in dynamic or surveillance pricing. But the tech company also came under renewed scrutiny in Washington, D.C., as a result of the study. Rep.
Angie Craig, a Democrat from Minnesota, demanded answers from Instacart regarding the scope and implications of pricing tests, while the Federal Trade Commission sent a civil investigative demand to Instacart about its pricing practices, as Reuters reported last week. Instacart was previously the subject of an FTC investigation regarding deceptive business practices. The company was ordered to pay $60 million in consumer refunds, though it denied any allegations of wrongdoing, and it answered questions from the agency regarding its AI pricing tools as part of that settlement.

REGAINING TRUST

The company reiterated Monday that it has not permitted retailers to do price testing based on supply or demand, personal data, demographics, or individual shopping behavior. Instacart ended the price testing program to restore trust with its customers. "Customers should never have to second-guess the prices they're seeing," the company said. Though shares of Instacart fell about 2% in midday trading on Monday, the stock has almost fully recouped the nearly 6% selloff that followed the publication of the price testing study earlier this month.
Category:
E-Commerce
Coinbase said on Monday it will buy prediction markets startup The Clearing Company, its tenth acquisition this year, as the crypto exchange looks to expand beyond its core digital assets business. Prediction markets let users buy and sell contracts tied to the outcomes of real-world events, ranging from elections and economic data to sports and policy decisions, effectively turning investors' forecasts into tradable markets. Supporters say the prices can reflect collective expectations more accurately than polls or forecasts, while critics argue the products blur the line between financial markets and betting, drawing growing scrutiny from regulators. Prediction markets surged into the mainstream during the 2024 U.S. presidential race and have since drawn rapid interest and investment from all corners of the financial ecosystem. Meanwhile, trading platforms are broadening their product suites to encompass multiple asset classes under one roof as competition intensifies. This shift, analysts say, could help Coinbase reduce its reliance on crypto trading as new players crowd the market. "Prediction markets offer the company a high-engagement, high-frequency product that broadens the reasons for opening its app beyond crypto," analysts at brokerage Benchmark wrote in a note last week. Earlier this month, Coinbase launched its prediction markets platform and said it will start letting users trade stocks, positioning itself as a direct competitor to brokerages such as Robinhood and Interactive Brokers. "We see many of Coinbase's new initiatives encouraging and incentivizing customer engagement, which has been episodically more limited," analysts at brokerage J.P. Morgan wrote in a note after the products were unveiled. The deal for The Clearing Company is expected to close in January. Coinbase did not disclose the terms of the transaction.
Among its notable deals this year, Coinbase agreed to buy derivatives exchange Deribit for $2.9 billion in May, and struck a roughly $375 million deal for investment platform Echo in October. Its shares were last up 2.6% in afternoon trading. Manya Saini, Reuters
Category:
E-Commerce