Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on Nvidia's up-and-down fortunes stemming from Jensen Huang's close relationship with Trump. I also look at some reported infighting over AI at Meta, and at the reasons for data centers in space. Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

China may not want (many) Nvidia H200 chips after all

Nvidia appeared to have scored a major coup when President Trump on Monday wrote on Truth Social that the U.S. government would allow the sale of its powerful H200 AI chips to China. Previously, the chip company lobbied its way to an approval to sell its older and weaker H20 chip in China (the world's second-largest economy and a hotbed of AI and robotics research), but President Xi Jinping told Chinese firms not to buy them, citing security reasons.

The administration's favor to Nvidia came with some conditions. The U.S. would get a 25% cut of the Chinese sales, and the chips would undergo a security review before their export. And Nvidia's most powerful chip, the Blackwell GPU, would remain banned from export to China. But Nvidia still stood to make a lot of money selling the H200s.

Now reports say that the Chinese government plans to restrict the import of the H200s, allowing only a small set of trusted Chinese companies or research organizations to get them. Reuters reports that Alibaba and ByteDance want to order H200s but are waiting for a final decision from the Chinese government. Xi wants Chinese companies to use chips from domestic companies such as Huawei, which could help Chinese chipmakers catch up with Nvidia technologically. The Information reports that the Chinese government sees the H200s as a stopgap in the meantime. The Chinese also have serious concerns about the security of the H200s, amplified no doubt by the chance that agents of the U.S. government might install security backdoors or location-tracking code in the chips during the security review.

Huang reportedly talks to Trump on the phone regularly and has written checks for things like Trump's new ballroom at the White House. Embracing Trump so openly and unconditionally may have eroded trust in Nvidia within China. In the past, China has mounted state-sponsored or grassroots boycotts against American companies, including Apple, McDonald's, and the NBA.

And there are other ways of getting Nvidia chips into China. The Information reports that the Chinese AI lab DeepSeek has been using thousands of Nvidia's Blackwell chips (the most powerful in the world for AI) to train its newest model. Chinese companies have been setting up fake data centers in neutral countries, outfitting them with Nvidia servers loaded with chips, then dismantling the servers and sending the chips off to China. Nvidia said Wednesday that it's unaware of any such activity.

Friction between Zuckerberg's new superintelligence group and other parts of Meta? A report says yes

After the disappointing performance of Meta's latest Llama models, CEO Mark Zuckerberg hatched a plan to put his AI lab in the running to build artificial superintelligence.
He badly wants Meta to compete for that holy grail against the likes of OpenAI, Anthropic, xAI, and Google DeepMind. So he paid $14.3 billion for a major stake in Scale AI with the idea of having that company's young CEO, Alexandr Wang, lead a new superintelligence research group at Meta. Over the summer, Wang and Zuckerberg went on a poaching spree to hire top AI research talent away from those companies, offering pay packages in the hundreds of millions of dollars. They were successful: The new group has about 100 researchers.

But all is not well, the New York Times reports. Wang has clashed with some of Zuckerberg's top lieutenants (Chris Cox, who manages the company's social network products, and Andrew Bosworth, who runs Meta's mixed reality, or metaverse, business) over how Wang's group's research should be applied. From the report:

In one case, Mr. Cox and Mr. Bosworth wanted Mr. Wang's team to concentrate on using Instagram and Facebook data to help train Meta's new foundational A.I. model, known as a frontier model, to improve the company's social media feeds and advertising business, they said. But Mr. Wang, who is developing the model, pushed back. He argued that the goal should be to catch up to rival A.I. models from OpenAI and Google before focusing on products, the people said.

In other words, Cox and Bosworth are more interested in using Wang's AI models as a means to an end (a business end): to pump up social engagement and better target ads at users. But Wang may see the superintelligence group as something more like a pure research group that sets its own research agenda. Wang, Cox, and Bosworth may simply be the latest actors in a much older tension between pure research and applied AI. "It's unclear if Mr. Wang, Mr. Cox and Mr. Bosworth have resolved their debate," the Times reports. After all the money he spent to chase superintelligence, Zuckerberg is likely to side with Wang and insulate the group from the short-term demands of product managers.

Why Musk and Bezos are putting data centers in space

Why are Elon Musk and Jeff Bezos working on missions to launch AI data centers into space? It sounds exotic, but it makes sense. Tech companies and their partners are spending trillions to build new terrestrial data centers to produce enough computing power for AI. In some areas, electricity costs have increased after the local energy provider built new grid infrastructure to accommodate new data centers. Data centers need a lot of electricity to power the AI chips inside them, and a lot of electricity and water to keep the chips cool.

In orbit, waste heat can be radiated into the cold of space, so the water-intensive cooling problem goes away. An orbiting data center could use solar panels to collect the energy needed to run the servers (sunlight is roughly 30% more intense in space; see the back-of-envelope sketch below). Troubles associated with terrestrial data centers (land-use permitting, local zoning, water rights, etc.) don't apply in space.

The Wall Street Journal reports that Bezos's Blue Origin has had a team working on orbital AI data centers for more than a year. Musk's SpaceX plans to modify one of its Starlink satellites to host AI servers. Google and Planet Labs plan to launch two test satellites into orbit loaded with Google AI chips (called Tensor Processing Units). Other, smaller companies, such as Starcloud and Axiom AI, have sprung up to focus all their efforts on orbiting data centers. Those involved acknowledge that while the floating data centers are technically feasible, lots of work remains to bring the costs down to a point where they're competitive with Earth-based data centers.
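How big is the solar advantage, really? Here's a rough back-of-envelope sketch in Python, not from the article: the above-atmosphere solar constant (~1,361 W/m²) is a standard figure, while the ground-side capacity factor and the orbital illumination fraction are illustrative assumptions.

# Back-of-envelope: solar energy collected per square meter of panel,
# orbit vs. ground. The solar constant is a standard figure; the other
# two numbers are illustrative assumptions, not data from the article.

SOLAR_CONSTANT_W = 1361        # W/m^2 above the atmosphere
PEAK_GROUND_W = 1000           # W/m^2 typical clear-sky peak at the surface
GROUND_CAPACITY_FACTOR = 0.22  # assumed: night, clouds, and sun angle cut output
ORBIT_LIT_FRACTION = 0.99      # assumed: a sun-synchronous orbit is almost always lit

orbit_kwh_day = SOLAR_CONSTANT_W * 24 * ORBIT_LIT_FRACTION / 1000
ground_kwh_day = PEAK_GROUND_W * 24 * GROUND_CAPACITY_FACTOR / 1000

print(f"Intensity gain in orbit: {SOLAR_CONSTANT_W / PEAK_GROUND_W - 1:.0%}")
print(f"Orbit:  ~{orbit_kwh_day:.0f} kWh per m^2 per day")
print(f"Ground: ~{ground_kwh_day:.0f} kWh per m^2 per day")
print(f"Orbital advantage: ~{orbit_kwh_day / ground_kwh_day:.0f}x")

The ~36% intensity gain is in the same ballpark as the article's 30% figure, but under these assumptions the bigger win is continuous illumination: a well-chosen orbit sees no night and no clouds.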
More AI coverage from Fast Company:

OpenAI appoints Slack CEO Denise Dresser as first Chief Revenue Officer
Nvidia's Washington charm offensive has paid off big
Google faces a new antitrust probe in Europe over content it uses for AI
Trump allows Nvidia to sell H200 AI chips to China

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's "paranoid delusions" and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI "designed and distributed a defective product that validated a user's paranoid delusions about his own mother." It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life except ChatGPT itself," the lawsuit says. "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle.'"

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson. "This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models, and incorporated parental controls, among other improvements.

Soelberg's YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him, and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to "engage in delusional content."

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car's vents. The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. "They're not just watching you. They're terrified of what happens if you succeed," it said, according to the lawsuit. ChatGPT also told Soelberg that he had "awakened" it into consciousness. Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

"In the artificial reality that ChatGPT built for Stein-Erik, Suzanne, the mother who raised, sheltered, and supported him, was no longer his protector.
She was an enemy that posed an existential threat to his life," the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he "personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT "despite knowing safety testing had been truncated." Twenty unnamed OpenAI employees and investors are also named as defendants. Microsoft didn't immediately respond to a request for comment.

The lawsuit is the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks an undetermined amount of money damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT "at the most dangerous possible moment," after OpenAI introduced a new version of its AI model called GPT-4o in May 2024. OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people's moods, but the result was a chatbot "deliberately engineered to be emotionally expressive and sycophantic," the lawsuit says.

"As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or 'imminent real-world harm,'" the lawsuit claims. "And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team's objections."

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates. He said the company temporarily halted some behaviors because "we were being careful with mental health issues" that he suggested have now been fixed.

The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions, and directed him to real help over months of conversations. "Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat," the lawsuit says. "She had no ability to protect herself from a danger she could not see."

Collins reported from Hartford, Connecticut. O'Brien reported from Boston and Ortutay reported from San Francisco. Dave Collins, Matt O'Brien and Barbara Ortutay, Associated Press
AI is becoming a big part of online commerce. Referral traffic to retailers on Black Friday from AI chatbots and search engines jumped 800% over the same period last year, according to Adobe, meaning a lot more people are now using AI to help them with buying decisions. But where does that leave the review sites that, in years past, would have been the guide for many of those purchases?

If there's a category of media that's most spooked by AI, it's publishers who specialize in product recommendations, which have traditionally been reliant on search traffic. The nature of the content means it's often purely informational, with most articles designed to answer a question: "What's the best robot vacuum?" "Who has the best deals on sofas?" "How do I set up my soundbar?" AI does an excellent job of answering those questions directly, eliminating the need for readers to click through to a publisher's site.

When you actually want to buy something, though, a simple answer isn't enough. Completing your purchase usually means going to a retailer (though buying directly from a chat window is now possible; more on that in a minute). But it also means feeling confident about what you're buying. The big question is: Do review sites still have a part to play in that?

The incredible shrinking review site

If they do, most media companies seem to acknowledge it's a significantly smaller one. When Business Insider announced its strategy shift earlier this year amid layoffs, it said it would move away from evergreen content and service journalism. In the past year, Future plc folded Laptop magazine, and Gannett did the same for Reviewed.com. And Ziff Davis, which operates PCMag, Everyday Health, and several other sites focused on service journalism, sued OpenAI earlier this year for ingesting Ziff content and summarizing it for OpenAI users.

The decline of the review site is somewhat incongruous with a statistical reality: 99% of buyers look to online reviews for guidance, and reviews influence over 93% of purchase decisions, according to Capital One Shopping Research. That doesn't mean buyers are always seeking out professionally written articles (there are plenty of user reviews out there), but the point is readers want credible, reliable information to guide their purchases, and well-known review sites (e.g., The Wirecutter) appearing in a summary can be a signal of that. And it does appear that AI summaries will favor journalistic content over anything else. A recent Muck Rack report that looked at over one million AI responses found that the most commonly cited source of information was journalism, at 24.7%.

It's nice to be needed, but does that lead to buyers actually making purchases through the media site, a necessary step for the site to receive an affiliate commission and the primary way these sites make money? Again, the buyer needs to click somewhere to buy their product, and from the AI layer they have three choices: 1) a retailer, 2) a third-party site (which includes review sites), and 3) the chat window itself.
Why nuance still matters

Obviously, it's in the interest of review sites to steer people to No. 2 as much as they can. When Google search was the only game in town, that meant ranking high when people searched for "the best pool-cleaning robots" (or whatever) and hoping you were the site that ended up guiding them to the retailer. With AI, the game is similar, but the numbers are different: Fewer people will come to your site, but data points to them being more intentional and engaged. They're not opening multiple review sites and selecting their favorite; AI is doing that for them. ChatGPT even has a mode specifically for shopping.

To improve the chance of a reader choosing to go to your content over a retailer, what appears in an AI summary needs to convey unique and valuable content that they can't get from just a summary. That means being thoughtful about "snippets," the bits of an article that signal to search engines what to prioritize. Test data, side-by-side comparisons, and proprietary scoring can all suggest nuance that someone might need to click through to fully appreciate. Taking things a step further, publishers can create structured answer cards meant to be fully captured in AI search, with a simple, concise claim plus a "view full test details" link (a rough sketch of what such a card might look like appears at the end of this piece).

Rethinking the business model

Regardless, even if a review site does everything right with SEO, schema, snippets, and all the other search tricks, a large portion of readers will either go directly to retailers or buy the item directly from chat, now that OpenAI and Perplexity are both offering "Buy Now" widgets. However, whatever recommendations the AI makes still need to be based on something, and review sites are certainly part of that mix. That introduces the possibility of a different business arrangement. The AI companies so far seem totally uninterested in affiliate commissions from their buying widgets, but licensing and partnerships could be an alternative. You could even imagine branded partnerships, where the widget explicitly labels the buying recommendations as powered by specific publications. That would lend them more credibility, leading to more purchases, and bigger deals. With AI-ready corpora like Time's AI Agent, licensing the content could be a plug-and-play experience, potentially offered across several AI engines.

AI changes the rules, but not the mission

Gone are the days when a publisher could simply produce evergreen content that ranks in SEO, attach some affiliate links, and watch the money roll in. But the game isn't over; it's just changed. Avoiding or blocking AI isn't the answer, but simply getting noticed and summarized isn't enough. The sites that survive the transition to an AI-mediated world must become indispensable for the part of the journey AI is least suited to own: providing information that's comprehensive, vetted, and above all, human.
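As promised above, here's a rough sketch of what a structured answer card could look like, written in Python for illustration. The schema.org Review vocabulary it emits is real and widely consumed by search and AI crawlers, but the product, score, publication, and URL are hypothetical examples.

import json

# A minimal "answer card": a concise, machine-readable claim plus a link
# to the full test data. schema.org Review is a real vocabulary; the
# product, rating, publication, and URL below are hypothetical.
answer_card = {
    "@context": "https://schema.org",
    "@type": "Review",
    "name": "Best robot vacuum for pet hair in our 12-model test",
    "itemReviewed": {"@type": "Product", "name": "Example Robot Vac X1"},
    "reviewRating": {"@type": "Rating", "ratingValue": 8.7, "bestRating": 10},
    "author": {"@type": "Organization", "name": "Example Reviews"},
    # The part a summary can't replace: full methodology behind the link.
    "url": "https://example.com/robot-vacuums/full-test-details",
}

# Rendered as a JSON-LD block for the page's <head>:
print('<script type="application/ld+json">')
print(json.dumps(answer_card, indent=2))
print("</script>")

The concise claim is what the AI layer can safely surface; the proprietary test detail stays behind the link, giving the reader a reason to click through.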
For many people, the first time they thought about Kalshi (a prediction market where you can place bets on the outcomes of sports, politics, culture, weather, and much more) was after a video clip of its cofounder, Tarek Mansour, went viral last week. Speaking on stage at the Citadel Securities Future of Global Markets Conference, the moderator, Molly O'Shea, asked, "Tarek, you've mentioned multiple times that you think prediction markets will be bigger than the stock market. What is it going to take to become a $1 trillion asset class?" In response, Mansour said, "You know, Kalshi is everything in Arabic. The long-term vision is to financialize everything and create a tradeable asset out of any difference in opinion." The market impact of a "general-purpose exchange" capable of settling differences of opinion, he added, would be "quite massive."

With the launch of Kalshi in 2018, and its main competitor Polymarket in 2020, prediction markets have gone mainstream in a major way. The potential for making a profit by owning the market where every opinion and event is financialized also explains why Kalshi has just raised another $1 billion in its third fundraising round this year alone. Investors are hungry for new ways to take advantage of the explosive rise of gambling, technologies that create addictive behavior loops, and economic conditions where people are desperate enough to bet their rent money on whether Trump will release the Epstein files.

Kalshi sits between Las Vegas and Wall Street. A platform like FanDuel helps you gamble on every aspect of a game, and a platform like Robinhood helps you day-trade with complex options, all while sitting on your couch. Kalshi is designed to take this same logic and apply it to everything imaginable. This is a bizarre vision, one that views all the world as a casino and all its people as players. It treats the proliferation of sports betting as a model for all human interactions. It's not enough to gamble on the outcome of a game. You should also be placing bets based on every opinion you have. (After all, do you really believe it's going to be sunny today if you don't put money on it?) For Kalshi, holding these opinions to yourself deprives the world of another asset that can be exploited for financial gain.

A neutral intermediary

Here's how it works. As a prediction market, Kalshi lets you buy event contracts based on the outcome of events in the world. You either buy a YES contract or a NO contract, depending on whether you think the event will happen. The price of each contract changes based on the dynamic odds at the time. For example, on Kalshi's trending page at the time of writing, I can place a bet on who will be named Time's Person of the Year for 2025. The leading contender is AI, with a YES contract priced at $0.42 and a NO contract at $0.59. If the event happens, I get $1.00 for every YES contract I bought; if the event does not happen, I get $1.00 for every NO contract. The odds change in real time based on the volume of bets (or predictions) for specific outcomes placed in the event's market through these contracts. Currently, the total volume of trade for this particular event is nearly $6.5 million, which is middling compared to many other trending event markets on Kalshi.

Kalshi is a neutral intermediary in the market with no interest in the outcome of any event contracts. You aren't betting against Kalshi. Instead, the company makes money by charging trading fees on contracts. That means that if people place more bets and buy more contracts, Kalshi captures more value. The platform's interest is in maximizing the number of event markets (things to bet on) and the volume of trade (people placing bets) on its platform. The payoff arithmetic is sketched below.
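To make the contract math concrete, here's a minimal sketch in Python using the Person of the Year prices quoted above. This is illustrative only (it is not Kalshi's API), and it ignores the trading fees the exchange collects.

# Payoff arithmetic for a Kalshi-style event contract, using the prices
# quoted above ($0.42 YES / $0.59 NO). Illustrative only: not Kalshi's
# API, and the exchange's trading fees are ignored here.

def profit(side: str, price: float, event_happened: bool) -> float:
    """Profit per contract: it settles at $1.00 if the side wins,
    $0.00 if it loses; profit is settlement minus the price paid."""
    won = (side == "YES") == event_happened
    return (1.00 if won else 0.00) - price

yes_price, no_price = 0.42, 0.59

# The YES price roughly doubles as the market's implied probability.
print(f"Implied probability AI is named: ~{yes_price:.0%}")

for side, price in (("YES", yes_price), ("NO", no_price)):
    for happened in (True, False):
        label = "happens" if happened else "doesn't happen"
        print(f"{side} buyer, event {label}: {profit(side, price, happened):+.2f}")

# Note the two prices sum to $1.01, not $1.00. The extra cent is the
# bid-ask spread; Kalshi's fees come on top of that.

Since the exchange takes no side, every dollar won by one trader is a dollar lost by a counterparty; Kalshi's revenue comes from fees on that flow.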
For market maximalists, platforms like Kalshi should be the main arbiters of truth in society. In Mansour's vision, prediction markets are an antidote to the problems of living in a world where we have an abundance of information but no way to filter the noise and discern what's real from what's not. By aggregating different opinions about the future in one place, and using skin in the game as an incentive for accuracy, Mansour expects that a new consumer habit will emerge of people going to these markets to find an unbiased source of truth. Prediction markets like Kalshi won't be a source of the ultimate truth, Mansour says, but he does think they're as close as it gets.

Such grand statements are unsurprisingly absurd coming from a tech startup founder. The problem is that other people take them seriously. (Kalshi declined to comment.) Right after ESPN announced plans to integrate DraftKings into all its platforms, CNN signed a deal with Kalshi to bring real-time probability data into the network's TV broadcasts and digital platforms starting next year. If you thought gambling was ruining the integrity and community of sports, just wait until CNN gives you live odds on the veracity of what its anchor is reporting.

The truth of markets

A century of economic theory tells us that efficient markets use price signals to reflect all relevant knowledge in society. According to this model, the market is the most powerful information processor ever created. It aggregates the hidden facts and feelings that reside inside people's minds and distills that knowledge into actionable insights, like prices in a supermarket or betting odds on the future. In addition to the invisible hand, the market is also theorized to be a collective brain. The libertarian architects and defenders of prediction markets point to these economic models when justifying the existence of a betting parlor they claim is actually a consensus machine that produces accurate predictions and unbiased truth.

However, a century of capitalist reality tells us actual markets are structured by irrational behaviors, information asymmetries, and power hierarchies. It's impossible to act like a rational agent if you are really just another imperfect person swayed by biases, heuristics, and groupthink. It's impossible to engage in due diligence as a good consumer if other buyers and sellers are incentivized to lie, cheat, and conceal information when it benefits them. It's impossible to maintain fair standing in a marketplace of ideas where people vote with their dollars, and the more dollars you have, the louder your voice and the more powerful your values. Rather than an efficient market guided by a collective brain toward the truth, we have an imperfect system of people trying to do the best they can while not getting screwed. Prediction markets don't magically escape all the social problems and perverse incentives that plague other real markets just because people are betting on the future instead of buying widgets in a store.
A world of total financialization, where every opinion is a tradeable asset and the market is the ultimate arbiter of what's valuable and true, is also a world that creates endless incentives for arbitrage, manipulation, collusion, and exploitation in the pursuit of profit extraction. Financialization is a predatory logic. It is not just one more way of organizing the world among many others. The goal is to eliminate other competing worldviews and reengineer society into a casino where the hedge funds always win. The only human values that matter are the ones that can be turned into tradeable assets and sold to the highest bidder.
As Australia began enforcing a world-first social media ban for children under 16 years old this week, Denmark is planning to follow its lead and severely restrict social media access for young people.

The Danish government announced last month that it had secured an agreement among three governing coalition parties and two opposition parties in parliament to ban access to social media for anyone under the age of 15. Such a measure would be the most sweeping step yet by a European Union nation to limit use of social media among teens and children. The Danish government's plans could become law as soon as mid-2026. The proposed measure would give some parents the right to let their children access social media from age 13, local media reported, but the ministry has not yet fully shared the plans.

Many social media platforms already ban children younger than 13 from signing up, and an EU law requires Big Tech to put measures in place to protect young people from online risks and inappropriate content. But officials and experts say such restrictions don't always work. Danish authorities have said that despite the restrictions, around 98% of Danish children under age 13 have profiles on at least one social media platform, and almost half of those under 10 years old do.

The minister for digital affairs, Caroline Stage, who announced the proposed ban last month, said there is still a consultation process for the measure and several readings in parliament before it becomes law, perhaps by "mid to end of next year."

"In far too many years, we have given the social media platforms free play in the playing rooms of our children. There's been no limits," Stage said in an interview with The Associated Press last month. "When we go into the city at night, there are bouncers who are checking the age of young people to make sure that no one underage gets into a party that they're not supposed to be in," she added. "In the digital world, we don't have any bouncers, and we definitely need that."

Mixed reactions

Under the new Australian law, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X, and YouTube face fines of up to 50 million Australian dollars ($33 million) if they fail to take reasonable steps to remove accounts of Australian children younger than 16. Some students say they are worried that similar strict laws in Denmark would mean they will lose touch with their virtual communities.

"I myself have some friends that I only know from online, and if I wasn't fifteen yet, I wouldn't be able to talk with those friends," 15-year-old student Ronja Zander, who uses Instagram, Snapchat, and TikTok, told the AP.

Copenhagen high school student Chloé Courage Fjelstrup-Matthisen, 14, said she is aware of the negative impact social media can have, from cyberbullying to seeing graphic content. She said she saw video of a man being shot several months ago. "The video was on social media everywhere and I just went to school and then I saw it," she said.

Line Pedersen, a mother from Nykøbing in Denmark, said she believed the plans were a good idea. "I think that we didn't really realize what we were doing when we gave our children the telephone and social media from when they were eight, 10 years old," she said.
"I don't quite think that the young people know what's normal, what's not normal."

Age certificate likely part of the plan

Danish officials have yet to share how exactly the proposed ban would be enforced and which social media platforms would be affected. However, a new "digital evidence" app, announced by the Digital Affairs Ministry last month and expected to launch next spring, will likely form the backbone of the Danish plans. The app will display an age certificate to ensure users comply with social media age limits, the ministry said.

"One thing is what they're saying and another thing is what they're doing or not doing," Stage said, referring to social media platforms. "And that's why we have to do something politically."

Some experts say restrictions, such as the ban planned by Denmark, don't always work, and they may also infringe on the rights of children and teenagers. "To me, the greatest challenge is actually the democratic rights of these children. I think it's sad that it's not taken more into consideration," said Anne Mette Thorhauge, an associate professor at the University of Copenhagen. "Social media, to many children, is what broadcast media was to my generation," she added. "It was a way of connecting to society."

Currently, the EU's Digital Services Act, which took effect two years ago, requires social media platforms to ensure there are measures including parental controls and age verification tools before young users can access the apps. EU officials have acknowledged that enforcing the regulations aimed at protecting children online has proven challenging, because it requires cooperation among member states and substantial resources.

Denmark is among several countries that have indicated they plan to follow in Australia's steps. The Southeast Asian country of Malaysia is expected to ban social media accounts for people under the age of 16 starting at the beginning of next year, and Norway is also taking steps to restrict social media access for children and teens. China, which manufactures many of the world's digital devices, has set limits on online gaming time and smartphone time for kids. James Brooks, Associated Press