
2026-01-16 10:30:00| Fast Company

A new mandatory safety feature requires Roblox users in the U.S. to submit to facial age estimation via the app to access its chat feature. The online gaming platform announced it was implementing the system to prevent children younger than 16 from communicating with adults. About 42% of Roblox users are younger than 13.

But a cursory scroll on eBay found various listings of age-verified Roblox accounts available for purchase, some for as little as $2.99. This allows the purchaser to sign in to the account without using any ID or facial scan, voiding the new safety feature Roblox has implemented. The description of one listing (since removed) read: "The product is an Age Verified Roblox account for users between the ages of 13-15. This account comes with chat unlocked, allowing users to communicate with other players aged 9-17."

After Wired flagged the listings in a recent report, eBay said the company was removing them for violating the site's policies. At the time of this writing, Fast Company found 27 results when searching for "Roblox age verified account."

That's not the only problem Roblox is contending with. The system has also had trouble correctly estimating users' ages, mislabeling some adults as children and vice versa. One X user wrote that their 10-year-old brother was misidentified as a 16- to 17-year-old. The replies are full of others sharing similar stories. "I have a full ass beard and it put my alt on a 13-16 age range," one responded to the post. "I'm 23 (nearly 24) and it's forced me as a 16-17 year old," another wrote.

Another X user posted a video of them circumventing the age verification using a 3D animated avatar, which Roblox's system identified as over 18. Another user, somewhat unbelievably, fooled the system into thinking he was over the age of 21 by drawing on facial hair with a marker.
At present, nearly 80 active lawsuits accuse Roblox of enabling child exploitation, with some parents alleging their children encountered predators on the app. A Roblox spokesperson told Fast Company: "We're excited to see that tens of millions of users have already completed the process, proving that the vast majority of our community values a safer, more age-appropriate environment. To roll this out to a global community of over 150 million daily active users is a huge undertaking, and we're working to smooth out the transition."

Roblox has previously put out statements saying the company is constantly evaluating user behavior to determine if someone is significantly older or younger than expected. "In these situations, we will soon begin asking users to repeat the age-check process." Last week the company also became aware of instances where parents age-check on behalf of their children, leading to kids being aged to 21+.


Category: E-Commerce

 


2026-01-16 10:00:00| Fast Company

Imagine you're talking to someone and they suddenly start to add advertising to the exchange. What might that look like? In a 1965 episode of the classic sitcom I Dream of Jeannie, the protagonist uses her magical powers to create fake parents for herself in order to impress a date. She crafts them to be just like the people in television commercials, making them speak using sentences from commercials. Her synthetic parents appear friendly and normal, until they start talking, reciting ads verbatim for products like streak-away for gray hair, dish soap, Grippo denture adhesive, and deodorant. They have so much to say, yet communicate nothing at all.

Something similar might happen if OpenAI goes forward with its rumored plans to add advertising to ChatGPT. Last December, an article in Futurism, citing internal sources at OpenAI, suggested ad adoption could be near. Recently, The Information reported that the company is hiring digital advertising veterans and that it will install a secondary model capable of evaluating whether a conversation has commercial intent before offering up relevant ads in the chat responses.

Annoying ads within ChatGPT could be for things as banal as a grocery product, a local destination to visit, or a handyman service. But they could also be a lot of something else: something dangerous. Given ChatGPT's track record, some poor soul might be pouring their heart out to the chatbot, only to be advised of a special on rope at their local hardware store. I'm not making light of the latter: it could happen. There can't be true oversight with LLMs. And that's only one of their problems.

Context is the Holy Grail

OpenAI's advertising move is a bold and brilliant, but potentially terrible, crude attempt to automate contextual understanding, a missing link in the push toward combining big data and surveillance. For a long time, newspapers and radio stations were local and distributed.
As transportation connected us and technology improved, the opportunity to distribute more centralized news from single, larger sources became possible. Television began with a few channels and concentrated programming that was the same across broad regions. This ushered in a heyday for advertisers, who sponsored TV content and could show single ads to millions of viewers.

As a distributed technology, the internet disrupted many forms of traditional media, and advertisers have been scrambling to reach us in new ways. While technology has enabled advertisers to use our location in an attempt to home in on what might appeal to us, internet ads are often not contextually relevant to what we want or need.

What OpenAI intends to do with advertising, via ChatGPT's self-reported 900 million weekly users, will synthesize the local distributed model. This will enable the platform to reach into our homes in the way that mass television once did. It's an attempt to unify and bypass the interfaces of the phones and computers that we currently use. In the process, OpenAI will be creating a super platform for informational use and processing.

The algorithms don't know us

Within its current platform, ChatGPT offers a conversational medium of interaction and query; each chat captures how we use language, more detailed descriptions of the problems we seek to solve, and many of our needs. Thus, the opportunity for OpenAI to have platform control, along with access to our inner thoughts, all with the surveillance capability to compile these into targeted individual ads, is the ultimate goal for advertisers: to really reach us, deep inside our thoughts. However, this outcome is unlikely. The problem with this model is that it still relies on computational compiling and sorting. The algorithms won't know us, or form relationships with us.
Because of that, they can't actually recommend true advertising solutions to our problems, just as these algorithms can't solve our problems now. But their results can mimic helpfulness, just like Jeannie's synthetic parents. While collecting and compiling our online data has brought advertisers closer to knowing what they think we need, what has been missing is an understanding of the context of what these actions mean to us. Qualitative research, which helps to discover the how, why, and what of interaction, has been pushed aside in the rush to embrace big data.

The LLMs that feed chatbots are not magical: They are algorithms that statistically match and rank words from the sources the model was trained on. An LLM listening to our conversations will not "understand" context as human qualitative researchers can. Thus, the ads that ChatGPT will suggest from our conversations may seem like a match, but they're unlikely to offer anything contextually substantial.

Another idea OpenAI suggests is that sponsored results could get preferential treatment. Subscribers might get better-matching ads, but, again, because this is all based on word matching, it may not matter much. (It hasn't been revealed whether there will be an option to avoid such advertising completely.)

The trust is an illusion

An OpenAI spokesperson told The Information: "People have a trusted relationship with ChatGPT, and any approach would be designed to respect that trust." But there's a big difference between having a social relationship with someone and having a trusted social relationship. Many of us are trained to fill in social gaps when we interact with others who are trying to communicate with us. In that context, we may project sociability, and thus trust, onto them. By seeming to respond to us with a point of view and a chat style that feels personal, ChatGPT perpetuates that illusion of sociability and trust.
By leveraging our innate social behaviors, ChatGPT also leverages that behavioral goodwill. But that sociability and trust is in our heads. It isn't real; it's just an algorithm. ChatGPT is merely a way for OpenAI's LLM outcomes to be presented to us. Is it trusted and social to siphon people's knowledge and work to train a model? If OpenAI were a person, we'd say no, pointing out that doing so is akin to a sociopath stealing our ideas and work and presenting them to others. But because we converse with ChatGPT, we project a trust upon it that it can't earn, because it is not human.

OpenAI adding advertising to ChatGPT seems like an inevitability. If we use this tool, we need to remember that we cannot form bonds with it, that it cannot have a relationship with us, and that all it can do is word-match. Any ad it serves us will be based on what we tell it, but it can't "think" about all we tell it and propose an ad that speaks explicitly to us, as a trusted friend who knows us would do. It is best to keep that in mind as these tools evolve to seemingly understand us.

OpenAI as a company could try to earn its customers' trust by discovering what its customers want and need using qualitative research, rather than foisting its advertising decisions upon us. Even so, the idea that this advertising model will scale and deliver contextually relevant advertising to 900 million weekly users seems unrealistic. Context, especially driven through LLMs that already have issues with slop, hallucinations, and outright lies, can be a challenging match for advertisers, who need reliable recommendations to keep the integrity of their brands and reputations.

Without trust formed between entities, we're all at risk of being played: OpenAI, which believes its algorithms will deliver what they promise; the advertisers, who trust that their ads will accurately match the users' context and interest; and those who use ChatGPT, a service they trust that, in fact, seems intent upon using them for revenue instead.



 

2026-01-16 10:00:00| Fast Company

The new year is a time for resolutions. This year, governments, platforms, and campaigners all seem to have hit on the same ones: Children should spend less time online, and companies should know exactly how old their users are. From TikTok's infinite scroll to chatbots like xAI's Grok that can spin up uncensored answers to almost any question in seconds, addictive and inappropriate online options leave legislators and regulators worried.

The result is a new kind of arms race: Lawmakers, often spooked by headlines about mental health, extremism, or sexual exploitation, are turning to age gates, usage caps, and outright bans as solutions to social media's problems. Just in the past week, we've seen Grok become Exhibit A in the debate about harmful content as it helps undress users, while states consider or enact bans, blocks, and time limits on using tech.

"Right now, the regulatory debate seems to exclusively focus on how certain internet services are net negatives, and banning access to minors to such services," says Catalina Goanta, associate professor in private law and technology at Utrecht University in the Netherlands. That black-and-white approach is easy for politicians to parse, but doesn't necessarily communicate the nuance involved in tech and its potential for good. "The scientific debate shows us a much more nuanced landscape of what can be harmful to minors, and that will depend on so many more aspects than just a child having a phone in their hands," says Goanta.

Legislators are moving quickly to throw a protective shield around younger users. A December 2025 proposed law in Texas would have required Apple and Google to verify user ages and get parental consent for minors' app downloads, but it was blocked just before Christmas. Meanwhile, as outright bans are being blocked, states are pushing forward with rules that cap social media access. Virginia's default one-hour daily cap for under-16s was launched with a requirement for "commercially reasonable" age checks.
However, it has already been challenged in court by a lawsuit filed by NetChoice, an association that seeks to "make the Internet safe for free enterprise and free expression." The group, which includes Amazon, Google, Meta, and OpenAI as members, says imposing a time block on social media is like limiting the ability to read books or watch documentaries.

"All of the laws have been challenged, and the court's ruling on the Texas law doesn't bode well for the other state laws," says Adam Kovacevich, founder and CEO of the Chamber of Progress, which he describes as a center-left tech industry policy coalition. But, he says, some of this tough talk is also allegedly helped by big tech firms themselves: "It's important to keep in mind that the app store age verification bills have been written and advanced by Meta, largely as a way of getting themselves from defense onto offense."

The Texas law is just one of many that are cropping up around the United States, and around the world. Across the Atlantic, France is pursuing an Australia-style ban on social media for under-15s this year, while the U.K.'s official (if not likely) opposition party, the Conservatives, has also backed a social media ban for under-16s.

That court challenge is an augur of what's to come in 2026, reckons Kovacevich. Legislators keep pushing and pushing with age verification mandates, warning labels, and design mandates, "and they keep running into the same two buzzsaws again and again," he says: users' privacy rights and the First Amendment.

The legislative surge is part of a broader tech temperance movement aimed at social media, apps, and AI. In the U.K., the Online Safety Act's child-safety provisions came into practical effect in July 2025, requiring platforms likely to be accessed by children to implement "highly effective" age-assurance measures and shield young users from content promoting self-harm, suicide, violence, and pornography.
With Grok, the law is facing its first big test for the body in charge, communications regulator Ofcom. Across the European Union, the Digital Services Act's rules on minors' data and recommender systems are also tightening. The question now is whether courts, and users, will tolerate the friction these laws create.

Regulators have to resolve an inherent tension, says Goanta: "Do we want children to have agency over their access to and conduct on the internet, the children's rights narrative? Or do we consider that they have limited capacity because they are not yet fully developed, and their guardians get to make decisions for them?" She points out that there can be plenty of solutions that fall between both extremes. "But the resulting spectrum should be the focus of debates, and not moral panics."



 
