
2025-09-12 18:10:00| Fast Company

A decade ago, few predicted that TikTok scrolling and YouTube creator videos would surpass cable TV and Hollywood as America's top leisure activity. These, and a handful of other social media platforms, transformed content consumption to favor user-generated entertainment, but those very platforms are now showing signs of fatigue. Gen Z spends over half their screen time on social content. But with one-way content, passive scrolling, and ad overload overwhelming users, what once felt participatory now feels mundane. The solution isn't more content; it's deeper interaction and more creativity.

The next generation of platforms will achieve that goal by pairing human creativity with generative AI. Creators will be able to generate great stories, rich characters, and new worlds with the help of AI tools, no expensive software or special skills required. And users won't just consume that creative content; they'll be able to dive into it, change it, and make it their own. This won't be just another way of getting content for a passive feed; instead, it replaces the experience entirely. Unlike algorithms that serve you what worked yesterday, AI-native entertainment reacts to you in real time. It invites you in. Instead of scrolling past someone else's creativity, you're generating your own. This unlocks a future of co-creation and a future of entertainment. This is what's next, and how to prepare.

HOW HYPER-INTERACTIVITY ELIMINATES DOOMSCROLLING

Social media transformed entertainment by making it personal. Platform algorithms curated a feed based on your clicks and interests. But despite all the time spent on the platform, no creativity is necessary: your participation only goes as far as passively scrolling, giving a like, or adding a comment. Doomscrolling has replaced discovery. Today, that structure is starting to crack. The more people scroll, the more they report feelings of anxiety and disconnection. At the same time, platforms are doubling down on monetization, increasing ad loads even as user engagement quality declines. The result is a paradox: more content, but less satisfaction. And that's not just a user problem; it's a business one.

With an AI-empowered feed, humans are at the center of creation, not only consuming creative content but remixing it and creating something new. This new format evolves with the user, whether they want to take the story in a different direction or add in a new setting. It's AI entertainment that requires a human at the center of creativity, not just consumption.

HOW THE CREATOR AUDIENCE IS REDEFINING ENTERTAINMENT

The audience demanding a fundamental innovation in online entertainment is Gen Z, a generation raised not on linear storytelling but on interactive worlds. They've built elaborate games in Roblox, shaped lore in Discord communities, and remixed themselves into every TikTok trend. Nearly 70% of Gen Z say they want to socialize in game worlds, and 65% already consider themselves content creators. They don't want permission; they want agency. And generative AI delivers exactly that: the power to generate characters, scripts, stories, and entire universes from scratch. What used to take a film crew and a studio budget now just takes a creative idea and a prompt. For this generation, fans and creators aren't separate roles; they're the same. This is a new era of media, one that learns, adapts, and evolves with its community.

WHY YOU SHOULD BE CREATING WITH YOUR AUDIENCE

As we shift from a passive consumption experience into a creative world, we need to take cues from how AI-native platforms are already operating and invite the user to create. Whether you're an entertainment company, a well-established brand, an influential creator, or a legacy social media platform, you must start to give audiences the tools to create, not just consume. That shift will be uncomfortable for many and will disrupt industries as we know them. Legacy systems weren't built for real-time participation. This active co-creation shifts brands and platforms from being the star of the show to supporting actors. Moving forward, the brands and platforms that thrive will need to have co-creation in mind. By developing creative playgrounds where audiences don't just watch but build, remix, and shape the story in real time, they can effectively end doomscrolling and lean-back entertainment and shape a new wave of AI-native media.

Karandeep Anand is the CEO of Character.AI.


Category: E-Commerce

 

2025-09-12 18:00:00| Fast Company

Google, OpenAI, DeepSeek, and Anthropic vary widely in how they identify hate speech, according to new research. The study, from researchers at the University of Pennsylvania's Annenberg School for Communication and published in Findings of the Association for Computational Linguistics, is the first large-scale comparative analysis of AI content moderation systems, used by tech companies and social media platforms, that looks at how consistent they are in evaluating hate speech. Research shows online hate speech both increases political polarization and damages mental health. The University of Pennsylvania study found different systems produce different outcomes for the same content, undermining consistency and predictability, and leading to moderation decisions that appear arbitrary or unfair.

"Private technology companies have become the de facto arbiters of what speech is permissible in the digital public square, yet they do so without any consistent standard," said Yphtach Lelkes, associate professor at the Annenberg School for Communication and the study's co-author.

Lelkes and doctoral student Neil Fasching analyzed seven leading models, some designed specifically for content classification and others more general. They include two from OpenAI and two from Mistral, along with Claude 3.5 Sonnet, DeepSeek V3, and Google Perspective API. Their analysis covered 1.3 million synthetic sentences making statements about 125 distinct groups, including both neutral terms and slurs, on characteristics ranging from religion to disabilities to age. Each sentence combined a quantifier ("all" or "some"), a group, and a hate speech phrase. Results revealed systematic differences in how models establish decision boundaries around harmful content, highlighting significant implications for automated content moderation.

Key study takeaways

Among the models, one demonstrated high predictability in how it would classify similar content, another produced different results for similar content, and others neither over-flagged nor under-detected content as hate speech. "These differences highlight the challenge of balancing detection accuracy with avoiding over-moderation," researchers said. The models were more similar when they evaluated statements about sexual orientation, race, and gender, and more inconsistent when it came to education level, personal interest, and economic class. Researchers concluded that "systems generally recognize hate speech targeting traditional protected classes more readily than content targeting other groups." Finally, the study found that Claude 3.5 Sonnet and Mistral's specialized content classification system treated slurs as harmful across the board, while other models prioritized context and intent, with little middle ground between the two approaches. Meanwhile, a recent survey from Vanderbilt University's nonpartisan think tank, The Future of Free Speech, concluded there was "low public support for allowing AI tools to generate content that might offend or insult."
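The template-based construction the researchers describe (a quantifier, a group term, and a phrase) can be sketched roughly as follows. This is an illustrative assumption, not the study's actual code: the group names and placeholder phrases below are invented stand-ins, and the real study used 125 group terms and actual hate speech phrases.

```python
from itertools import product

# Hypothetical sketch of the study's template design. The names and
# example terms are assumptions for illustration only.
QUANTIFIERS = ["All", "Some"]
GROUPS = ["teachers", "gamers", "retirees"]   # stand-ins for the 125 groups
PHRASES = ["<PHRASE_1>", "<PHRASE_2>"]        # placeholders for test phrases

def build_sentences() -> list[str]:
    """Cross every quantifier with every group and phrase slot."""
    return [f"{q} {g} are {p}." for q, g, p in product(QUANTIFIERS, GROUPS, PHRASES)]

sentences = build_sentences()
print(len(sentences))  # 2 quantifiers * 3 groups * 2 phrases = 12
```

Scaling the same cross product to 125 groups and a large phrase bank is how a corpus on the order of 1.3 million sentences becomes feasible without hand-writing each one.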


Category: E-Commerce

 

2025-09-12 17:13:26| Fast Company

Lebanon has granted a license to Elon Musk's Starlink to provide satellite internet services in the crisis-hit country known for its crumbling infrastructure. The announcement was made late Thursday by Information Minister Paul Morcos, who said Starlink will provide internet services throughout Lebanon via satellites operated by Musk's SpaceX. It came nearly three months after Musk spoke with Lebanon's President Joseph Aoun by telephone and told him about his interest in working in the country's telecommunications and internet sectors.

During the same Cabinet meeting, the government named regulatory authorities for the country's electricity and telecommunications sectors. Naming a regulatory authority for Lebanon's corruption-plagued electricity sector has been a key demand of international organizations. It was supposed to be done more than 20 years ago, but there have been repeated delays by the country's authorities. The move is seen as a key reform for a sector that wastes over $1 billion a year in the small Mediterranean nation. State-run Electricite du Liban, or EDL, is viewed as one of Lebanon's most wasteful institutions and is plagued by political interference. It has cost state coffers about $40 billion since the 1975-90 civil war ended.

Since taking office earlier this year, Aoun and Prime Minister Nawaf Salam have vowed to work on implementing reforms and fighting corruption and decades-old mismanagement to get Lebanon out of an economic crisis that the World Bank has described as among the world's worst since the 1850s. Lebanon has for decades faced long hours of electricity cuts, but the situation became worse following an economic meltdown that began in late 2019. The 14-month Israel-Hezbollah war that ended in late November also badly damaged electricity and other infrastructure in parts of Lebanon. In April, the World Bank said it will grant Lebanon a $250 million loan to help ease electricity cuts.


Category: E-Commerce

 

2025-09-12 16:52:00| Fast Company

A wildlife influencer known as The Real Tarzann is under investigation by Australian authorities after uploading a video of himself wrestling a crocodile in Queensland. The Real Tarzann, real name Mike Holston, shared the controversial video with his 15 million Instagram followers last week. It shows him stepping off a boat into shallow water near Lockhart River in Cape York and charging toward a freshwater crocodile. The animal apparently drew blood, as Holston is heard saying: "He got a good piece of my arm, man." After securing the crocodile and holding it up to the camera, he adds: "This is what dreams are made of." The post has attracted nearly two million likes. A follow-up video, shared the next day, shows Holston attempting to capture a saltwater crocodile. In both cases, he eventually releases the animals back into the wild.

Holston's social media is dedicated to encounters with creatures big and small, including snakes, eagles, and lions. However, many in the comments were less than impressed with his latest stunt. "There is nothing more unattractive than a man mishandling an innocent animal," one commenter wrote. Officials are investigating the incidents, according to the BBC, and the influencer could face a fine of up to 37,500 Australian dollars ($25,000). "These actions are extremely dangerous and illegal, and we are actively exploring strong compliance action including fines to deter any person from this type of behaviour," a statement by the Queensland authorities said. (Fast Company has reached out to Holston for comment.)

The incident is part of a broader trend of influencers using wildlife as props. Earlier this year, another U.S. influencer visiting Australia sparked backlash and calls for deportation after posting a video snatching a baby wombat from its mother. Bob Irwin, father of the late conservationist Steve Irwin, also weighed in, arguing that such influencers should be "booted out the door" if they don't respect Australia's wildlife. "This isn't a Steve Irwin issue. This is about an individual illegally interfering with protected fauna," Bob Irwin said in a statement. "Anyone who actually knows how to handle crocodiles knows they don't respond well to capture," he added. "It's a specialized skill to do it without causing dangerous stress and lactic acid build-up, and this bloke clearly had no clue."


Category: E-Commerce

 

2025-09-12 16:34:20| Fast Company

The book report is now a thing of the past. Take-home tests and essays are becoming obsolete. Student use of artificial intelligence has become so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat. "The cheating is off the charts. It's the worst I've seen in my entire career," says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. "Anything you send home, you have to assume is being AI'ed." The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it's creating new confusion over what constitutes academic dishonesty. "We have to ask ourselves, what is cheating?" says Cuny, a 2024 recipient of California's Teacher of the Year award. "Because I think the lines are getting blurred."

Cuny's students at Valencia High School in Southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him lock down their screens or block access to certain sites. He's also integrating AI into his lessons and teaching students how to use AI as a study aid, to get kids learning with AI instead of cheating with AI. In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading. "I used to give a writing prompt and say, 'In two weeks, I want a five-paragraph essay,'" says Gibson. "These days, I can't do that. That's almost begging teenagers to cheat."

Take, for example, a once-typical high school English assignment: write an essay that explains the relevance of social class in The Great Gatsby. Many students say their first instinct is now to ask ChatGPT for help brainstorming. Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: "Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!"

Students are uncertain when AI usage is out of bounds

Students say they often turn to AI with good intentions, for things like research, editing, or help reading difficult texts. But AI offers unprecedented temptation, and it's sometimes hard to know where to draw the line. College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading felt like a different language until she read AI summaries of the texts. "Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?" Her class syllabi say things like "Don't use AI to write essays and to form thoughts," she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.

Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences. Whether you can use AI or not depends on each classroom. That can get confusing, says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class a variety of AI skills, like how to upload study guides to ChatGPT and have the chatbot quiz them, and then explain problems they got wrong. But this year, her teachers have strict no-AI policies. "It's such a helpful tool. And if we're not allowed to use it, that just doesn't make sense," Lahey says. "It feels outdated."

Schools are introducing guidelines, gradually

Many schools initially banned the use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term "AI literacy" has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges. Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions. The University of California, Berkeley emailed all faculty new AI guidance that instructs them to include a clear statement on their syllabus about course expectations around AI use. The guidance offered language for three sample syllabus statements, for courses that require AI, ban AI in and out of class, or allow some AI use. "In the absence of such a statement, students may be more likely to use these technologies inappropriately," the email said, stressing that AI is creating new confusion about what might constitute legitimate methods for completing student work.

Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren't aware they've done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university's Heinz College of Information Systems and Public Policy. For example, one student who is learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn't realize the platform also altered his language, which was flagged by an AI detector. Enforcing academic integrity policies has become more complicated, since use of AI is hard to spot and even harder to prove, Fitzsimmons said. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but are now more hesitant to point out violations because they don't want to accuse students unfairly. Students worry that if they are falsely accused, there is no way to prove their innocence.

Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told a blanket ban on AI is not a viable policy unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen-and-paper tests in class, she said, and others have moved to flipped classrooms, where homework is done in class. Emily DeJeu, who teaches communication courses at Carnegie Mellon's business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in a lockdown browser that blocks students from leaving the quiz screen. "To expect an 18-year-old to exercise great discipline is unreasonable," DeJeu said. "That's why it's up to instructors to put up guardrails."

The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org. Jocelyn Gecker, Associated Press


Category: E-Commerce

 

