
2025-06-18 09:00:00| Fast Company

The average consumer subscribes to 4.5 streaming services, many of which offer content that feels largely indistinguishable from the rest. When Netflix disrupted film and television in the late 2010s, it introduced a new model of viewership: an endless blend of originals and archives, delivered through a finely tuned personalization algorithm. Today, Disney+, Hulu, HBO Max, Peacock, and many others follow the same playbook.

Not the Criterion Channel. The streamer rejects the infinite-content model, instead curating rotating collections of select films that appear for just a few months. Its offerings range from mass-market to niche indie: a recent example, “Surveillance Cinema,” matched the $350 million-earning Minority Report with the tiny French neo-noir Demonlover. It also turns away from algorithmic recommendations; every title is handpicked by a programmer.

Aliza Ma, the Criterion Channel’s head of programming, says that she’s “offended” by the Netflix model of curation. “It’s absurd in the face of art and curiosity that you would think somebody’s past behavior could indicate future taste,” she tells Fast Company.

This approach has earned the Criterion Channel a loyal following among artistically curious cinephiles, creating a stable, low-churn subscriber base. For just $10.99 a month, viewers in the U.S. and Canada can escape the clutches of streamer sludge. The mega-viral Criterion Closet doesn’t hurt either.

“I would have expected that broader is better,” Ma says. “It’s a brilliant surprise to us that the more specific we get, the more we pull focus on a subject or theme, the better it seems to reach people.”

[Image: Courtesy of Criterion]

A streamer without an algorithm

For over 30 years, Criterion was known as a seller and refurbisher of physical media. Its DVD and Blu-ray archives sustained the business, while the company licensed its films to several video-on-demand (VOD) services: first Mubi, then Hulu, and finally FilmStruck, the streamer from Turner Classic Movies. But when FilmStruck shut down in 2018, Criterion president Peter Becker and his team decided to create their own point of access. The Criterion Channel was running by 2019 and has since eclipsed the company’s physical media business. In 2024, Criterion and its sister company, Janus Films, were sold to billionaire Steven Rales, founder of the film studio Indian Paintbrush and a minority owner of the Indiana Pacers.

[Image: Courtesy of Criterion]

The channel’s focus on curation naturally narrows its appeal. In the ongoing streaming wars, Criterion isn’t trying to compete on scale. Instead, it leans into its niche. “You have to think you care about movies enough to want a streaming service really devoted to movies,” Becker says. But specificity also creates a highly loyal customer base, he adds.

Asked whether any one collection drove a surge in traffic to the site, Becker notes that there are different points of entry for everybody. Some collections are more popular within the streamer’s walls than others; both Ma and Becker reference the 2023 “High School Horror” set featuring movies like Donnie Darko and I Know What You Did Last Summer. But subscribers come more for the curation than for any individual film, meaning they’re likely to stay longer.

Michael Cunningham, acclaimed author of Day and The Hours (the latter of which was adapted into a film starring Meryl Streep and Nicole Kidman), is a subscriber to the Criterion Channel.
“I’m a fan because Criterion is keeping alive films that would otherwise fade away and be forgotten,” he writes in an email to Fast Company. “It reminds us that greatness resides in a wide range of movies, from Potemkin to Some Like It Hot.”

Estimating the Criterion Channel’s size is a difficult task. The company declined to provide Fast Company with revenue or user figures, saying only that it has “grown steadily since we launched.” When its predecessor FilmStruck shut down in 2018, the subscriber base was estimated at just 100,000. The Criterion Channel has likely surpassed this; it has over 100,000 downloads on the Google Play store alone. But that’s still small compared with other specialty streamers like Mubi, which has more than 5 million Google Play downloads.

Its audience is also shifting. “If you had gone back 10 or 15 years and looked at who was collecting DVDs and Blu-rays, you would have seen a heavy disproportion of people who were male and over 30,” Becker says. “That has been completely shattered.”

[Image: Courtesy of Criterion]

DVDs, writers, and that infamous closet

Criterion, the company behind the channel, still operates its specialty DVD business and commissions a stable of writers to pen essays on its archive. But the Criterion Channel is the company’s most far-reaching project, Becker says.

And then there’s the company’s infamous closet. It began in 2010, when Guillermo del Toro stepped into Criterion’s DVD archive in New York and picked out his favorites. Choosing among a collection organized only by spine number, del Toro professed his love for François Truffaut’s The 400 Blows. Criterion has continued to pump out these Closet Picks (the videos are now significantly less grainy) and posts them to YouTube.

“We record a couple a week, and we’re always amazed by the conversations we have in there,” Becker says. “I think it’s a relief for the people in the Closet, because they don’t have to talk about their own movies.”

Creatives see the Criterion Closet as more than a stop on their press tour, though. Griffin Dunne, star of films like Martin Scorsese’s After Hours, relished the opportunity to rifle through Criterion’s archives. “There are a few benchmarks in an actor’s or director’s career,” Dunne wrote in an email to Fast Company. “Getting your first job, any job, in the movie business. Seeing your name in a New York Times review for your first film. Getting nominated or winning for any of the EGOTs. Being invited to the Criterion Closet to talk about your favorite films.”

The closet has since gone mobile. Criterion now takes a portable version on the road, drawing fans who line up for hours. Becker even recalls a couple who got engaged inside. “We’re always amazed and gratified at how young the people who come out are,” he says, noting that most attendees are in their 20s and early 30s.

[Image: Courtesy of Criterion]

The traveling closet of films also reveals the diversity of Criterion’s audience. Few titles are picked more than a handful of times. While some favorites recur (Richard Linklater’s films, for example, or Anora), most picks are highly personal and eclectic.

Has the Criterion Closet helped funnel audiences back to the streamer or its paid offerings? Becker isn’t interested in talking shop. The closet wasn’t set up as a marketing tool, so the company doesn’t track it as one. But it has been a helpful brand extension, he concedes.

“When 13 million people see the Ben Affleck video, that’s a lot of people,” Becker says. “We’re definitely reaching more people than would have sought us out without it.”
Affleck’s first pick from the Criterion Closet was Jean Renoir’s The Rules of the Game, the 1939 French satire celebrated for its humanist worldview. It’s hard to imagine the film finding traction on Netflix. How would they package it? What thumbnail image or search-friendly pitch could make it click? Its age alone might be a barrier: back in March, the oldest title on Netflix was 1973’s The Sting.

But viewers can find The Rules of the Game on the Criterion Channel. It appears in a French Poetic Realism collection, alongside commentary from Cunningham, the novelist. They can watch the film, explore its historical context, and dip into criticism, too. That’s what the Criterion Channel offers: not just content, but curation.



2025-06-18 08:00:00| Fast Company

Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people, including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.

Developers are using natural language processing, for example (a form of AI that interprets written or spoken language), to try to detect patterns of threats, manipulation, and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most at risk for abuse.

When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have assisted social workers in prioritizing high-risk cases and intervening earlier.

But as a social worker with 15 years of experience researching family violence, and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator, I’ve seen how well-intentioned systems often fail the very people they are meant to protect.

Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements, not faces or voices, to detect physical violence. I’m grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?

New tech, old injustice

Many AI tools are trained to “learn” by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI. That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism.

A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families’ risk levels (the scores are given to hotline staff to help them screen calls) would have flagged Black children for investigation 20% more often than white children if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.

Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as aggressive at a significantly higher rate than Standard American English: up to 62% more often, in certain contexts. Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.

These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems, sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors. Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as from implicit biases and discretionary decision-making by overburdened caseworkers.

Surveillance over support

Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.
In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy. In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months, overwhelming staff and missing at least one real incident. The program’s accuracy “did not achieve a level that would be considered acceptable to staff and management,” according to the independent report.

Children are affected, too. In U.S. schools, AI surveillance tools like Gaggle, GoGuardian, and Securly are marketed as ways to keep students safe. Such programs can be installed on students’ devices to monitor online activity and flag anything concerning. But they’ve also been shown to flag harmless behaviors, like writing short stories with mild violence or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.

Other systems use classroom cameras and microphones to detect aggression. But they frequently misidentify normal behavior like laughing, coughing, or roughhousing, sometimes prompting intervention or discipline.

These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans, data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.

Care, not punishment

I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I’ve developed a framework of four key principles for what I call trauma-responsive AI.

Survivor control: People should have a say in how, when, and if they’re monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.

Human oversight: Studies show that combining social workers’ expertise with AI support improves fairness and reduces child maltreatment, as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.

Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM’s AI Fairness 360, Google’s What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models (a sketch of what such an audit can look like follows this list).

Privacy by design: Technology should be built to protect people’s dignity. Open-source tools like Amnesia, Google’s differential privacy library, and Microsoft’s SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people’s identities in video or photo data.
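To make the bias-auditing principle concrete, here is a minimal sketch of a group-level audit built on Fairlearn, one of the open-source tools named above. It is an illustration under stated assumptions, not a workflow the article prescribes: the CSV file and the column names ("investigate", "race") are hypothetical placeholders, and Python with pandas, scikit-learn, and Fairlearn installed is assumed.

# Hypothetical bias audit: does a risk model flag some groups more than others?
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate

# Placeholder data: predictive features, a binary "investigate" label, and a
# sensitive attribute used only for auditing, never as a model input.
df = pd.read_csv("hypothetical_referrals.csv")
X = df.drop(columns=["investigate", "race"])
y = df["investigate"]
race = df["race"]

X_train, X_test, y_train, y_test, race_train, race_test = train_test_split(
    X, y, race, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)

# MetricFrame computes each metric separately per group, which is how a
# disparity like the 20% over-flagging found in the Allegheny County study
# would surface.
audit = MetricFrame(
    metrics={"flag_rate": selection_rate, "recall": recall_score},
    y_true=y_test,
    y_pred=predictions,
    sensitive_features=race_test,
)
print(audit.by_group)      # flag rate and recall broken down by group
print(audit.difference())  # widest gap between groups for each metric

An audit like this only surfaces disparities. As the Allegheny County example shows, reducing harm depends on pairing those numbers with human judgment about whether and how to act on them.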
Honoring these principles means building systems that respond with care, not punishment. Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate for including survivors in all stages of tech development, from needs assessments to user testing and ethical oversight.

Legislation is important, too. On May 5, 2025, for example, Montana’s governor signed a law restricting state and local government from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.

As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.

Aislinn Conrad is an associate professor of social work at the University of Iowa. This article is republished from The Conversation under a Creative Commons license. Read the original article.



2025-06-18 04:11:00| Fast Company

Gen Zers are putting their money where their mouths are when it comes to shopping. Nearly all Gen Z consumers (96%) say they shop with intention, and 66% believe their purchases should reflect their personal values, according to the newly released Lightspeed Commerce report, which surveyed over 2,000 North American shoppers.

Spending habits have never been more visible, thanks to social media. Today’s consumers have more ways than ever to signal their morals and values, and more platforms to share those choices. Posting shopping hauls and “empties” videos, or filming a fit check with coffee in hand, has become prime social media fodder. In an age where everything is content, more consumers are choosing brands that reflect who they are and what they stand for.

“A hallmark of Gen Z is coming of age in a hyper-connected world. In this world, every follow, like, repost, and even purchase is a direct reflection of a person’s identity and values,” Lightspeed CEO Dax Dasilva tells Fast Company. “Through this connected world, there is a never-ending exposure to global issues, where activism, accountability, and cancel culture move at the speed of light.”

Today, the wrong purchase can carry social consequences, not just from peers but from the broader judgment of the internet. This pressure is especially strong among Gen Z: 32% fear being canceled for supporting the wrong brands, more than five times the rate among boomers (6%).

“In many ways, this fear of being judged or canceled and the understanding of the weight of their buying decisions differentiates Gen Z from older generations, who have traditionally shopped based on things like price or quality,” Dasilva says.

This trend, which Lightspeed calls “value spending,” is part of a broader consumer shift. Nearly all consumers (92%) identify as at least somewhat intentional in their purchases. While price (78%) and quality (67%) remain top priorities across generations, purchasing decisions that align with personal values or identity are close behind, cited by 62% of respondents. In the past six months, 27% of consumers made purchases based on national pride; 18% supported brands tied to charitable or social causes; another 18% chose products for their sustainability impact; and 15% factored in a CEO’s political alignment. For 32% of these value spenders, this is a new behavior, but half believe their spending carries more influence than ever before. “Value for money” has taken on a new meaning.

