YouTube star Jimmy Donaldson, aka MrBeast, is the face of the online video-sharing platform. He tops the platform's most-subscribed list, with more than 400 million people following his exploits. Online video has made Donaldson rich, with his business worth an estimated $1 billion. However, the megastar is now embroiled in controversy following the launch of a new AI-powered thumbnail generator.

The tool, developed with the analytics platform Viewstats, was promoted in now-deleted videos by Donaldson as a way for creators to easily generate eye-catching thumbnails, including the ability to swap faces and styles with existing popular videos. The product was condemned by fellow YouTubers and artists, who accused MrBeast of facilitating the theft of their creative work and brand identity. Prominent creator Jacksepticeye (Seán McLoughlin) publicly criticized the tool after his own logo and thumbnail style were used in promotional materials without his consent, calling the practice deeply unethical and harmful to the creative community. "I hate what this platform is turning into. Fuck AI," Jacksepticeye posted on X. (Neither McLoughlin nor Donaldson responded to Fast Company's request for comment.)

Donaldson quickly acknowledged the concerns, pledging to make changes to the tool. "I'll build this more in a way to be inspiration for artists/a tool they use and not replace them," he posted on X. Still, the incident has gained momentum, provoking angry responses and heated debate about the endorsement of such an AI product. For example, another YouTuber, Eric Pointcrow, said of Donaldson: "What a piece of work."

The mini-drama has riled the YouTube community in a way few other issues have, touching on a common occurrence in the space: the copying of video thumbnails. Why? "I think there are several things going on here," says Jess Maddox, associate professor at the University of Alabama, who studies platform dynamics on sites like YouTube.

Primarily, Maddox believes that underlying the controversy is some good old-fashioned YouTube drama. The platform often responds as a mob to things it deems offensive, so it's unsurprising that this incident has triggered so much anger. "YouTube pioneered online pile-on culture, in which everyone wants a piece of someone else's name, image, or likeness," says Maddox. "But it's actually quite hard to go after MrBeast, who's one of the biggest and most successful creators. He's almost too big to fail, or ride his coattails."

Beyond that, Maddox points out that the technology, and the broader fear of automation, is also driving the intensity of the response. "AI in the creator economy is incredibly controversial right now," says Maddox. "Many do view it as theft, and other creators view not using it as a badge of honor: that they can say with pride they either do all the work themselves or pay their team fairly to do so."

Donaldson's decision to launch the AI product also came just after YouTube admitted that it used a subset of the 20 billion videos on its platform to train Google's Veo 3 video-generation AI model, a fact that may have further amplified the backlash. Yet a recent small survey of U.K. YouTube creators suggests that up to four in five creators are already using AI themselves, saving nearly eight hours of work each week.

"What's caused this backlash isn't just the tool, it's what it represents," agrees Dom Smales, cofounder of GloMotion Studios, a digital entertainment studio and longstanding voice in the YouTube space.
"When the most powerful creator on the platform automates creativity using other creators' work, it hits a nerve. It further exposes the growing gap between mega-creators and everyone else, which has to be handled carefully as this is a community above everything else."

This combination of factors helps explain why the criticism has been so strong and so sustained. "MrBeast clearly has enough money to pay for this work, so the fact that he isn't doesn't paint him in the most positive light," says Maddox.

The idea that such AI systems might worsen existing problems is also top of mind. "If the biggest YouTube creator out there is using AI, I think many creators are nervous this will unfairly exacerbate the divide between big creators and mega-creators, never mind the divide between those and micro- and nano-creators," Maddox says. "AI is a labor issue, and it risks making an already unequal creator economy even more unequal."

Yet Smales cautions that people shouldn't be so quick to vilify AI, so long as it's used responsibly. "AI is here to stay and can be a superb tool to level creators up and allow further democratization of the creator economy," he says. "I'm building businesses that use it, but I believe it has to be developed with creators, not just deployed on them."
A team of prominent AI researchers, led by Databricks and Perplexity cofounder Andy Konwinski, has launched Laude Institute, a new nonprofit that helps university-based researchers turn their breakthroughs into open-source projects, startups, or large-scale products with real-world impact. Laude brings together top academic and industry leaders to guide promising AI research out of the lab and into the world. Its mission: help more AI ideas cross the gap from paper to product.

The effort builds on a growing belief within the AI and open-source communities that the field's biggest advances should be developed in public, not behind corporate walls. Many promising breakthroughs happen inside university labs, but often end up as research papers with no clear path to deployment. At the same time, as AI's development costs and potential rewards have skyrocketed, the need to support ambitious academic work outside of the big tech ecosystem has become more urgent.

Konwinski, who was named one of Bloomberg's "New Billionaires of the AI Boom," has assembled a high-profile board for Laude. Among its members are Google's head of AI Jeff Dean; board chairman and Turing Award winner Dave Patterson; and Joëlle Pineau, a professor at McGill University and the Quebec AI Institute (Mila) and former Global VP of AI Research at Meta (FAIR).

Andy Konwinski at Laude Institute's Inaugural SYR Summit in San Francisco, June 18 [Photo: Marc Fong]

Laude's core goal is to replicate and enhance the university lab model used by departments like UC Berkeley's, known for foundational AI research. As a PhD student at Berkeley, Konwinski helped develop Apache Spark and later cofounded Databricks to commercialize it. That experience shaped his vision for Laude. "I could do another company," he says, "but I'm honestly more interested in helping find other Databricks and Perplexities and Linux and the internet and the personal computer."

Laude's flagship funding initiative, Moonshots, will initially focus on projects in four key areas: reinventing healthcare delivery (for example, by developing an AI-powered insulin pump), accelerating scientific breakthroughs (such as visualizing black holes or discovering new materials), revitalizing civic discourse (helping voters find common ground on controversial issues), and helping workers reskill for the AI age. These are domains where AI could have significant positive impact, but where the technology's potential is still largely untapped, Konwinski explains.

Laude, a nonprofit with a public benefit corporation operating arm, will award grants to ambitious moonshot projects that may take three to five years to complete. Selected projects will receive $250,000 seed grants, with the most promising progressing to multiyear research labs led by faculty affiliated with universities.

Moonshots, one of Laude Institute's flagship programs, fund big swings at species-level challenges with multiyear research labs. [Image: Courtesy of laude.org]

"Funding ambitious, high-impact work for long periods can give academic labs the autonomy to really identify and tackle significant societal challenges," Dean says. "This longer-term view can enable not just writing research papers but also creation of full-fledged working systems, open-source software to catalyze broader communities, or other forms of impact."

In addition, Laude will support "slingshot" projects, providing fast, low-friction grants and embedded support for individual researchers aiming to launch startups or open-source projects.
This could mean tens of thousands of dollars' worth of compute time, funding for PhD or postdoc support, or embedding engineers, designers, and communicators to help bring a product to completion.

Slingshots, one of Laude Institute's flagship programs, give the right resources to the right researchers at the right time. [Image: Courtesy of laude.org]

"We talk about the right resource for the right researcher at the right time in order to maximize how many more open-source breakouts and how many more companies we can build," says Konwinski, who has pledged $100 million of his own money to fund the first round of grants.

Laude's primary value will not just be resources like talent and compute power, but guidance from people who have successfully brought technologies from lab to market. "The academic model, when done well, can be excellent, but it doesn't necessarily have this ability to accelerate research at key points," Pineau says. "You need to bring in more resources, build artifacts that go beyond papers, and get them in front of users."

A network of advisers, including top professors and industry leaders, will help shape research projects by offering insights on product launches, multidisciplinary viewpoints, and best practices for open-source distribution. Among the advisers are Databricks CEO Ali Ghodsi, Jake Abernethy of Georgia Tech and Google DeepMind, Ludwig Schmidt of Stanford and Anthropic, Kurtis Heimerl of the University of Washington, Berkeley RISElab director Ion Stoica, and researcher-professors from Caltech, the University of Wisconsin, and the University of Illinois Urbana-Champaign.

For some researchers, Laude may provide an appealing alternative to venture capital. "There are some projects where it's probably too risky for venture capitalists to take on," Pineau says, while noting that not all VCs are the same. "They tend to be a little bit shortsighted and want to see returns within a certain time frame, whereas a moonshot can tolerate higher risks." There are also practical considerations. Some researchers prefer to keep one foot in academia, while VCs often want them to go full-time in the commercial space.

Berkeley roots

The inspiration for Laude dates back to Konwinski's days as a PhD student at Berkeley from 2007 to 2012. Patterson, then a professor in the computer science department, was instrumental in developing Berkeley's lab system. There, professors lead labs that attract PhD students and postdocs to pursue emerging fields like reinforcement learning. "We developed this model of research labs with an opinionated style that were multidisciplinary," Patterson says. Experts from across the university were brought in to offer fresh perspectives on the work. Labs were structured with five-year sunset clauses to encourage high-impact results.

About a year ago, after founding Databricks and Perplexity, Konwinski returned to the department with the goal of using his new wealth to give more young researchers the experience he had. At Berkeley, PhD students sometimes write vision papers on controversial topics. As a student, Konwinski cowrote one on the value of cloud computing for research. Upon returning, he wanted to take on an even more ambitious subject: how to accelerate and improve the real-world impact of AI research. The result was "Shaping AI," a paper coauthored by Konwinski, Patterson, and others, with input from Anthropic CEO Dario Amodei, Google DeepMind researcher and 2024 Nobel Prize winner John Jumper, Eric Schmidt, and former President Barack Obama.
The idea for the Moonshot program took shape through writing the paper. "We recognized that a way to help shape AI's impact was to set up prizes and research labs, similar to what we did at Berkeley," Patterson says. "The new idea was inducement prizes like the X-Prize, and also new labs in North America to tackle big problems and improve AI's outcomes for public good."

How Laude fits in

Laude is not exactly an incubator or an accelerator. It represents something new, with a clear "AI for good" mission and a conscientious approach to where and how the research is done. That starts with transparency. "One of the requirements of this funding is to keep everything in the open," Patterson says. "There are not many requirements for grant recipients, but one is everything must be open source."

Konwinski is also focused on how researchers handle both the benefits and risks of the technology they create. Returning to Berkeley, he was troubled by the polarized tone of the AI debate. "The AI discourse has ended up a bit polarized," he says. "It's the accelerationists and doomers. You either pump the brakes or you're pedal-to-the-metal. That loses nuance." Konwinski believes in a rational middle ground. It would be just as much of a tragedy to ignore the upsides, especially medium- and near-term upsides, as it would be to ignore the catastrophic potential.

Laude will encourage researchers to participate in public discussions about their work, partly to ensure they appreciate the weight of the decisions they are making. Too often, he says, executives like Sam Altman or Sundar Pichai lead the conversation about breakthrough technologies, not the Ilya Sutskevers and Jeff Deans who actually create them.

Getting started

On Thursday, June 19, Konwinski's voice was nearly gone after presiding over Laude's first Ship Your Research Summit the day before in San Francisco. The event brought together 70 handpicked researchers from more than two dozen universities for a day of salon-style discussions. Speakers included Jeff Dean and Dave Patterson, along with an off-the-record session with the Databricks founding team. Laude plans to make the summit an annual event to strengthen its community and attract new talent in computer science.

Konwinski is particularly passionate when talking about Laude's community-building role. He wants Laude to serve as an anchor for researchers with strong academic ties who believe in open source and are motivated to use AI to tackle tough problems and seize new opportunities. "It means you put people in a room and you make them like part of something bigger than themselves," he says. "It's like, 'Wow, I'm with my people here who want to move humanity forward by turning research into breakthroughs.' That's special."

Shortly after the summit, Laude announced its first major investment: $3 million a year for five years, comparable to a National Science Foundation grant, to fund a new AI-focused lab at UC Berkeley. The lab, led by a team of Berkeley's top researchers including Ion Stoica, Matei Zaharia, Joey Gonzalez, and Raluca Ada Popa, is set to open in 2027.
Maria Weston Kuhn had one lingering question about the car crash that forced her to have emergency surgery during a vacation in Ireland: Why did she and her mother sustain serious injuries while her father and brother, who sat in the front, emerged unscathed?

"It was a head-on crash and they were closest to the point of contact," said Kuhn, now 25, who missed a semester of college to recover from the 2019 collision that caused her seat belt to slide off her hips and rupture her intestines by pinning them against her spine. "That was an early clue that something else was going on."

When Kuhn returned home to Maine, she found an article her grandma had clipped from Consumer Reports and left on her bed. Women are 73% more likely to be injured in a frontal crash, she learned, yet the dummy used in vehicle tests by the National Highway Traffic Safety Administration dates back to the 1970s and is still modeled almost entirely off the body of a man.

A survivor becomes an activist

Kuhn, who is starting law school at New York University this fall, took action and founded the nonprofit Drive US Forward. Its aim was to raise public awareness and eventually encourage members of Congress to sign onto a bill that would require NHTSA to incorporate a more advanced female dummy into its testing. The agency has the final word on whether cars get pulled from the market, and the kind of dummy used in its safety tests could impact which ones receive coveted five-star ratings.

"It seems like we have an easy solution here where we can have crash test dummies that reflect an average woman as well as a man," Sen. Deb Fischer, a Nebraska Republican who has introduced the legislation the past two sessions, told The Associated Press. Senators from both parties have signed onto Fischer's "She Drives Act," and the transportation secretaries from the past two presidential administrations have expressed support for updating the rules. But for various reasons, the push for new safety requirements has been moving at a sluggish pace. That's particularly true in the U.S., where much of the research is happening and where around 40,000 people are killed each year in car crashes.

Evolution of a crash test dummy

The crash test dummy currently used in NHTSA five-star testing is called the Hybrid III, which was developed in 1978 and modeled after a 5-foot-9, 171-pound man (the average size in the 1970s but about 29 pounds lighter than today's average). What's known as the female dummy is essentially a much smaller version of the male model with a rubber jacket to represent breasts.
It's routinely tested in the passenger seat or the back seat but seldom in the driver's seat, even though the majority of licensed drivers are women. "What they didn't do is design a crash test dummy that has all the sensors in the areas where a woman would be injured differently than a man," said Christopher O'Connor, president and CEO of the Farmington Hills, Michigan-based Humanetics Group, which has spent more than a decade developing and refining one.

A female dummy from Humanetics equipped with all of the available sensors costs around $1 million, about twice the cost of the Hybrid used now. But, O'Connor says, the more expensive dummy far more accurately reflects the anatomical differences between the sexes, including in the shape of the neck, collarbone, pelvis, and legs, which one NHTSA study found account for about 80% more injuries among women in a car crash compared to men. Such physical dummies will always be needed for vehicle safety tests, and to verify the accuracy of virtual tests, O'Connor said.

Europe incorporated the more advanced male dummy developed by Humanetics' engineers, the THOR 50M (based on a 50th-percentile man), into its testing procedures soon after Kuhn's 2019 crash in Ireland. Several other countries, including China and Japan, have adopted it as well. But that model and the female version the company uses for comparison, the THOR 5F (based on a 5th-percentile woman), have been met with skepticism from some American automakers, who argue the more sophisticated devices may exaggerate injury risks and undercut the value of some safety features such as seat belts and airbags.

A debate over whether more sensors mean more safety

Bridget Walchesky, 19, had to be flown to a hospital, where she required eight surgeries over a month, after a 2022 crash near her home in Sheboygan, Wisconsin, that killed her friend, who was driving. While acknowledging the seat belt likely saved her life, Walchesky said some of the injuries, including her broken collarbone, were the result of it pinning her too tightly, which she views as something better safety testing focused on women could improve. "Seat belts aren't really built for bodies on females," Walchesky said. "Some of my injuries, the way the force hit me, they were probably worsened."

The Alliance for Automotive Innovation, an industry trade group, said in a statement to the AP that the better way to ensure safety, which it called its top priority, is through upgrades to the existing Hybrid dummy rather than mandating a new one. "This can happen on a faster timeline and lead to quicker safety improvements than requiring NHTSA to adopt unproven crash test dummy technology," the alliance said.

Humanetics' THOR dummies received high marks in the vehicle safety agency's early tests. Using cadavers from actual crashes to compare the results, NHTSA found they outperformed the existing Hybrid in predicting almost all injuries, including to the head, neck, shoulders, abdomen, and legs. A separate review by the Insurance Institute for Highway Safety, a research arm funded by auto insurers, was far more critical of the dummy's ability to predict chest injuries in a frontal crash. Despite the vast expansion in the number of sensors, the insurance institute's testing found, the male THOR dummy was less accurate than the current Hybrid dummies, which also had limitations. "More isn't necessarily better," said Jessica Jermakian, senior vice president for vehicle research at IIHS.
"You also have to be confident that the data is telling you the right things about how a real person would fare in that crash."

The slow pace of changing the rules

NHTSA's budget plan commits to developing the female THOR 5F version with the ultimate goal of incorporating it into the testing. But there could be a long wait, considering the THOR's male version adopted by other countries is still awaiting final approval in the U.S. A 2023 report by the Government Accountability Office, which conducts research for Congress, cited numerous "missed milestones" in NHTSA's development of various crash dummy enhancements, including in the THOR models. Kuhn acknowledges being frustrated by the slow process of trying to change the regulations. She says she understands why there's reluctance from auto companies if they fear being forced to make widespread design changes with more consideration for women's safety.