Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Inside the new Grok 3 model

In just two years, Elon Musk's xAI has become one of a dozen or so labs capable of developing state-of-the-art AI models. Now xAI is out with its Grok 3 large language model, which beats frontier models such as OpenAI's GPT-4o and DeepSeek's V3 in common mathematics, science, and coding benchmarks by a wide margin. Meanwhile, the smaller Grok 3-mini performs on par with the larger competing models. The new Grok model reportedly was trained using unprecedented computing power, starting with a cluster of 100,000 Nvidia H100 GPUs.

A small group of rival developers have been testing an early version of Grok 3, and most say they're impressed, with some caveats. OpenAI cofounder Andrej Karpathy posted on X that Grok 3 exhibited sharp reasoning skills and was able to resolve some complex problems. He estimates that the model is on par with OpenAI's o1-Pro reasoning model and slightly better than DeepSeek-R1 and Google's Gemini 2.0 Flash Thinking. However, he did find that Grok 3 choked on some prompts commonly known to give large transformer models trouble, such as determining how many Ls are in "Lollapalooza." Scale AI CEO Alexandr Wang posted on X that Grok 3 is a state-of-the-art model and gave it props for achieving the top spot on the Chatbot Arena benchmark. AI skeptic Gary Marcus, who also posted on X, said that while Grok 3 shows real progress, it doesn't represent a significant leap beyond existing models. More benchmark test scores will surface in the coming days and weeks to provide a fuller picture of how Grok 3 stacks up against the competition.

Impressive as that is, the development of new thinking models is now moving so fast that Grok 3 could disappear back into the pack of benchmark performers three months from now. AI labs are only now learning how to scale up the computing power that thinking models use after being presented with a problem. Upcoming models from OpenAI, Anthropic, Google, DeepSeek, and others will show the fruits of that research.

Brookings: The AI revolution's winners and losers won't be who or where you'd think

A new Brookings Institution analysis of AI's effects on jobs suggests highly educated urban workers will be most at risk of losing theirs. The last industrial revolution mainly affected lower-wage manufacturing and service jobs in smaller towns and rural areas. This time around, it'll be knowledge workers in tech hubs and financial centers who face the greatest exposure to AI-driven change. In San Jose's Santa Clara County, Brookings found that nearly 43% of workers could see half or more of their tasks transformed by AI tools including OpenAI's ChatGPT and Anthropic's Claude. Meanwhile, workers in less tech-oriented regions like Las Vegas would see less than a third of their tasks altered by AI tools. This pattern holds true across the nation, with major disparities even within states: California's exposure rates range from 42.8% in tech-heavy Santa Clara County to just 26.7% in rural Mono County. It's not so surprising when one looks at the technology itself, according to Brookings's report. Factory-floor machinery was meant to replace repetitive physical tasks, while generative AI specializes in cognitive work: writing, analysis, coding, and other knowledge-based tasks.
The more education and higher wages a job requires, the more likely it is to be touched by AI capabilities. Brookings, a D.C.-based policy think tank, says lawmakers should be thinking about ways of protecting the jobs of urban knowledge workers, and reskilling them, while ensuring that rural areas aren't left behind in accessing AI's productivity benefits. The geography of technological disruption has been rewritten, the think tank says, and the implications for workforce development and economic inequality are only just beginning to emerge.

Ex-OpenAI Mira Murati unveils AI startup, but its focus remains vague

Former OpenAI CTO Mira Murati has unveiled her new AI company, Thinking Machines Lab. While the product the startup intends to build remains unclear, Murati apparently intends to build AI in a very different way than her former company: out in the open. The AI research community used to be a fairly chatty place, but the research breakthroughs that led to ChatGPT soon attracted a lot of money, and with big money comes more secrecy. So, while companies such as OpenAI and Anthropic closely guard their training methods, Murati said in a blog post that Thinking Machines will regularly publish its technical insights, research papers, and code. If DeepSeek is any guide (it open-sourced its models and published its research methods), this practice could intensify the race toward the industry's goal of creating artificial general intelligence (that is, AI that's generally smarter than humans).

Murati's blog post also expresses an intent to create models that can be more easily steered toward specific applications in specific subject areas. AI systems remain difficult for people to customize to their specific needs and values, Murati wrote. She said her company will build systems that are more widely understood, customizable, and generally capable. The Information reports that more than two-thirds of the researchers at Murati's company come from OpenAI, including OpenAI cofounder John Schulman and former head of safety Lilian Weng. The startup intends to build systems that assist humans, not replace them. "Instead of focusing solely on making fully autonomous AI systems, we are excited to build multimodal systems that work with people collaboratively," Murati wrote. "We see enormous potential for AI to help in every field of work."

Beyond that, little is known about what Thinking Machines Lab will build. Based on Murati's background and statements, it seems likely that the company will focus on very large foundation models that can be trained or adapted to many different specialized tasks. Meanwhile, Bloomberg reports that another OpenAI alum, cofounder Ilya Sutskever, is in talks to raise more than $1 billion in funding in a round that could value his AI startup, Safe Superintelligence, at more than $30 billion.

More AI coverage from Fast Company:
In the roughly 1,000 days between her drunken-driving crash in May 2022 and her death, South Korean mainstream news organizations published around 2,000 stories on film actor Kim Sae-ron. They illustrate how the local media often cover a celebrity's fall from grace.

Previously one of the brightest young stars in South Korean cinema, Kim was condemned and ridiculed for driving drunk; for talking about her financial struggles after losing roles; for taking a job at a coffee shop; for attempting a comeback in theater; for going out with friends instead of "showing remorse"; and for being seen smiling on set while shooting an indie movie. After the 24-year-old actor was found dead at her home Sunday, the headlines predictably swung to calling for changes to the way celebrities are treated in the public arena.

Kim's death, which police consider a suicide, adds to a growing list of high-profile celebrity deaths in the country, which some experts attribute to the enormous pressure celebrities face under the gaze of a relentlessly unforgiving media that seizes on every misstep.

EDITOR'S NOTE: In South Korea, callers can receive 24-hour counseling through the suicide prevention hotline 1577-0199, the "Life Line" service at 1588-9191, the "Hope Phone" at 129, and the "Youth Phone" at 1388.

Here's a look at the intense pressure faced by South Korean celebrities who fall from grace.

A sudden fall from grace

South Korea is notoriously harsh on its celebrities, particularly women. Kim rose to stardom as a child actor with the 2010 hit crime thriller The Man from Nowhere and garnered acclaim and popularity for her acting in movies and TV dramas for years. But that changed after May 18, 2022, when Kim crashed a vehicle into a tree and an electrical transformer while driving drunk in southern Seoul. She posted a handwritten apology on Instagram and reportedly compensated around 60 shops that lost power temporarily because of the crash, but that did little to defuse negative coverage, and she struggled to find acting work. When a Seoul court issued a 200 million won ($139,000) fine over the crash in April 2023, Kim expressed her fears about the media to reporters, saying many articles about her private life were untrue. "I'm too scared to say anything about them," she said.

Relentless negative coverage

In the wake of Kim's drunken-driving crash, celebrity gossip channels on YouTube began posting negative videos about her private life, suggesting without providing evidence that she was exaggerating her financial straits by working at coffee shops, and arguing that social media posts showing her socializing with friends meant she wasn't showing enough remorse. Other entertainers, especially women, have struggled to find work after run-ins with the law, including drunken driving or substance abuse, and experts say many of them are reluctant to seek treatment for mental health problems like depression, fearing further negative coverage. Kwon Young-chan, a comedian-turned-scholar who leads a group helping celebrities with mental health issues, said celebrities often feel helpless when the coverage turns negative after spending years carefully cultivating their public image.
Kwon, who stayed with Kim's relatives during a traditional three-day funeral process, said her family is considering legal action against a YouTube creator with hundreds of thousands of subscribers for what they describe as groundless attacks on Kim's private life. Peter Jongho Na, a professor of psychiatry at the Yale School of Medicine, lamented on Facebook that South Korean society had become a giant version of "Squid Game," the brutal Netflix survival drama, "abandoning people who make mistakes or fall behind, acting as though nothing happened."

Media blamed for celebrity deaths

The National Police Agency said officers found no signs of foul play at Kim's home and that she left no note. But a spate of high-profile deaths has sparked discussions about how news organizations cover the private lives of celebrities and whether floods of critical online comments are harming their mental health. Similar conversations happened after the 2008 death of mega movie star Choi Jin-sil; the death of her former baseball star husband, Cho Sung-min, in 2013; the deaths of K-Pop singers Sulli and Goo Hara in 2019; and the death of "Parasite" actor Lee Sun-kyun in 2023.

Sensational but unsubstantiated claims from social media are widely recycled and amplified by traditional media outlets as they compete for audience attention, said Hyun-jae Yu, a communications professor at Seoul's Sogang University. Struggling with a sharp decline in traditional media readership, he said, news outlets turn to covering YouTube drama as the easiest way to drive up traffic, often skipping the work of reporting and verifying facts. Following the 2019 deaths of Sulli and Goo Hara, which were widely attributed to cyberbullying and sexual harassment both in public and in the media, lawmakers proposed various measures to discourage harsh online comments. These included expanding real-name requirements and strengthening websites' obligations to weed out hate speech and false information, but none of these proposed laws passed.

Reforms remain elusive

South Korean management agencies are getting increasingly active in taking legal action to protect their entertainers from online bullying. Hybe, which manages several K-Pop groups including BTS, publishes regular updates about lawsuits it's filing against social media commentators it deems malicious. But Yu said it's crucial for mainstream media companies to strengthen self-regulation and limit their use of YouTube content as news sources. Government authorities could also compel YouTube and other social media platforms to take greater responsibility for content created by their users, he said, including actively removing problematic videos and preventing creators from monetizing them. The South Korean office of Google, YouTube's parent company, didn't immediately respond to a request for comment.

Heo Chanhaeng, an executive director at the Center for Media Responsibility and Human Rights, said news organizations and websites should consider shutting down the comments sections on entertainment stories entirely. "Her private life was indiscriminately reported beyond what was necessary," Heo said. "That's not a legitimate matter of public interest."

Kim Tong-Hyung, Associated Press
A TikTok trend claims giving your baby a tablespoon or two of butter before bed will help them sleep better at night.

"What if I told you my toddler was still waking up every 2 hours at almost 2 years old until I started giving her real grass fed butter before bed," reads one TikTok post by creator @bridgette_.gray. Since then, her child has experienced a week straight of sleeping almost 8 hours every night. "We will be trying double the amount next week and aiming for 12 hours a night!" she wrote in the caption.

Another TikTok user, @abbyexplainsitall, calls butter (importantly, not margarine) the best sleep hack for kids and says she lets hers eat as much as they want. The video currently has 279.8K views. In the caption she adds, "The fats help keep them satiated and that helps with sleeping! My kids sleep from 6:30pm – 6:30am and still take amazing naps throughout the day. We also use avocados. Healthy fats are great for brain development and cognitive function."

But experts are pumping the brakes on the trend. According to pediatric consultant Niamh Lynch, there is actually no scientific evidence that giving babies butter before bed makes them sleep longer. "Unfortunately butter is not going to make babies sleep better," she said in a video posted to Instagram. "It might upset their tummy. It might cause diarrhoea. It's a choking hazard obviously to give them a big chunk of butter. So, park the butter idea." Instead, she suggests a list of foods that do actually help with sleep, including kiwi, cherries, milk, fatty fish, nuts, and rice (although beware of allergies).

Giving babies any solid food before they are around 6 months old is also not recommended. From about 6 months old, babies can begin to be offered nutritious solid foods. Even then, butter is not the best option, as it is high in salt and saturated fat, which are not recommended in large amounts.

Butter is not the only sleep hack tried and tested by desperate parents. It was once thought that adding cereal to a bottle of milk before bedtime would help babies sleep through the night (research found this did not increase sleep in the slightest). More recently, the viral lime hack, where parents cut a lime in half, place it in a dish, and position it next to their child's bed for better sleep, has been doing the rounds online. The truth is, it is perfectly normal for babies to wake during the night. Not even a stick of Kerrygold or half a lime can come to parents' rescue.