Yann LeCun, Meta's outgoing chief AI scientist, says his employer tested its latest Llama model in a way that may have made the model look better than it really was. In a recent Financial Times interview, LeCun says Meta researchers "fudged a little bit" by using different versions of the Llama 4 Maverick and Llama 4 Scout models on different benchmarks to improve test results. Normally, researchers use a single version of a new model for all benchmarks, instead of choosing a variant that will score best on a given benchmark.

Prior to the launch of the Llama 4 models, Meta had begun to fall behind rivals Anthropic, OpenAI, and Google in pushing the envelope. The company was under pressure to reassert Llama's prowess, especially in an environment where stock prices can turn on the latest model benchmarks. After Meta released the Llama 4 models, third-party researchers and independent testers tried to verify the company's benchmark claims by running their own evaluations. But many found that their results didn't align with Meta's. Some doubted that the models it used in the benchmark testing were the same as the models released to the public. Ahmad Al-Dahle, Meta's vice president of generative AI, denied that charge and attributed the discrepancies in model performance to differences in the models' cloud implementations.

The benchmark-fudging, LeCun said, contributed to internal frustration about the progress of the Llama models and led to a loss of confidence among Meta leadership, including CEO Mark Zuckerberg. In June, Zuckerberg announced an overhaul of Meta's AI organization, which included the establishment of a division called Meta Superintelligence Labs (MSL). Meta also paid between $14.3 billion and $15 billion to buy 49% of AI training data company Scale AI, and tapped Scale's CEO, Alexandr Wang, to lead MSL.

On paper, at least, LeCun, who won the coveted Turing Award for his pioneering work on neural networks, reported to the 28-year-old Wang. LeCun told the FT's Melissa Heikkilä that while Wang is a quick learner and is aware of what he doesn't know, he's also young and inexperienced. "There's no experience with research or how you practice research, how you do it. Or what would be attractive or repulsive to a researcher," LeCun said. The division LeCun ran at Meta for a decade, FAIR (Fundamental Artificial Intelligence Research), was a pure research organization that picked its own areas of inquiry. An adjacent applied AI group worked closely with the lab to find ways to use the research in Meta's own products.

But the organizational changes weren't the only reason LeCun wanted to leave Meta. He has long expressed doubts that the current thrust of Meta's AI research, large language models, will lead to human-level intelligence, because such models can't learn quickly and continuously. LLMs can learn a certain amount about the world through words and images, but the models of the future will also have an understanding of the real world through physics. It's those world models that LeCun hopes to invent at his new company, Advanced Machine Intelligence. LeCun will act as executive chair, which will allow him to spend much of his time doing research. Alex LeBrun, CEO of French healthcare AI startup Nabla, will become CEO of AMI. "I'm a scientist, a visionary. . . . I can inspire people to work on interesting things," LeCun told Heikkilä. "I'm pretty good at guessing what type of technology will work or not."
McDonald's limited-time McRib sandwich is a cultural icon. And like any item of its ilk, it's divisive. On the one hand, the saucy, vaguely rib-esque boneless pork sandwich has a fan base so dedicated that it's inspired its own Reddit megathread, merch, and a website called the McRib Locator. But on the other, the McRib has long been critiqued for its off-putting form factor and dubious ingredients. Now, a new class action lawsuit is asking the question that's always plagued the sandwich: Is the McRib actually rib?

The lawsuit, which was filed on December 23, 2025, in the Northern District of Illinois, alleges that McDonald's has purposefully been misleading customers with the name and shape of the McRib. The four plaintiffs who jointly filed the suit claim that the sandwich is advertised to resemble a rack of pork ribs, which McDonald's does despite knowing that the sandwich in fact does not contain any meaningful quantity of actual pork rib meat; indeed, none at all.

Ultimately, this lawsuit is all about marketing, and how we define deceptive marketing practices. Does a rib-shaped "seasoned boneless pork patty," as McDonald's describes it, a rib make? Or is the McRib a mere imitation of a true rib sandwich, masquerading as the real thing to allow McDonald's to jack up its prices? While most Americans probably have their own knee-jerk reaction to these questions, the official answer will be left up to the court. For now, here are the facts.

[Images: United States Department of Justice]

"The name McRib is a deliberate sleight of hand"

The crux of the new lawsuit rests on proving whether the McRib can definitionally be called rib, and, as it turns out, that's easier said than done. According to the filing, McDonald's has cultivated a scarcity mindset around the McRib by only releasing it for a brief time each year since its 1981 debut, using annual anticipation to drive sales. Its authors suggest that the term "rib" refers to a more premium cut of meat, generating an expectation of quality that allows McDonald's to price the sandwich at up to $7.89 in some regions, making it among the most expensive single-item options on the menu. Further, they argue, McDonald's purposefully misleads customers by calling the sandwich a McRib and shaping it to resemble a rack of pork ribs. This mislabeling rests at the core of their claim that the McRib's status as a fleeting hero of McDonald's menus nationwide rests on an inherently deceptive premise.

"The name McRib is a deliberate sleight of hand," the suit reads. By including the word "Rib" in the name of the sandwich, it argues, McDonald's knowingly markets the sandwich in a way that deceives reasonable consumers, who reasonably (but mistakenly) believe that a product named the McRib will include at least some meaningful quantity of actual pork rib meat, which commands a premium price on the market. Instead, it adds, they're actually eating a "reconstructed meat product." Yikes.

[Source Images: United States Department of Justice]

To rib, or not to rib?

To understand the difference between a pork rib and a "reconstructed meat product," the filing dives into its definition of actual pork rib meat. It says pork rib meat refers either to spare ribs, a cut at the bottom of the rib cage, or baby back ribs, located at the top of the rib cage. Both cuts, it explains, are consistently priced higher than lower-quality cuts like loin or butt. Compare that definition to the McRib's contents, and things get a little dicey.
Per the filing, the McRib's meat patty is constructed using ground-up portions of lower-grade pork products, such as pork shoulder, heart, tripe, and scalded stomach.

In an email to Fast Company, McDonald's wrote that the lawsuit "distorts the facts" with "meritless claims," adding, "Our fan-favorite McRib sandwich is made with 100% pork sourced from farmers and suppliers across the U.S. There are no hearts, tripe or scalded stomach used in the McRib patty as falsely alleged in this lawsuit. We've always been transparent about our ingredients so guests can make the right choice for them."

Already, an army of McRib fans is rising to defend the sandwich's honor on Reddit. "Do people have nothing better to do or have no shame?" one commenter wrote. "Who really really thought the McRib was meat from ribs?" Another added, "Dumb. . . . Imagine all the 'there was a bone in my McRib' posts if it was actually ribs."

Whether or not you believed McDonald's nebulous meat slab was made of real ribs, it remains to be seen whether this case will impact the McRib's future. Regardless, it's a good day to be a vegetarian.
Michael Jordan is widely recognized as one of the best basketball players to ever live. In a recent interview, Jordan revealed one of the secrets to his success: his love of the game. Jordan says he loved the game so much that he made sure to have a special clause included in his contract when playing with the Chicago Bulls, one which he's positive players today don't have: the "love of the game" clause.

"If I was driving with you down the street, and I see a basketball game on the side of the road, I can go play in that basketball game," Jordan told NBC's Mike Tirico. "And if I get hurt, my contract is still guaranteed."

Jordan went on to explain that constant practice, not just doing drills but playing real games, helped him and other NBA players like Larry Bird master their craft. It was playing in games that helped players develop their love of basketball, and helped them remain passionate about the game rather than just viewing it as a job. "I love the game so much. I would never let someone take the opportunity for me to play the game away from me," Jordan said.

Jordan's love of the game clause teaches us an important secret to finding career success, namely: to truly become the best at what you do, you have to love it. This secret is related to emotional intelligence, the ability to understand and manage emotions to reach a goal. How can you leverage emotional intelligence to master what you do? Let's explore. (If you enjoy this article, consider signing up for my free emotional intelligence course.)

Leveraging your "love of the game"

To clarify, Jordan wasn't speaking about becoming the best basketball player ever. Although countless fans and analysts alike have pegged Jordan as the GOAT (greatest of all time), Jordan typically steers away from that conversation, saying that title disrespects the basketball legends who've come before him and the players who play today. Rather, Jordan was primarily interested in reaching his full potential, and his love of the game fueled that drive. "Basketball was that type of love for me," Jordan said. "I had to find a way to make sure I was the best basketball player I could be."

Jordan's success led to his becoming the wealthiest professional athlete in history. Most of his earnings didn't come from his playing contracts, though. Rather, they resulted from multiple business ventures and branding deals, most notably the Jordan brand with Nike. But Jordan says that for him, the brand never affected what he was going to do on the basketball court. "I put the work first, and then the brand evolved based on the work," Jordan said. "We would play this game for free. We did. And now we just happen to get paid for it."

So, how can you apply this to your own work? There are several reasons business owners run the businesses they do. You may have taken over a family business. Maybe you dabbled in the world of self-employment and discovered you enjoyed the freedom it offered. Other entrepreneurs become so out of necessity: Mark Cuban started his first business after getting fired. But regardless of how you got into the business you now run, the secret to mastering your craft is to develop a love for what you do.

Ask yourself: What aspects of my work do I really love? The things I'd do for free? How can I practice those things as much as possible? How can I further leverage that love to master my craft?
As you answer those questions, and as you put in the work, you'll find yourself constantly improving, continually growing, and consistently becoming a better (work) version of yourself. Because if there's one thing Michael Jordan taught us, it's that natural ability, talent, and skill will get you far, but love is what makes you the best.

Justin Bariso

This article originally appeared on Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.
The founder of Slack once deemed email "the cockroach of the internet." He wasn't the first to lament the extreme survivability of our inbox. From text messages to social media to office messaging platforms, all sorts of communication technologies have teased the promise of killing email by connecting us to others in faster, richer ways. And yet, more than 50 years after its invention, ye olde email is more popular than ever. Some 1 billion people spend three hours a day in email, adding up to more than a trillion hours collectively per year, according to the email app Superhuman. And there's no sign of this slowing down. "More people use Gmail every single month than ever before," says Blake Barnes, head of Gmail product, who oversees the experience of more than 2.5 billion users on the world's most popular email platform.

To some, email is an endless guilt machine: The average person receives dozens of messages each day but takes action on fewer than five, according to Yahoo. And the range of emails we receive is wild to comprehend: personal notes. Newsletters. Amazon package updates. Dinner reservations. Jira tickets. LinkedIn invites. Passwords we've sent to ourselves. Strange conspiracy theory chain letters forwarded along by a second cousin once removed. Email has become the junk drawer for our digital lives. A catchall for intimate and automated messages, our inboxes contain too much information for most people to process. "Your last 100 emails are more unique than your fingerprint," says Anant Vijay, product lead behind the encrypted-email platform Proton Mail. "Even if you're using another app to do something, there's an imprint left in your email."

And therein lies the opportunity. Not only is email refusing to go away, it's becoming more important than ever in our new, data-hungry world. And startups and incumbent tech companies alike are vying to control it. A slew of email apps have launched in recent years, including Notion Mail, from emerging productivity giant Notion, and the organization-minded Shortwave, each with a different set of handy UX features for juggling your inbox. At the same time, giants like Yahoo and Google are racing to maintain their dominance. But nowhere is the value of email more evident than in writing-assistance titan Grammarly's acquisition of email startup Superhuman for an undisclosed amount over the summer. (Superhuman was last valued at $825 million, in 2021, according to PitchBook.) In October, Grammarly rebranded itself as Superhuman. Ultimately, these companies aren't so much betting that email will be the future of communication but that its treasure trove of data contains all the information needed to create the personalized AI systems of tomorrow. By owning email, they plan to claim your whole life.

The 'overwhelming' inbox

The promise behind most email platforms is sanity. The average person faces 400 unread emails at any given moment. And given that the subject and first few lines of any email tend to be generic, it can be hard for people to extract insights at a glance. "It's just overwhelming," says Kyle Miller, GM at Yahoo Mail, the world's second-largest email platform, with hundreds of millions of users. "Some users don't see [inbox zero] as a goal, and that's okay. What we're trying to do is help them get out of this clutter so then they don't miss the stuff that's important." To help users tackle the mess, Yahoo recently started gamifying the task with a daily Inbox Challenge that gives users trophies for triaging their messages.
Other email platforms are supercharging the auto-sort function. Eleven-year-old Proton, which relaunched its security-focused email app earlier this year, not only compiles your newsletters into one stack, it also displays your average open rate for each, so you can decide if it's time to unsubscribe. Notion Mail, launched in April, distinguishes itself by letting you sort email by any content criteria you can think of. For instance, you can ask Notion to label incoming job applications as "Job Candidates" or have your home renovation emails sorted to "Home Improvements." Superhuman offers similar features, along with an auto-reply service that drafts responses tailored to recipients and in your own voice and tone. All you need to do is hit send.

Modern AI makes these advanced features possible. When Superhuman founder Rahul Vohra was fundraising 11 years ago, an investor asked him how he planned to realize the magical interactions he'd teased in his investment deck. "Frankly, I don't know," Vohra said at the time, though he, like many, trusted that the technologies would eventually arrive. Today, Superhuman says that its users reply to 72% more emails per hour after signing up, thanks largely to a combination of auto-sorting and auto-writing tools. "[Email has] always had all this data, but up until large language models, there was no way for the computer to access that information," says Andrew Lee, who launched the email client Shortwave in 2022 to pick up where Google left off when it folded its short-lived Inbox app (which bundled messages into categories and allowed you to snooze messages). "You can go through and read [your emails], but it's a huge amount. We have people with 10 million emails in the system. And now the computer can just go and read 10 million emails!"

Email apps are going beyond mere sorting to use LLMs to extract data and surface insights that allow users to make faster decisions. In the case of Yahoo Mail, that means emails now have action buttons placed right below the subject line. "Those actions might be copying a security code or RSVPing for a birthday party or paying a bill, so you don't even have to open the email," Miller says. Superhuman and Shortwave, meanwhile, let you manage the deluge by querying your mail directly. You can ask the AI straightforward questions ("Where is the Q1 off-site?" or "What time is my flight to Denver?"), and these services will analyze your email for the answers, much like Perplexity will hunt for information across the internet.

Proton Mail, which encrypts email to offer a higher level of security, is the rare exception: The company sees cloud-based LLMs as an inherent security risk. But product team lead Anant Vijay believes that within a few years, high-quality AI models will be able to run on your phone or computer, allowing them to analyze your emails securely. A growing number of email users, however, seem willing to hand over their most precious data in the name of unlocking new efficiencies. To set up a new Shortwave account, for example, you first have to copy over your inbox for analysis on the company's servers. Shortwave, which has enterprise plans for teams of 50 or more, explains the security risk to prospective clients. "I have calls with people at investment firms and Fortune 500 companies. I see the concern on their faces. And then they're like, 'Nah, but I want it!'" says Lee. "There's a lot of pressure in these companies for security, but there's even more pressure to figure out a [corporate] AI strategy."
Agentic for email

While some of these email services can be used for free, all of them reserve their best features for people willing to pay for a subscription, up to $40 per month for a Superhuman business account. But those initial dollars aren't the endgame. Modern email apps are positioning themselves at the top of the funnel to pull you in, and to offer agentic services that go well beyond managing your correspondence.

Yahoo's first salvo will be connecting your inbox more directly to your calendar. The company is working on a product that could take information out of your email and offer it back to you as a list, and then pin the items to suggested dates on your calendar. Yahoo plans to further build this out, so its AI agent will eventually handle many of these to-dos for you. Google is thinking along similar lines. "In the future, you can imagine a world where [your] calendar understands you deeply," says Google VP Barnes. "It knows when you're eating dinner with your family. It knows when it's best to meet a new prospective client, when you're most fresh." Vohra from Superhuman envisions a future where an AI agent is cc'd on emails, allowing it to take over tasks, like scheduling a meeting. "Our two AI agents can find time and book meetings for us despite neither of us actually having access to each other's calendar," he says.

Indeed, AI is rapidly breaking email out of its inbox. Shortwave recently launched a spin-off platform, called Tasklet, that lets users program background agents that connect their email and calendar to more than 3,000 services via APIs. For heavy email users, these agents hold a lot of promise. Real estate agents could use plain language to program a daily search of new homes for a prospective client. Meanwhile, product developers could use agents to track updates from disparate apps and correlate them into a dashboard that tracks bug reports and patches. As for Gmail, Barnes says that not only will it get the power of the AI Overviews we've seen in Google Search, but Google Search will get the knowledge of your email to personalize its results: "What if Gemini could help you plan a vacation with all of the context Gmail has? Imagine that experience. We know what kind of places you like to go to. We know the budget you usually spend. We know how many people you're traveling with." Eventually, this could evolve into more than a shopping assistant. "It's like having your own personal chief of staff," he says.

In a world ruled by AI, most tech strategists believe we'll no longer be managing our lives by juggling individual apps or even platforms like Slack or Teams. All of this information and communication could sit largely out of sight, most of the time, while an AI with the most intimate and complete portrait of your life helps to make decisions on your behalf. That's as exciting for a big data player like Google as it is for a newer startup like Superhuman, because the first challenge is being adept at wrestling that treasured email junk drawer into shape. "We actually feel really great about this," says Vohra. "Primarily, because we have a massive head start."
When ChatGPT burst onto the scene, much of academia reacted not with curiosity but with fear. Not fear of what artificial intelligence might enable students to learn, but fear of losing control over how learning has traditionally been policed. Almost immediately, professors declared generative AI poison, warned that it would destroy critical thinking, and demanded outright bans across campuses, a reaction widely documented by Inside Higher Ed. Others rushed to revive oral exams and handwritten assessments, as if rewinding the clock might make the problem disappear. This was never really about pedagogy. It was about authority.

The integrity narrative masks a control problem

The response has been so chaotic that researchers have already documented the resulting mess: contradictory policies, vague guidelines, and enforcement mechanisms that even faculty struggle to understand, as outlined in a widely cited paper on institutional responses to ChatGPT. Universities talk endlessly about academic integrity while quietly admitting they have no shared definition of what integrity means in an AI-augmented world. Meanwhile, everything that actually matters for learning, from motivation to autonomy, pacing, and the ability to try or fail without public humiliation, barely enters the conversation. Instead of asking how AI could improve education, institutions have obsessed over how to preserve surveillance.

The evidence points in the opposite direction

And yet the evidence points in a very different direction. Intelligent tutoring systems are already capable of adapting content, generating contextualized practice, and providing immediate feedback in ways that large classrooms simply cannot, as summarized in recent educational research. That disconnect reveals something uncomfortable. AI doesn't threaten the essence of education; it threatens the bureaucracy built around it. Students themselves are not rejecting AI: Surveys consistently show they view responsible AI use as a core professional skill and want guidance, not punishment, for using it well. The disconnect is glaring: Learners are moving forward, while academic institutions are digging in.

What an 'all-in' approach actually looks like

For more than 35 years, I've been teaching at IE University, an institution that has consistently taken the opposite stance. Long before generative AI entered the public conversation, IE was experimenting with online education, hybrid models, and technology-enhanced learning. When ChatGPT arrived, the university didn't panic. Instead, it published a very clear Institutional Statement on Artificial Intelligence framing AI as a historic technological shift, comparable to the steam engine or the internet, and committing to integrating it ethically and intentionally across teaching, learning, and assessment. That all-in position wasn't about novelty or branding. It was grounded in a simple idea: technology should adapt to the learner, not the other way around. AI should amplify human teaching, not replace it. Students should be able to learn at their own pace, receive feedback without constant judgment, and experiment without fear. Data should belong to the learner, not the institution. And educators should spend less time policing outputs and more time doing what only humans can do: guide, inspire, contextualize, and exercise judgment. IE's decision to integrate OpenAI tools across its academic ecosystem reflects that philosophy in practice.
Uniformity was never rigor

This approach stands in sharp contrast to universities that treat AI primarily as a cheating problem. Those institutions are defending a model built on uniformity, anxiety, memorization, and evaluation rather than understanding. AI exposes the limits of that model precisely because it makes a better one possible: adaptive, student-centered learning at scale, an idea supported by decades of educational research. But embracing that possibility is hard. It requires letting go of the comforting fiction that teaching the same content to everyone, at the same time, judged by the same exams, is the pinnacle of rigor. AI reveals that this system was never about learning efficiency; it was about administrative convenience. It's not rigor . . . it's rigor mortis.

Alpha Schools and the illusion of disruption

There are, of course, experiments that claim to point toward the future. Alpha Schools, a small network of AI-first private schools in the U.S., has drawn attention for radically restructuring the school day around AI tutors. Their pitch is appealing: Students complete core academics in a few hours with AI support, freeing the rest of the day for projects, collaboration, and social development. But Alpha Schools also illustrate how easy it is to get AI in education wrong: What they deploy today is not a sophisticated learning ecosystem but a thin layer of AI-driven content delivery optimized for speed and test performance. The AI model, simplistic and weak, prioritizes acceleration over comprehension, efficiency over depth. Students may move faster through standardized material, but they do so along rigid, predefined paths with simplistic feedback loops. The result feels less like augmented learning and more like automation masquerading as innovation.

When AI becomes a conveyor belt

This is the core risk facing AI in education: mistaking personalization for optimization, autonomy for isolation, and innovation for automation. When AI is treated as a conveyor belt rather than a companion, it reproduces the same structural flaws as traditional systems, just faster and cheaper. The limitation here isn't technological; it's conceptual. Real AI-driven education is not about replacing teachers with chatbots or compressing curricula into shorter time slots. It's about creating environments where students can plan, manage, and reflect on complex learning processes; where effort and consistency become visible; where mistakes are safe; and where feedback is constant but respectful. AI should support experimentation, not enforce compliance.

The real threat is not AI

This is why the backlash against AI in universities is so misguided. By focusing on prohibition, institutions miss the opportunity to redefine learning around human growth rather than institutional control. They cling to exams because exams are easy to administer, not because they are effective. They fear AI because it makes obvious what students have long known: that much of higher education measures outputs while neglecting understanding. The universities that will thrive are not the ones banning tools or resurrecting 19th-century assessment rituals. They will be the ones that treat AI as core educational infrastructure: something to be shaped, governed, and improved, not feared. They will recognize that the goal is not to automate teaching, but to reduce educational inequality, expand access to knowledge, and free time and attention for the deeply human aspects of learning.
AI does not threaten education; it threatens the systems that forgot who education is for. If universities continue responding defensively, it won't be because AI displaced them. It will be because, when faced with the first technology capable of enabling genuinely student-centered learning at scale, they chose to protect their rituals instead of their students.