
2026-01-28 10:00:00| Fast Company

When one of the founders of modern AI walks away from one of the world's most powerful tech companies to start something new, the industry should pay attention. Yann LeCun's departure from Meta after more than a decade shaping its AI research is not just another leadership change. It highlights a deep intellectual rift about the future of artificial intelligence: whether we should continue scaling large language models (LLMs) or pursue systems that understand the world, not merely echo it.

Who Yann LeCun is, and why it matters

LeCun is a French American computer scientist widely acknowledged as one of the "Godfathers of AI." Alongside Geoffrey Hinton and Yoshua Bengio, he received the Association for Computing Machinery's 2018 A.M. Turing Award for foundational work in deep learning. He joined Meta (then Facebook) in 2013 to build its AI research organization, eventually known as FAIR (Facebook AI Research, later Fundamental AI Research), a lab that produced foundational tools such as PyTorch and contributed to early versions of Llama. Over the years, LeCun became a global figure in AI research, frequently arguing that current generative models, powerful as they are, do not constitute true intelligence.

What led him to leave Meta

LeCun's decision to depart, confirmed in late 2025, was shaped by both strategic and philosophical differences with Meta's evolving AI focus. In 2025, Meta reorganized its AI efforts under Meta Superintelligence Labs, a division emphasizing rapid product development and aggressive scaling of generative systems. This reorganization consolidated research, product, infrastructure, and LLM initiatives under leadership distinct from LeCun's traditional domain. Within this new structure, LeCun reported not to a pure research leader but to a product- and commercialization-oriented chain of command, a sign of shifting priorities.
But more important than that, there's a deep philosophical divergence. LeCun has been increasingly vocal that LLMs, the backbone of generative AI, including Meta's Llama models, are limited. They predict text patterns, but they do not reason or understand the physical world in a meaningful way. Contemporary LLMs excel at surface-level mimicry but lack robust causal reasoning, planning, and grounding in sensory experience. As he has said and written, LeCun believes LLMs are useful, but they are not a path to human-level intelligence. This tension was compounded by strategic reorganizations inside Meta, including workforce changes, budget reallocations, and a cultural shift toward short-term product cycles at the expense of long-term exploratory research.

The big idea behind his new company

LeCun's new venture is centered on alternative AI architectures that prioritize grounded understanding over language mimicry. While details remain scarce, some elements have emerged. The company will develop AI systems capable of real-world perception and reasoning, not merely text prediction. It will focus on world models: AI that understands environments through vision, causal interaction, and simulation rather than only statistical patterns in text. LeCun has suggested the goal is systems that understand the physical world, have persistent memory, can reason, and can plan complex actions. In LeCun's own framing, this is not a minor variation on today's AI: It's a fundamentally different learning paradigm that could unlock genuine machine reasoning. Although the company's founders and other insiders have not released official fundraising figures, multiple reports indicate that LeCun is in early talks with investors and that the venture is attracting attention precisely because of his reputation and vision.

Why this matters for the future of AI

LeCun's break with Meta points to a larger debate unfolding across the AI industry.
LLMs versus world models: LLMs have dominated public attention and corporate strategy because they are powerful, commercially viable, and increasingly useful. But there is growing recognition, echoed by researchers like LeCun, that understanding, planning, and physical reasoning will require architectures that go beyond text.

Commercial urgency versus foundational science: Big Tech companies are understandably focused on shipping products and capturing market share. But foundational research, the kind that may not pay off for years, requires a different timeline and incentive structure. LeCun's exit underscores how those timelines can diverge.

A new wave of AI innovation: If LeCun's new company succeeds in advancing world models at scale, it could reshape the AI landscape. We may see AI systems that not only generate text but also predict outcomes, make decisions in complex environments, and reason about cause and effect. This would have profound implications across industries, from robotics and autonomous systems to scientific research, climate modeling, and strategic decision-making.

What it means for Meta and the industry

Meta's AI strategy increasingly looks short-term, shallow, and opportunistic, shaped less by a coherent research vision than by Mark Zuckerberg's highly personalistic leadership style. Just as the metaverse pivot burned tens of billions of dollars chasing a narrative before the technology or market was ready, Meta's current AI push prioritizes speed, positioning, and headlines over deep, patient inquiry. In contrast, organizations like OpenAI, Google DeepMind, and Anthropic, whatever their flaws, remain anchored in long-horizon research agendas that treat foundational understanding as a prerequisite for durable advantage. Meta's approach reflects a familiar pattern: abrupt strategic swings driven by executive conviction rather than epistemic rigor, where ambition substitutes for insight and scale is mistaken for progress.
Yann LeCun's departure is less an anomaly than a predictable consequence of that model. But it is also a reminder that the AI field is not monolithic. Different visions of intelligence, whether generative language, embodied reasoning, or something in between, are competing for dominance. Corporations chasing short-term gains will always have a place in the ecosystem. But visionary research, the kind that might enable true understanding, may increasingly find its home in independent ventures, academic partnerships, and hybrid collaborations.

A turning point in AI

LeCun's decision to leave Meta and pursue his own vision is more than a career move. It is a signal that the current generative AI paradigm, brilliant though it is, will not be the final word in artificial intelligence. For leaders in business and technology, the question is no longer whether AI will transform industries; it's how it will evolve next. LeCun's new line of research is not unique: Other companies are pursuing the same idea. And this idea might not just shape the future of AI research; it could define it.


Category: E-Commerce

 

2026-01-28 09:30:00| Fast Company

Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. With its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning. But as schools seek to navigate the age of generative AI, there's a challenge: Schools are operating in a policy vacuum. While a number of states offer guidance on AI, only a couple of states require local schools to form specific policies, even as teachers, students, and school leaders continue to use generative AI in countless new ways. As a policymaker noted in a survey, "You have policy and what's actually happening in the classrooms; those are two very different things." As part of my lab's research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the association's members reflects how education policy is typically formed through dynamic interactions across national, state, and local levels, rather than being dictated by a single source. But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology's spread, including student safety, data privacy, and negative impacts on student learning. They also expressed concerns over industry influence, and worries that schools will later be charged by technology providers for large language model-based tools that are currently free. Others report that administrators in their state are very concerned about deepfakes: "What happens when a student deepfakes my voice and sends it out to cancel school or report a bomb threat?" At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.
Local actions dominate

Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are providing guidance or tool kits, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans. When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. "We are a local control state, so some school districts have banned [generative AI]," wrote one respondent. "Our [state] department of education has an AI tool kit, but policies are all local," wrote another. One shared that their state has a basic requirement that districts adopt a local policy about AI. Like other education policies, generative AI adoption occurs within existing state education governance structures, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role. Yet there is generally a lack of evidence on how AI will affect learners and teachers, and it will take years for that evidence to become clearer. That lag adds to the challenges of formulating policies.

States as a lighthouse

However, state policy can provide vital guidance by prioritizing ethics, equity, and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as what counts as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and to construct guardrails. As it stands, AI usage and policy development are uneven, depending on how well resourced a school is.
Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools were about half as likely to report that AI guidance was provided. The poorest schools are also less likely to use AI tools. When asked about foundational generative AI policies in education, policymakers focused on privacy, safety, and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators. And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, "What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids' first teachers."

Introducing new technology

According to a Feb. 24, 2025, Gallup poll, 60% of teachers report using some AI for their work in a range of ways. Our survey also found there is "shadow use" of AI, as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval. Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing, as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority. In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students' writing. "Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids. This is why I became a teacher: to spend time with the kids."
This teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools. One initiative from the Netherlands offers a different approach than finding ways to implement products developed by technology companies. Instead, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.

Core principles

One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI: when and how to leverage these tools effectively, how to critically evaluate their output, and how to ethically disclose their use. Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context, or what my colleagues and I called "dilemma analysis" in a recent report, is an approach schools, districts, and states can take to navigate the myriad ethical and societal impacts of generative AI. Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district, and state to engage their communities and families to co-create a path forward. As one policymaker put it: "Knowing the horse has already left the barn [and that AI use] is already prevalent among students and faculty . . . [on] AI-human collaboration versus an outright ban, where on the spectrum do you want to be?" Janice Mak is an assistant director and clinical assistant professor at Arizona State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.



 

2026-01-28 09:00:00| Fast Company

"Snow Will Fall Too Fast for Plows," "ICE STORM APOCALYPSE," and "Another Big Storm May Be Coming . . ." were all headlines posted on YouTube this past weekend as the biggest snowstorm in years hit New York City. These videos, each with tens or hundreds of thousands of views, are part of an increasingly popular genre of "weather influencers," as Americans increasingly turn to social media for news and weather updates. People pay more attention to influencers on YouTube, Instagram, and TikTok than to journalists or mainstream media, a study by the Reuters Institute and the University of Oxford found in 2024. In the U.S., social media is how 20% of adults get their news or weather updates, according to the Pew Research Center. It's no surprise, then, that a number of online weather accounts have cropped up to cover the increasing number of extreme weather events in the U.S. While some of these influencers have no science background, many of the most popular ones are accredited meteorologists. One of the most viewed digital meteorologists, or weather influencers, is Ryan Hall, who calls himself "The Internet's Weather Man" on his social media platforms. His YouTube channel, Ryan Hall, Y'all, has more than 3 million subscribers. Max Velocity is another. He's a degreed meteorologist, according to his YouTube bio, with 1.66 million followers. Reed Timmer, an extreme meteorologist and storm chaser, posts to 1.46 million subscribers on YouTube. "While most prefer to avoid the bad news that comes with bad weather, I charge towards it," Timmer writes in the description section on his channel. The rising popularity of weather influencers stems not just from mistrust in mainstream media, which is lingering at an all-time low, but also from an appetite for real-time updates delivered in an engaging way to the social-first generation. YouTube accounts like Hall's will often livestream during extreme weather events, with comments sections hosting a flurry of activity.
There's even merch. Of course, influencers are not required to uphold the same reporting standards as network weathercasters. There's also the incentive, in terms of likes and engagement, to sensationalize events with clickbait titles and exaggerated claims, or sometimes even misinformation, as witnessed during the L.A. wildfires last year. Still, as meteorologists navigate the new media landscape, the American Meteorological Society now offers a certification program in digital meteorology for meteorologists who meet established criteria for scientific competence and effective communication skills in their weather presentations across all forms of digital media. While we wait to see whether another winter storm will hit the Northeast this weekend, rest assured: The weather influencers will be tracking the latest updates.



 

2026-01-28 07:00:00| Fast Company

When people complain about a lack of work-life balance, they're typically feeling that they are spending too much time working. They may be spending a lot of combined time at the office and commuting, or just putting in a lot of hours both at work and at home. Fixing that problem can't be done abstractly, though. If you're going to address the balance of work and life activities, you have to start getting specific about where your time is going and where you really want it to go. Think about how you're spending your time. At work, you're spending time in meetings, writing documents, engaging with clients, or doing particular technical tasks like coding. Similarly, your non-work life consists of other activities like going to the gym, spending time with family, going to concerts, or reading a novel for pleasure. Start by taking a look at where your time is going right now. If you keep a good work calendar, then flip through a few weeks and track the hours you're spending on different tasks. If you don't have a good record of the time you're spending at work, then start logging the time spent on different work tasks. How much of the time you're spending on work tasks is really necessary? Are there discretionary activities that you could replace with something else (potentially a non-work something else)? Are you wasting time shifting among tasks or doing other things inefficiently? Perhaps more importantly, you also need to think more clearly about what activities should go in your life bin. What are the activities or hobbies you wish you had more time for? Who are the people you want to spend more time with? You spend time on specific work tasks because those end up on your calendar. You have to define life specifically enough that it ends up on your calendar as well. Then, create a calendar that includes both work events and life events.
Don't just log your meetings, tasks, and commute time, but also time for working with your kids on their homework, going on a date with your partner, hanging out with friends, going to the gym, or reading a book. It may seem like micromanaging your life to start scheduling these personal events, but if you don't start doing things differently, the balance of the way you spend your time is not going to change. This approach also helps you recognize when your work responsibilities have become overwhelming. If you truly don't have the time to do any of your life activities, then your job may be asking too much of you. Sit down with your supervisor or a mentor and talk through what you're currently doing at work. Ask for help prioritizing tasks so that you have more opportunities to do other things that are important to you. Your supervisor might even change some of your responsibilities to make the load more manageable in a reasonable amount of time. Ultimately, by scheduling the time for these life activities (and actually doing them), you are shifting your habits to include more regular life activities. You won't necessarily have to create a specific calendar for your life forever. As you start engaging in more non-work activities, that will shift the nature of your daily and weekly routine in ways that are likely to become self-sustaining.



 

2026-01-28 07:00:00| Fast Company

You know the ancient proverb: Give a man a fish, and you feed him for a day; teach a man to fish, and you feed him for a lifetime. For leaders, first-generation AI tools are like giving employees fish. Agentic AI, on the other hand, teaches them how to fish: It is truly empowering, and that empowerment lifts the entire organization. According to recent findings from McKinsey, nearly eight in ten companies report using gen AI, yet about the same number report no bottom-line impact. Agentic AI can help organizations achieve meaningful results. AI agents are highly capable assistants with the ability to execute tasks independently. Equipped with artificial intelligence that simulates human reasoning, they can recognize problems, remember past interactions, and proactively take steps to get things done, whether that means knocking out tedious manual tasks or helping to generate innovative solutions. For CEOs juggling numerous responsibilities, agentic AI can be a powerful ally in simplifying decision-making and scaling impact. That's why I believe it belongs on every CEO's roadmap for 2026. As CEO of a SaaS company grounded in automation, I've made it a priority to incorporate agentic AI into our everyday workflows. Here are three ways you can put it to work in your organization.

1. Take the effort out of scheduling

Start with one of the most basic functions of any organization, and one that can easily become a time and energy vacuum: Scheduling is perfect fodder for AI agents. And they go well beyond your typical AI-powered scheduling tool. For starters, they're adaptable. AI agents can monitor incoming data and requests, proactively adjust schedules, and notify the relevant parties when issues arise. Let's say your team has a standing brainstorming session every Wednesday and a new client reaches out to request an intro meeting at the same time. Your agent can automatically respond with alternative time slots.
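The conflict-handling behavior described above is simple to sketch. This is a minimal illustration, not any particular product's API; the function name, the single-day calendar of busy hours, and the 9-to-5 working window are all hypothetical assumptions for the example.

```python
def handle_request(busy_hours, requested_hour, day_hours=range(9, 17)):
    """Triage a meeting request against existing commitments.

    busy_hours: set of already-committed hours (24h clock) for one day.
    Returns ("book", hour) if the slot is free, otherwise
    ("propose", [open hours]) to send back as alternatives.
    """
    if requested_hour not in busy_hours:
        return ("book", requested_hour)
    # Conflict: collect the still-open slots to offer instead
    free = [h for h in day_hours if h not in busy_hours]
    return ("propose", free)


# A 10 a.m. request against a calendar with 10 a.m. and 1 p.m. booked
# yields a "propose" response listing the remaining open hours.
print(handle_request({10, 13}, 10))
```

A real agent would work against a live calendar API and layer on rules such as protected focus time; the point is only that the triage decision itself is a small, mechanical step an agent can run on every incoming request.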
On the other hand, if a client needs to connect on a time-sensitive issue, your agent can elevate the request to a human employee to decide whether rescheduling makes sense. You can also personalize AI agents based on your unique needs and priorities, including past interactions. If, for example, your agent learns that you religiously protect time for deep-focus work first thing in the morning, it won't keep proposing meetings then. By delegating scheduling tasks, organizations, from the CEO to interns, free up time for higher-level priorities and more meaningful work. You can build your own agent, or get started with a ready-to-use scheduling assistant that offers agentic capabilities, like Reclaim.ai.

2. Facilitate idea generation and innovation

When we talk about AI and creativity, the conversation often stirs anxiety about artificial intelligence replacing human creativity. But agentic AI can help spark ideas for engagement, leadership development, and strategic initiatives. The goal is to cultivate the conditions in which these initiatives can thrive, not to replace the actual brainstorming or strategic thinking. For example, you can create an ideation-focused AI agent and train it on relevant organizational context: performance data, KPIs, meeting notes, employee engagement data, culture touch points, and more. Your agent can continuously gather new information and update its internal knowledge. When the time comes for a brainstorming or strategy session (which the agent can also proactively prompt), it can draw on this working organizational memory plus any other resources it can access, and tap generative AI tools like ChatGPT or Gemini to generate themes, propose topics, and help guide the discussion. Meanwhile, leaders remain focused on evaluating ideas, decision-making, and execution.

3. Error-free progress updates and year-end recaps

While generative AI can be incredibly powerful, the issue remains that it is largely reactive, not proactive.
When it comes to tracking performance, team KPIs, and organizational progress, manual check-ins are still required. As I've written before, manual tasks are subject to human error. Calendar alerts go unnoticed. Things slip through the cracks. Minor problems become big issues. One solution is to design an AI agent that can autonomously monitor your organization's performance. Continuous, real-time oversight helps ensure processes run smoothly and that issues are flagged as soon as they arise. For example, if your company sells workout gear and sees a post-New Year surge in fitness resolutions, and demand for a specific product, an agent can track sales patterns and alert the team to inventory shortages. An AI agent can also independently generate reports, including year-end recaps that are critical for continued growth. Rather than waiting to be prompted by a human, agents can do the work alone and elevate only the issues that require human judgment. Agents have the potential to create real value for organizations. Importantly, leaders have to rethink workflows so AI agents are meaningfully integrated, fully liberating employees from rote, manual tasks and freeing them to focus on more consequential, inspiring work like strategy and critical thinking. I've found this leaves employees more energized, and the benefits continue to compound.
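The inventory-monitoring example above boils down to a small check an agent can run continuously: project days of remaining stock from the current sales rate and escalate only the items that will run out soon. This is a hedged sketch under assumed names (the function, the SKUs, and the 7-day horizon are all hypothetical, not drawn from any real system).

```python
from dataclasses import dataclass


@dataclass
class Alert:
    """An escalation the agent raises for human judgment."""
    sku: str
    days_of_stock: float


def check_inventory(inventory, daily_sales, horizon_days=7.0):
    """Flag SKUs whose stock runs out within the horizon at the current sales rate.

    inventory:   {sku: units on hand}
    daily_sales: {sku: units sold per day (recent average)}
    Returns a list of Alert objects for the items that need attention.
    """
    alerts = []
    for sku, units in inventory.items():
        rate = daily_sales.get(sku, 0.0)
        if rate <= 0:
            continue  # no current demand, nothing to escalate
        days_left = units / rate
        if days_left < horizon_days:
            alerts.append(Alert(sku, round(days_left, 1)))
    return alerts


# A post-New Year surge in yoga-mat sales (10/day against 40 in stock)
# trips the alert; the slower-moving item does not.
print(check_inventory({"yoga-mat": 40, "dumbbell": 500},
                      {"yoga-mat": 10.0, "dumbbell": 5.0}))
```

In a deployed agent, this loop would pull from real sales and warehouse data on a schedule and route alerts to the team; the design choice worth noting is that everything below the threshold stays silent, so humans see only the exceptions.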



 
