
2026-02-22 17:00:00 | Fast Company

American statesman and polymath Ben Franklin's legacy includes inspirational quotes on frugality, honesty, and hard work. He's less frequently thought of as an icon of successful aging. But as doctor and author Ezekiel Emanuel recently pointed out on Big Think, "At a time when the average age at death was under 40, he lived to 84, fully mentally competent all the way to the end." That makes the founding father a worthy source of advice on aging well.

What's the biggest lesson we can learn from him? Unsurprisingly, given he lived at a time when dentures were made out of wood and surgery was done without anesthesia, Franklin can't teach us anything about the latest aging breakthroughs. But he can remind us of a fundamental truth that's thoroughly backed up by modern science, yet still frequently forgotten: Staying useful is as important to aging well as any fancy new drug, fitness routine, or diet plan.

Ben Franklin's secret to healthy aging

Ben Franklin was 70 when he signed the Declaration of Independence, and he churned out inventions into his eighties. (Those include bifocals, which he invented to solve his own failing eyesight.) That might leave you with the impression that he was a work-until-you-drop kind of guy. But Emanuel points out that's not actually how Franklin understood his own life.

"Franklin invented retirement for working-class people," Emanuel insists. "He made enough as a printer that he could retire at 42, and he said, 'I'm going to live a life of leisure.'"

That means everything that followed the end of Franklin's career as a printer, including much of his work helping to found the University of Pennsylvania and the United States, was technically a retirement hobby. His golden years didn't look anything like the golf, pickleball, or Caribbean cruises many of us dream about today. But that, Emanuel stresses, is the central wellness lesson we can take from Franklin's long and exceptionally productive life.

"Leisure, for Franklin, didn't mean going to the Jersey Shore. It meant that he didn't have to worry about business and making money. He could focus on doing good, and for him, doing good was science and social improvement activities," Emanuel says. "Not contributing to society is not good for the soul. You have to be useful. You have to try to make the world a better place. That's key to wellness, too."

What modern psychology says about purpose and aging

About 275 years ago, when Franklin stepped away from his first, moneymaking career, he understood that the key to aging well was to find purposeful ways to use his newfound leisure time. That's a simple enough insight. But research suggests that even today a great many of us fail to remember it.

Research out of INSEAD, the European business school, shows that many successful entrepreneurs struggle after exiting their businesses with big paydays. "It is perfectly normal to discover that life post-financial freedom isn't as happy as one might have expected it to be," the researchers noted. "The most common reason for these problems is a sense of aimlessness and boredom."

Studies of retired Japanese salarymen and personal commentary from many who have pursued the popular Financial Independence, Retire Early (FIRE) movement point in the same direction. Many of us dream of wide-open days after leaving the world of work. But when confronted with the reality of long stretches of unstructured time, unless people have explicit plans to stay useful, they tend to spiral. And not just emotionally.

Neuroscience research has found that a sense of purpose helps delay dementia. Its absence, on the other hand, can speed cognitive decline. Meanwhile, an absolute mountain of studies testifies that one of the best ways to look after your own wellness is to find ways to help others.

A Google founder and the Governator agree

It can be tempting to think of retirement in terms of numbers. If you have enough saved, your later years will be comfortable and stress-free, and therefore healthy and happy, too. But even billionaires seem to flail in retirement unless they, like Ben Franklin, figure out how to continue to contribute to society.

Sergey Brin is worth a cool $200 billion or so. He unretired and went back to work at Google because, he says, "I was just kind of stewing and . . . not being sharp." Bill Gates is another guy with no financial constraints, but he, too, has written about how post-work life presents a lot of time to fill and that people need a reason to get out of bed in the morning. On the other hand, action star turned Governator Arnold Schwarzenegger credits his peace of mind at the age of 78 to a simple life motto: "Stay busy. Be useful." That's basically Ben Franklin's whole approach to aging well boiled down to four snappy words.

Healthy aging wisdom that's stood the test of time

So if you're in the market for some good advice on how to stay mentally and physically healthy for as long as possible, you could look to wellness influencers and tech bros chasing immortality. But all their dubious routines probably won't buy you nearly as many healthy years as Ben Franklin's straightforward 275-year-old wisdom. If you want to age well, stay useful.

By Jessica Stillman, Contributor, Inc.com

This article originally appeared on Fast Company's sister website, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.



 

2026-02-22 12:01:00 | Fast Company

The fleeting nature of the Olympic Winter Games makes them all the more alluring. The scarcity is almost sacred. Competitors work their whole lifetimes for one shot at glory that takes place over a period of just a few weeks. To celebrate every athletic achievement at the XXV Olympic Winter Games, the closing ceremony will take place Sunday, February 22. Here's everything you need to know, including how to tune in.

Where will the Milano Cortina Olympic Closing Ceremony take place?

Just like William Shakespeare intended, it's in fair Verona where we lay our scene. The Milano Cortina closing ceremony will be held at the Verona Arena, which many historians believe predates the Colosseum. Unlike the opening ceremony, which took place in multiple venues, this is the sole location. Verona lies about halfway between Milan and Cortina d'Ampezzo, the two cities where the majority of the competitions took place.

What is the theme of the Winter Olympics Closing Ceremony?

The theme of the closing ceremony is "Beauty in Action." While exact details of the two-and-a-half-hour event are always kept under wraps for the element of surprise, it is known that the event will celebrate the host country, Italy. It will also convey climate change's impact on the games and the future challenges this brings. Elements such as music, dance, film, design, and technology will all be utilized to tell these stories and celebrate the games.

Who is performing at the Winter Olympics Closing Ceremony?

The first performer announced was ballet star Roberto Bolle. He is a principal dancer at La Scala Theatre Ballet and frequently performs as a guest artist around the world. Joining him is singer-songwriter Achille Lauro. He made a name for himself in the hip-hop world but also excels in other genres of music, like pop and rock. Actress Benedetta Porcaroli will also take part in the closing ceremony. She is best known for her work as Chiara in the Netflix series Baby. Additionally, her film credits include Immaculate, The Leopard, and The Kidnapping of Arabella. DJ Gabry Ponte is planning on dropping some sick beats. He gained prominence as a member of the group Eiffel 65. He even has his own record label, Dance and Love.

Who is hosting the 2030 Winter Olympics?

Another important part of the closing ceremony is handing over the flag to the next host. The 2030 Winter Olympics will take place in France. The French Alps are already planning for another spectacular competition that will be here before we know it.

How can I stream or watch the closing ceremony?

The ceremony takes place on Sunday, February 22. If you want to catch the action in real time, turn on NBC or the streaming service Peacock at 2:30 p.m. ET. If that time doesn't work with your schedule, there will be another chance to see the pageantry during prime time, beginning at 9 p.m. ET. You can watch NBC for free with an over-the-air antenna, or through a traditional cable subscription. Peacock is a paid subscription service, but if it's not part of your streaming arsenal, you can turn to a live-TV streaming service that carries NBC. YouTube TV, Hulu + Live TV, and fuboTV carry NBC in most areas. Just make sure to double-check before you sign up to account for regional differences.



 

2026-02-22 10:05:00 | Fast Company

AI is transforming how teams work. But it's not just the tools that matter. It's what happens to thinking when those tools do the heavy lifting, and whether managers notice before the gap widens.

Across industries, there's a common pattern. AI-supported work looks polished. The reports are clean. The analyses are structured. But when someone asks the team to defend a decision, not summarize one, the room goes quiet. The output is there, but the reasoning isn't owned.

For David, the COO of a midsize financial services firm, the problem surfaced during quarterly planning. Multiple teams presented the same compelling statistic about regulatory timelines, one that turned out to be wrong. It had come from an AI-generated summary that blended outdated guidance with a recent policy draft. No one had checked it. No one had questioned it. It simply sounded right. "We weren't lazy," David told us. "We just didn't have a process that asked us to look twice."

Through our work advising teams navigating AI adoption, Jenny as an executive coach and learning and development designer, and Noam as an AI strategist, we have seen a clear distinction: there are teams where AI flattens performance, and teams where it deepens it. The difference isn't whether AI is allowed. It's whether judgment is designed back into the work. The good news is that teams can adopt practices to shift from producing answers to owning decisions. This new way of thinking doesn't slow things down. It moves performance to where it actually matters, and it protects the judgment that no machine can replace in the process.

1. The Fact Audit: Question AI's Output

AI produces fluent language. That's exactly what makes it dangerous. When output sounds authoritative, people stop checking it. It's a pattern often called workslop: AI-generated output that looks polished but lacks the substance to hold up under scrutiny. In contrast, critical thinking strengthens when teams learn to treat AI as unverified input, not a final source.

David didn't punish the teams that got the statistic wrong. He redesigned the process. Before any strategic analysis could move forward, teams had to run a fact audit: identify AI-generated claims and validate each one against primary sources like regulatory filings, official announcements, or verified reports. The mandate wasn't about catching mistakes, but about building a reflex. Over six months, the quality of planning inputs improved significantly. Teams started flagging uncertainty on their own, before anyone asked.

The World Economic Forum's 2025 Future of Jobs Report reinforces this: in high-stakes decisions, AI should augment, not replace, human judgment. Embedding that principle into daily work isn't optional. It's a competitive advantage.

Pro tip: Start with three. Don't overhaul the whole process at once. Ask each team member to flag three AI-generated claims in their next deliverable and trace each one to a source. Keep it lightweight; the habit matters more than the volume.

2. The Fit Audit: Demand Context-Specific Thinking

AI defaults to best practices. That's by design. But generic advice rarely wins in a specific situation. The real test of critical thinking isn't whether an answer sounds smart, but whether it fits.

Rachel, a managing partner at a global consulting firm, noticed it immediately. Her teams were leaning on AI to draft client recommendations, and the output was consistently competent, but painfully interchangeable. "'Improve stakeholder communication.' 'Build organizational resilience,'" she told us. "It could have been written for anyone." It was written for no one.

She introduced a simple checkpoint. Before any recommendation could move forward, the team had to answer one question in writing: Why does this solution work here, and not at our last three clients? They had to map every suggestion explicitly to the client's constraints, the firm's methodology, and the real stakeholder landscape. The shift was immediate. Teams started discarding generic AI language and replacing it with reasoning that was theirs. Client presentations became sharper. Debates replaced consensus.

Gallup's 2025 workplace data supports why this matters at scale. While nearly a quarter of employees now use AI weekly to consolidate information and generate ideas, effective use requires strategic integration, not just access. Managers are the ones who set that standard.

Pro tip: Make it verbal. While written fit audits are good, ask a team member to explain their recommendation aloud, in a five-minute stand-up or a quick team check-in. Misalignment disappears fast when people cannot hide behind polished text.

3. The Asset Audit: Make Human Contributions Visible

Here's what most managers miss: even when employees are thinking critically, that thinking is invisible. If it's not surfaced, it doesn't get recognized, and it doesn't get developed.

Marcus, a VP of strategy at a technology company, started requiring a short decision log alongside every quarterly business review. Not a summary of what AI produced. A record of what the team decided to do with it. The questions were simple: What assumptions did you challenge? What did you revise? What did you reject, and why?

One regional manager used it to flag something the AI had missed entirely: the tension between short-term revenue targets and long-term customer retention. She rewrote the analysis framework to surface that trade-off. The review became a strategic conversation instead of a status update. "It changed what we looked for," Marcus said. "We stopped evaluating the output. We started evaluating the judgment."

McKinsey's research confirms the stakes: heavy users of AI report needing higher-level cognitive and decision-making skills more than technical ones. As AI handles routine work, the human contribution becomes the entire competitive edge. Making it visible isn't just good management. It's a strategy.

Pro tip: Keep the log short, at just three to five bullet points. What was the AI input? What did the team change? What was the final call, and why? The goal isn't documentation for its own sake: it's making thinking something the team can see, discuss, and learn from.

4. The Prompt Audit: Capture How the Team Thinks

Critical thinking deepens when people can trace their own reasoning: not just the final output, but the process that shaped it. Without that trail, every deliverable starts from scratch. With it, the team builds institutional knowledge.

Sarah, a partner at a professional services firm, started requiring a brief process outline before every client presentation. Not a recap of the finished product. A trail: which prompts were used, which sources were checked, where the framing shifted, and why. After each presentation, team members wrote a short individual reflection: Where did my thinking change during this process?

Over time, the artifacts became a shared learning resource. Teams could see which prompts produced shallow output, which revisions added real value, and how collaboration shaped the final judgment. "It turned experimentation into something reusable," Sarah told us. "Before, every project felt like starting over. Now, we build on what we have already figured out." The result wasn't just better deliverables. It was a team that got sharper and faster together.

Pro tip: Create a shared tracker. Keep it simple: a shared doc, a Notion page, or even a Slack channel. Log what prompt was used, what worked, what didn't, and what you would try next. No slides, no pressure. The goal is to normalize small bets and shared learning in real time.

Thinking Critically with AI

AI is only as powerful as the people who use it with intention. The best teams aren't winning because they have the fastest tools. They are winning because they have built habits that keep judgment in the loop. They question what sounds right. They demand context over consensus. They make their thinking visible, and they learn from it.

Managing critical thinking in the AI era doesn't require banning tools or lowering standards. It requires clarity about where thinking lives. Drawing that line, between what AI should handle and what must stay human, is one of the defining responsibilities of leadership right now. AI changes how work gets done. Management shapes how people think while doing it.



 

2026-02-22 09:30:00 | Fast Company

Corporate leaders today are stuck between a rock and a hard place. Nobody can see events playing out in the streets in Minnesota and elsewhere and not be moved in some way. At the same time, they have a fiduciary responsibility to act in the best interests of their stakeholders, regardless of their personal feelings.

I know this dilemma because I experienced it myself. In 2004, I was managing Ukraine's leading news organization during the Orange Revolution, the third in a series of nonviolent uprisings, known as the color revolutions, that overwhelmed autocrats in Serbia and then the Georgian Republic before arriving in Kyiv. As I explained in my book, Cascades, these things follow a specific pattern of contagion, adoption, and defection driven by networks. Eventually, the nonlinear nature of network cascades overwhelms regimes and compels institutions to act. Now that pattern is unfolding right here, and for corporate leaders it is no longer something you can afford to ignore.

1. Contagion: How Movements Learn, Adapt, and Spread

2004 was an election year in Ukraine, so politics was in the air. We all saw the campaigns get underway, with ads hitting the air and rallies being held. But from my vantage point inside a news operation, I also began to hear about a youth group called Pora that was organizing students and activists against the regime.

The true origins started even earlier, in a Belgrade café in 1998. It was there that a small group of five activists met and established the youth group Otpor. Their efforts got a boost from a little-known academic named Gene Sharp, who had developed nonviolent methods of overthrowing authoritarian regimes and established the Albert Einstein Institution to support activists around the world. The Otpor activists would lead the overthrow of Serbian strongman Slobodan Milošević. Shortly after, West Wing star Martin Sheen would narrate a hit documentary about the events, and activists from other Eastern European countries began reaching out to learn how the Serbians applied Sharp's methods. In 2003, President Eduard Shevardnadze was brought down in Georgia's Rose Revolution. In the spring of 2004, the Ukrainian Pora activists traveled to Serbia to receive training, laying the groundwork for the events I witnessed in the Orange Revolution.

We can see a similar process unfolding in Minnesota and beyond. When federal agents began to descend on the community, activist networks first established in the aftermath of the killing of George Floyd were activated. They began to organize to protect their communities from ICE and CBP patrols, learning and honing their methods as they went. Now, as other communities begin to prepare for ICE and CBP activity, activists around the country are watching and learning. Ordinary Americans are attending training, online and in person, that transmits what has been learned in Minnesota: how to organize, dispatch activists, and engage with federal officers on the ground.

2. Adoption: When Participation Becomes the Default

We are a product of our environments. Decades of studies indicate that we tend to conform to the opinions and behaviors of those around us, and this effect extends out to three degrees of relationships. So not only do our friends' friends influence us deeply, but their friends too, people we don't even know, affect what we think and do.

Yet the inverse is also true. The people around us are usually doing pretty ordinary things, like going to work, taking the kids to soccer practice, and cooking dinner. Most people who are not actively opposing agents of the state have little idea how to go about doing so. We are, for the most part, trapped in mundane, ordinary lives and resist changing our habits significantly. Yet that can change quickly. In a highly influential 1978 paper about resistance thresholds, sociologist Mark Granovetter showed how even small clusters of individuals with low barriers to adoption can influence those with greater resistance. Once these come on board, they begin to influence others as well. It is a pattern we see over and over again: small groups, loosely connected, but united by a shared purpose are what drive transformational change through network cascades. (A minimal code sketch of Granovetter's model appears below.)

We can see those same patterns unfolding in America today. Ordinary people, appalled by the actions of ICE and CBP patrols, have joined activists in opposing the raids. As they do, they tell their friends and neighbors, some of whom begin to join in. Their actions, in turn, influence others who are slightly more reticent and, as they join, momentum builds even more.

I experienced this directly during the Orange Revolution. In the spring of 2004, I was aware of the demonstrations, but not participating. As a foreigner, I wasn't sure it was my place. But then my wife's friends started going and invited my wife. Once she joined in, I began going too, and others came with me. The numbers became overwhelming and the regime fell.

3. Defection: When Silence Stops Being Safe

At this point, many readers will begin to notice a problem. Didn't other movements, such as #Occupy and Black Lives Matter, follow these very same patterns and fail to achieve their objectives? The answer, of course, is an unqualified yes. The presence of a network cascade is necessary, but not sufficient, to bring change about. For that, you need institutions.

Martin Luther King Jr. didn't just organize marches and boycotts. He used the power of mobilization to influence politicians like Lyndon Johnson. In much the same way, in Poland the Solidarity activists didn't just organize strikes. They actively engaged the Catholic Church. Early on during the color revolutions, activists learned that international institutions could be powerful allies and were able to successfully leverage that support.

This is, perhaps, the most striking vulnerability for the present administration. Early on, it targeted institutions, such as law firms and universities, but went about it in a very ham-handed way, and key targets successfully fought back. Others, such as Senators Thom Tillis and Bill Cassidy, have voiced opposition to ICE and CBP tactics. Chris Madel, a Republican candidate for Minnesota governor, ended his campaign in protest.

Yet corporate leaders, despite widely reported misgivings, have been largely sitting it out, even as former CEOs like Reid Hoffman, Bill George, and Robert Rubin have urged them to weigh in. Good corporate stewardship, however, requires more than just operating a business and managing a balance sheet. It requires being effective leaders of your corporate community.

Getting Ahead of What Comes Next

I remember attending a group dinner in Kyiv in late 2007 and sitting across from an executive from Sony Ericsson, who confidently told me that the iPhone launch earlier that year hadn't yet affected his company's sales. Yet the same pattern of contagion, adoption, and defection would soon kick in, and Sony Ericsson would lose relevance and ultimately be absorbed as the smartphone cascade reshaped the entire industry.
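Granovetter's threshold model, mentioned above, is concrete enough to capture in a few lines of code. What follows is a minimal sketch in Python (an illustration with hypothetical thresholds, not code from the 1978 paper or from Cascades): each person acts once the number of people already acting reaches their personal threshold, and the cascade either runs to completion or stalls.

# Minimal sketch of Granovetter's (1978) threshold model of collective
# behavior. Illustrative only; the thresholds below are hypothetical.

def run_cascade(thresholds):
    """Each person acts once the count of current actors meets their
    threshold; repeat until no one new joins, then return the total."""
    actors = 0
    while True:
        new_total = sum(1 for t in thresholds if t <= actors)
        if new_total == actors:
            return actors
        actors = new_total

# A crowd of 100 where person i joins after seeing i others act:
print(run_cascade(list(range(100))))           # 100: each adoption triggers the next

# The same crowd minus its single threshold-1 member:
print(run_cascade([0] + list(range(2, 101))))  # 1: only the instigator acts

The toy example makes the dynamic tangible: whether a cascade takes off depends less on average sentiment than on whether each new act of participation lowers the bar for the next person, which is why small, loosely connected groups can tip much larger populations.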
Once a cascade begins, it takes on a life of its own. Corporate leaders in America today face a similar dilemma. Their first responsibility is to their stakeholders, whatever their own personal feelings. Yet among those millions taking to the streets are employees, customers, shareholders, and their family members. Hoping you can stay on the fence is dangerously naive. It is only a matter of time before someone in your corporate community is affected by ICE and CBP violence: an arrest, getting roughed up, pepper-sprayed, or worse.

The time to act is now. If Renee Good or Alex Pretti were one of your people, or their children, what would you want to have in place for them and their families? What legal, medical, or psychological support are they and their coworkers going to need? You need to start preparing for that eventuality now.

In much the same way, you need to begin to audit your partners and suppliers. Make sure the people you do business with share your values and those of your stakeholders. If they are supporting or engaging in activities that could harm your corporate community, don't wait for an incident. Cut ties.

Most of all, you need to be explicit about your values and make sure you are living up to them. That doesn't mean taking a political position, but it does mean being clear about where you stand. As someone who has had to rise to the challenge of running a business during a revolution, I can tell you from experience that someday you will want to look back on these times, reflect on what you said and did, and be proud of it.



 

2026-02-22 09:00:00 | Fast Company

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it? These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag at-risk students, optimize course scheduling, or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature, and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we've been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences. As these technologies become better at producing knowledge work, designing classes, writing papers, suggesting experiments, and summarizing difficult texts, they don't just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life. First, AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising, and institutional risk assessment. These are considered nonautonomous systems because they automate tasks, but a person is in the loop, using these systems as tools.

These technologies can pose a risk to students' privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are risk scores generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed? These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards, and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools, and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified. Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners, and on-demand explainers. Faculty use them to generate rubrics, draft lectures, and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments, and generate code.

This is where the cheating conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you're interacting with a human and when you're interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell whether they are talking with their teaching assistant or with a robot. A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the means, or the technology, of learning. University of Pittsburgh researchers have shown that these dynamics bring forth feelings of uncertainty, anxiety, and distrust in students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages, or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility, not only for students but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that's not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft, and learning to spot one's own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a "researcher in a box," an agentic AI system that can perform studies on its own, is becoming increasingly realistic. Agentic tools are anticipated to free up time for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, but more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation, and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the routine responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions, and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is inefficient and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion, and reworking weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research, and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated? One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities, and communities are formed in the process. In this version, the university is meant to serve as nothing less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars, and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

Nir Eisikovits is a professor of philosophy and the director of the Applied Ethics Center at UMass Boston. Jacob Burley is a junior research fellow at the Applied Ethics Center at UMass Boston. This article is republished from The Conversation under a Creative Commons license. Read the original article.



 
