
2026-02-08 12:01:00| Fast Company

This weekend brings a showdown between the Seattle Seahawks and New England Patriots, some star-studded commercials, and a Bad Bunny concert. Whichever part of Super Bowl LX matters most to you, it all goes down on Sunday, February 8, at Levi’s Stadium in Santa Clara, California. Here’s a quick recap before kickoff.

How did the Seahawks and Patriots get to Super Bowl LX?

This isn’t the first time the Seahawks and Patriots have faced off in the championship game. In 2015, Seattle was defeated by the Patriots 28-24 after an eleventh-hour interception on the one-yard line. New quarterbacks Drake Maye and Sam Darnold may not have chips on their shoulders, but they are still determined to lead their respective teams to victory in this rematch. Speaking of new: according to CBS Sports Research, this is the first Super Bowl featuring two teams whose head coaches and quarterbacks are all in their first or second season with their respective teams.

The Patriots’ head coach, Mike Vrabel, led his team to a 14-3 season in the American Football Conference. The last obstacle to securing their Super Bowl LX spot was the Denver Broncos, whom they defeated 10-7 in blizzard-like conditions. This marks the Patriots’ 12th Super Bowl appearance and their first since former QB Tom Brady left the team in 2020. If New England winds up victorious, the franchise will break the record for most Super Bowl titles; the Patriots and the Pittsburgh Steelers are currently tied with six apiece.

Mike Macdonald, Seattle’s head coach, is doing everything in his power to prevent that history from being made. He led his team to a franchise-record 14-3 season in the National Football Conference, and Seattle defeated the Los Angeles Rams 31-27 to earn its spot in the big game. The Seahawks are on a nine-game winning streak and don’t want to see it end. The franchise has been to the Super Bowl three times before but has only won once, in 2014. Don’t let the lack of titles deceive you, though: The Seahawks are currently favored to win because of their strong defense and stellar scoring abilities.

Who is performing at Super Bowl LX?

If you are in it for the music, three diverse artists are ready to put on a show. Green Day will open the festivities and honor NFL MVPs. Continuing the pregame momentum, Charlie Puth will tackle the national anthem, Brandi Carlile will perform “America the Beautiful,” and Coco Jones will croon “Lift Every Voice and Sing.”

At halftime, Bad Bunny, known as Benito Antonio Martínez Ocasio on his birth certificate, is ready to rock the house. Hot off his three Grammy wins just one week prior, the “BAILE INoLVIDABLE” singer made sure to save his voice on music’s biggest night so he could sing out for football fans. (This was despite host Trevor Noah’s best efforts to get him to crack.) The Puerto Rico native says he couldn’t be prouder. “What I’m feeling goes beyond myself,” he said when his involvement was announced, according to People. “It’s for those who came before me and ran countless yards so I could come in and score a touchdown.”

What commercials have already dropped?

If you are in it for the commercials, several were teased or released ahead of the big game, which makes bathroom breaks so much easier. Among the early arrivals is a Grubhub spot, directed by Yorgos Lanthimos, featuring George Clooney in his first-ever Super Bowl ad. He is far from the only celebrity getting in on the action: Dunkin’ is utilizing the talents of four stars in its spot: Ben Affleck, Jennifer Aniston, Matt LeBlanc, and Jason Alexander. Budweiser went in a different direction, relying on nostalgia and its signature Clydesdale horses. This is just the tip of the iceberg of ads, which, according to the Financial Times, cost on average about $8 million for a 30-second spot.

How can I watch or stream the 2026 Super Bowl?

The party gets started on Sunday, February 8, at 6:30 p.m. ET / 3:30 p.m. PT. You can catch it on NBC, Telemundo, and Universo, which means you are covered if you have a traditional cable subscription or an over-the-air antenna. (As a reminder, watching NBC live with an OTA antenna is free.) NBCUniversal’s subscription-based streaming service, Peacock, will also stream the big game live. If you cut the cord, you can also use a live-TV streaming service such as Hulu + Live TV, Fubo, or YouTube TV; just be sure to double-check that it carries NBC in your area. NFL+ is also an option, but it only works on phones and tablets.


Category: E-Commerce

 

2026-02-08 10:00:00| Fast Company

With ever-shrinking attention spans, film students today are struggling to make it to the end of a feature-length movie without getting distracted by their phones. That’s according to a recent article by The Atlantic’s Rose Horowitch. In a snippet that has since circulated on X, gaining nearly 2 million views since it was posted last week, one of the film studies professors interviewed by Horowitch recalled asking his students about the ending of the 1962 François Truffaut film Jules and Jim. “The attention crisis is so dire at schools right now that film professors can’t even get their students to finish movies, and the kids don’t even look up the plots of the movies they skip, so students fail basic in-class quizzes like ‘what happened at the end of the movie?’” Derek Thompson (@DKThomp) wrote on January 30 when sharing the excerpt. “More than half of the class picked one of the wrong options, saying that characters hide from the Nazis (the film takes place during World War I) or get drunk with Ernest Hemingway (who does not appear in the movie),” the screenshot read. The film has a run time of 1 hour and 45 minutes.

Naturally, much hand-wringing ensued online. “I’m so confused. You kind of have to go out of your way to take a film studies course, right?” one X user asked. “Imagine not doing the homework, and the homework is watching a movie. That’s crazy,” a Reddit user wrote. Others called it a crisis of attention. “This bleeds into everything. Can’t pay attention to or finish a novel. Need cues to watch a movie because they are on second screens,” another X user wrote.

The rise of second-screening and the resulting genre of casual, background-friendly TV shows and movies has been well documented. Many, myself included, will admit to putting on a film only to scroll TikTok with one hand and place an online order on a laptop with the other. In a recent n+1 magazine article, Will Tavlin reports that screenwriters are now being told to have their protagonists announce what they’re doing so that viewers who have the program on in the background can follow along. Film studies professors interviewed by The Atlantic’s Horowitch say they have even resorted to assigning students only portions of films. One compares his students to nicotine addicts going through withdrawal: short-form content on social media platforms like TikTok, Instagram, and YouTube has rewired the brain to expect a dopamine hit every few seconds. “The closest thing we had to doomscrolling back then was channel surfing,” one Reddit user pointed out. “Could they play the movies on 2x with Minecraft footage?” another X user (@halogen1048576) suggested.

Long films aren’t the problem here. If anything, they might be the solution. “I’m actively trying to break my phone addiction, and a big part of that has been using movies as a guaranteed two hours a night off my phone,” one Reddit user admitted. It’s therapeutic, and I’d encourage anyone trying to clock less screen time to give it a try. Homework assignment: Sit and watch The Brutalist without once touching your phone, and see how difficult it can be.


Category: E-Commerce

 

2026-02-08 09:30:00| Fast Company

As Valentine’s Day approaches, finding the perfect words to express your feelings for that special someone can seem like a daunting task, so much so that you may feel tempted to ask ChatGPT for an assist. After all, within seconds it can dash off a well-written, romantic message. Even a short, personalized limerick or poem is no sweat. But before you copy and paste that AI-generated love note, you might want to consider how it could make you feel about yourself. We research the intersection of consumer behavior and technology, and we’ve been studying how people feel after using generative AI to write heartfelt messages. It turns out that there’s a psychological cost to using the technology as your personal ghostwriter.

The rise of the AI ghostwriter

Generative AI has transformed how many people communicate. From drafting work emails to composing social media posts, these tools have become everyday writing assistants. So it’s no wonder some people are turning to them for more personal matters, too. Wedding vows, birthday wishes, thank-you notes, and even Valentine’s Day messages are increasingly being outsourced to algorithms. The technology is certainly capable: chatbots can craft emotionally resonant responses that sound genuinely heartfelt. But there’s a catch. When you present these words as your own, something doesn’t sit right.

When convenience breeds guilt

We conducted five experiments with hundreds of participants, asking them to imagine using generative AI to write various emotional messages to loved ones. Across every scenario we tested, from appreciation emails to birthday cards to love letters, we found the same pattern: People felt guilty when they used generative AI to write these messages compared to when they wrote the messages themselves. When you copy an AI-generated message and sign your name to it, you’re essentially taking credit for words you didn’t write. This creates what we call a source-credit discrepancy: a gap between who actually created the message and who appears to have created it. You can see these discrepancies in other contexts, whether it’s celebrity social media posts written by public relations teams or political speeches composed by professional speechwriters. When you use AI, even though you might tell yourself you’re just being efficient, you can probably recognize, deep down, that you’re misleading the recipient about the personal effort and thought that went into the message.

The transparency test

To better understand this guilt, we compared AI-generated messages to other scenarios. When people bought greeting cards with preprinted messages, they felt no guilt at all. Greeting cards carry no deception: everyone understands that you selected the card and didn’t write it yourself. We also tested another scenario: having a friend secretly write the message for you. This produced just as much guilt as using generative AI. Whether the ghostwriter is a human or an artificial intelligence tool doesn’t matter; what matters most is the dishonesty. There were some boundaries, however. We found that guilt decreased when messages were never delivered and when recipients were mere acquaintances rather than close friends. These findings confirm that the guilt stems from violating expectations of honesty in relationships where emotional authenticity matters most. Relatedly, research has found that people react more negatively when they learn a company used AI instead of a human to write a message to them. The backlash was strongest when audiences expected personal effort: a boss expressing sympathy after a tragedy, or a note sent to all staff members celebrating a colleague’s recovery from a health scare. It was far weaker for purely factual or instructional notes, such as announcing routine personnel changes or providing basic business updates.

What this means for your Valentine’s Day

So, what should you do about that looming Valentine’s Day message? Our research suggests that the human hand behind a meaningful message can help both the writer and the recipient feel better. That doesn’t mean you can’t use generative AI at all; just treat it as a brainstorming partner rather than a ghostwriter. Let it help you overcome writer’s block or suggest ideas, but make the final message truly yours. Edit, personalize, and add details that only you would know. The key is co-creation, not complete delegation. Generative AI is a powerful tool, but it has also created a raft of ethical dilemmas, whether in the classroom or in romantic relationships. As these technologies become more integrated into everyday life, people will need to decide where to draw the line between helpful assistance and emotional outsourcing. This Valentine’s Day, your heart and your conscience might thank you for keeping your message genuinely your own.

Julian Givi is an assistant professor of marketing at West Virginia University. Colleen P. Kirk is an assistant professor of marketing at New York Institute of Technology. Danielle Hass is a Ph.D. candidate in marketing at West Virginia University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

2026-02-08 09:00:00| Fast Company

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.

Is the whole universe just a simulation? – Moumita B., age 13, Dhaka, Bangladesh

How do you know anything is real? Some things you can see directly, like your fingers. Other things, like your chin, you need a mirror or a camera to see. Still other things can’t be seen at all, but you believe in them because a parent or a teacher told you, or you read it in a book. As a physicist, I use sensitive scientific instruments and complicated math to try to figure out what’s real and what’s not. But none of these sources of information is entirely reliable: scientific measurements can be wrong, my calculations can have errors, and even your eyes can deceive you, like the dress that broke the internet because nobody could agree on what colors it was. Because every source of information, even your teachers, can trick you some of the time, some people have always wondered whether we can ever trust any information.

If you can’t trust anything, are you sure you’re awake?

Thousands of years ago, the Chinese philosopher Zhuangzi dreamed he was a butterfly and realized that he might actually be a butterfly dreaming he was a human. Plato wondered whether all we see could just be shadows of true objects. Maybe the world we live our whole lives inside isn’t the real one; maybe it’s more like a big video game, or the movie The Matrix.

The simulation hypothesis

The simulation hypothesis is a modern attempt to use logic and observations about technology to finally answer these questions and argue that we’re probably living in something like a giant video game. Twenty years ago, a philosopher named Nick Bostrom made such an argument, based on the fact that video games, virtual reality, and artificial intelligence were improving rapidly. That trend has continued, so that today people can jump into immersive virtual reality or talk to seemingly conscious artificial beings. Bostrom projected these technological trends into the future and imagined a world in which we’d be able to realistically simulate trillions of human beings. He also suggested that if someone could create a simulation of you that seemed just like you from the outside, it would feel just like you inside, with all of your thoughts and feelings.

Suppose that’s right. Suppose that sometime in, say, the 31st century, humanity will be able to simulate whatever it wants. Some people will probably be fans of the 21st century and will run many different simulations of our world so that they can learn about us, or just be amused. Here’s Bostrom’s shocking logical argument: If 21st-century planet Earth only ever existed one time, but it will eventually get simulated trillions of times, and if the simulations are so good that the people in them feel just like real people, then you’re probably living in one of the trillions of simulations of Earth, not on the one original Earth. This argument would be even more convincing if you could actually run powerful simulations today; but as long as you believe that people will run those simulations someday, then you logically should believe that you’re probably living in one today.

Scientist Neil deGrasse Tyson explains the simulation hypothesis and why he thinks the odds are about 50-50 that we’re part of a virtual reality.

Signs we’re living in a simulation . . . or not

If we are living in a simulation, does that explain anything? Maybe the simulation has glitches, and that’s why your phone wasn’t where you were sure you left it, or how you knew something was going to happen before it did, or why that dress on the internet looked so weird. There are more fundamental ways in which our world resembles a simulation. There is a particular length, much smaller than an atom, beyond which physicists’ theories about the universe break down. And we can’t see anything more than about 50 billion light-years away, because light from farther off hasn’t had time to reach us since the Big Bang. That sounds suspiciously like a computer game where you can’t see anything smaller than a pixel or anything beyond the edge of the screen. Of course, there are other explanations for all of that. Let’s face it: you might have misremembered where you put your phone.

But Bostrom’s argument doesn’t require any scientific proof. It holds logically as long as you really believe that many powerful simulations will exist in the future. That’s why famous scientists like Neil deGrasse Tyson and tech titans like Elon Musk have been convinced by it, though Tyson now puts the odds at 50-50. Others of us are more skeptical. The technology required to run such large and realistic simulations is so powerful that Bostrom describes the simulators as godlike, and he admits that humanity may never get that good at simulations. Even though the question is far from resolved, the simulation hypothesis is an impressive logical and philosophical argument that has challenged our fundamental notions of reality and captured the imaginations of millions.

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age, and the city where you live. And since curiosity has no age limit, adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Zeb Rocklin is an associate professor of physics at Georgia Institute of Technology. This article is republished from The Conversation under a Creative Commons license. Read the original article.
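Bostrom’s argument is, at bottom, a counting exercise: if simulated copies of a world vastly outnumber the one original, then a randomly chosen observer is almost certainly in a copy. A minimal sketch of that arithmetic (the trillion-simulations figure is the article’s illustrative number, not a measured quantity):

```python
# Bostrom-style counting argument: the chance that a randomly chosen
# observer is simulated, given one original world plus n indistinguishable
# simulated copies of it.

def p_simulated(num_simulations: int) -> float:
    """Probability of being in a simulation under the counting argument."""
    total_worlds = num_simulations + 1  # the copies plus the one original
    return num_simulations / total_worlds

print(p_simulated(0))       # no simulations ever run -> 0.0
print(p_simulated(1))       # one copy, one original -> 0.5
print(p_simulated(10**12))  # a trillion copies -> just under 1.0
```

The sketch also shows why the argument hinges entirely on its premise: with zero future simulations the probability collapses to zero, which is why the conclusion depends on believing those simulations will someday be run.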


Category: E-Commerce

 

2026-02-08 07:00:00| Fast Company

A few months ago, I walked into the office of one of our customers, a publicly traded vertical software company with tens of thousands of small business customers. I expected to meet a traditional support team: rows of agents on the phones, sitting at computers triaging tickets. Instead, it looked more like a control room. Specialists were monitoring dashboards, tuning AI behavior, debugging API failures, and iterating on knowledge workflows. One team member who had started their career handling customer questions over chat and email (resetting passwords, explaining features, troubleshooting one-off issues, and escalating bugs) was now writing Python scripts to automate routing. Another was building quality-scoring models for the company’s AI agent.

This seemed markedly different from the hyperbole I’d been hearing about customer support roles going away, in large part due to AI. What I was seeing across our customer base looked more like a shift in how support work is defined. So I decided to take a closer look. I analyzed 21 customer support job postings across AI-native companies, high-growth startups, and enterprise SaaS. These jobs run the gamut from technical support for complex software products to more transactional, commercial support involving billing and other common issues. What I found was that customer support is being rebuilt around AI-native workflows and systems-level thinking. Responding to individual tickets still matters, but these roles center on designing and operating the technical systems that resolve customer issues at scale. The result is a new kind of support role, one that’s part operator, part technologist, part strategist.

AI Skills Are Now Table Stakes

For most of the last two decades, support hiring optimized for communication skills and product familiarity. That baseline is now gone. Across the 21 job postings I analyzed, nearly three-quarters explicitly required experience with AI tools, automation platforms, or conversational AI systems. These roles are about configuring, monitoring, and improving AI systems over time: reviewing conversation logs, auditing AI behavior, and identifying failure modes. In other words, AI literacy has become the baseline for modern support work. If you don’t understand how AI systems behave, you can’t support the customers relying on them.

More than half of the roles I analyzed required candidates to debug APIs, analyze logs, write SQL queries, or script automations in Python or Bash. Many expected familiarity with cloud infrastructure, observability tools, or version control systems like Git. That would have been unthinkable in support job descriptions even five years ago. But it makes sense: when AI systems fail, they fail at scale. Diagnosing those failures requires technical fluency, such as understanding how models interact with external systems and recognizing when an issue is rooted in configuration versus product logic. The job has evolved from fixing problems ticket by ticket to preventing the next thousand tickets.

Humans Are Needed to Solve Harder Problems

Once AI becomes part of the support workflow, the nature of the work becomes more technical. One support leader I spoke with, at a company that now contains more than 80% of its tickets with AI, put it plainly: once automation handles the easy questions, the work left behind gets harder. The same frontline agents who used to focus on quick wins are now handling the most frustrated customers and the edge cases, and they’ve had to scale up their skills accordingly.

In practice, this often looks like a customer trying to complete a critical workflow, such as syncing data between systems before running billing. An AI agent starts by working off documentation that a subject matter expert has synthesized from multiple functions across the company. From there, the AI agent can confirm that everything is configured correctly. However, the AI agent may not be integrated with the underlying system that failed silently hours earlier. The customer follows the guidance, only to discover downstream that data didn’t move as expected. When the issue escalates, the subject matter expert has to reconstruct what happened across systems, reason through what the AI agent missed, and help the customer recover without losing trust. This is the kind of end-to-end work that AI still can’t do on its own. It requires both the technical fluency to trace failures across disparate systems and the human judgment to decide what can be fixed immediately versus what needs deeper product or engineering intervention. In this way, support has become less about answering questions out of the manual and more about creating the manual, and solving the problems it doesn’t cover.

The Hybrid Human-AI Model Is the Default

Despite widespread fear about AI replacing support jobs, not a single posting I analyzed suggested that support would be 100% automated in the future. Instead, nearly every role gravitated toward a hybrid model in which AI handles routine interactions while humans oversee quality and continuously improve the system. This makes sense when you consider that 95% of customer support leaders surveyed by Gartner last year said they would retain human agents in their operations to help define AI’s role. Titles like AI Support Specialist, AI Quality Analyst, and Support Operations Specialist were almost entirely focused on orchestration: designing escalation logic and defining when humans step in. This is where the earlier control-room image becomes reality. The work of humans changes from simply answering questions to actually shaping systems.

Taken together, these trends point to a single conclusion: customer support is specializing. The repetitive work is going away, but the judgment-heavy, technical work is expanding. That shift is already visible in how companies hire. The question now is whether organizations, and workers, are ready to adapt fast enough.
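The “escalation logic” these postings describe can be pictured as a small routing layer in front of the AI agent: rules decide which conversations the AI keeps and which go to a human. A hypothetical sketch in Python (the ticket fields, queue names, and thresholds are invented for illustration, not drawn from any real support platform):

```python
# Hypothetical AI-to-human escalation routing, illustrating the hybrid
# support model. All field names and thresholds here are invented.
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    ai_confidence: float     # AI agent's self-reported confidence, 0..1
    sentiment: float         # customer sentiment, -1 (angry) .. 1 (happy)
    is_billing_critical: bool

def route(ticket: Ticket) -> str:
    """Return the queue that should handle this ticket."""
    # Critical billing workflows always go to a human specialist.
    if ticket.is_billing_critical:
        return "human:billing-specialist"
    # Frustrated customers escalate regardless of AI confidence.
    if ticket.sentiment < -0.5:
        return "human:frontline"
    # Routine, high-confidence questions stay with the AI agent.
    if ticket.ai_confidence >= 0.8:
        return "ai:auto-resolve"
    # Everything else: the AI drafts a reply for human review.
    return "human:review-ai-draft"

print(route(Ticket("password reset", 0.95, 0.2, False)))    # ai:auto-resolve
print(route(Ticket("data sync failed", 0.90, -0.8, False))) # human:frontline
```

The design point is the one the postings make: the AI path is just one branch, and the judgment-heavy work lies in defining, and continually tuning, the rules that send a conversation to a human.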


Category: E-Commerce

 
