
2026-02-08 09:30:00 | Fast Company

As Valentine's Day approaches, finding the perfect words to express your feelings for that special someone can seem like a daunting task, so much so that you may feel tempted to ask ChatGPT for an assist. After all, within seconds, it can dash off a well-written, romantic message. Even a short, personalized limerick or poem is no sweat. But before you copy and paste that AI-generated love note, you might want to consider how it could make you feel about yourself.

We research the intersection of consumer behavior and technology, and we've been studying how people feel after using generative AI to write heartfelt messages. It turns out that there's a psychological cost to using the technology as your personal ghostwriter.

The rise of the AI ghostwriter

Generative AI has transformed how many people communicate. From drafting work emails to composing social media posts, these tools have become everyday writing assistants. So it's no wonder some people are turning to them for more personal matters, too. Wedding vows, birthday wishes, thank-you notes, and even Valentine's Day messages are increasingly being outsourced to algorithms.

The technology is certainly capable. Chatbots can craft emotionally resonant responses that sound genuinely heartfelt. But there's a catch: When you present these words as your own, something doesn't sit right.

When convenience breeds guilt

We conducted five experiments with hundreds of participants, asking them to imagine using generative AI to write various emotional messages to loved ones. Across every scenario we tested, from appreciation emails to birthday cards to love letters, we found the same pattern: People felt guilty when they used generative AI to write these messages compared to when they wrote the messages themselves.

When you copy an AI-generated message and sign your name to it, you're essentially taking credit for words you didn't write. This creates what we call a source-credit discrepancy, which is a gap between who actually created the message and who appears to have created it. You can see these discrepancies in other contexts, whether it's celebrity social media posts written by public relations teams or political speeches composed by professional speechwriters. When you use AI, even though you might tell yourself you're just being efficient, you can probably recognize, deep down, that you're misleading the recipient about the personal effort and thought that went into the message.

The transparency test

To better understand this guilt, we compared AI-generated messages to other scenarios. When people bought greeting cards with preprinted messages, they felt no guilt at all. This is because greeting cards are transparently not written by you. Greeting cards carry no deception: Everyone understands you selected the card and that you didn't write it yourself.

We also tested another scenario: having a friend secretly write the message for you. This produced just as much guilt as using generative AI. Whether the ghostwriter is human or an artificial intelligence tool doesn't matter. What matters most is the dishonesty.

There were some boundaries, however. We found that guilt decreased when messages were never delivered and when recipients were mere acquaintances rather than close friends. These findings confirm that the guilt stems from violating expectations of honesty in relationships where emotional authenticity matters most.
Somewhat relatedly, research has found that people react more negatively when they learn a company used AI instead of a human to write a message to them. But the backlash was strongest when audiences expected personal effort: a boss expressing sympathy after a tragedy, or a note sent to all staff members celebrating a colleague's recovery from a health scare. It was far weaker for purely factual or instructional notes, such as announcing routine personnel changes or providing basic business updates.

What this means for your Valentine's Day

So, what should you do about that looming Valentine's Day message? Our research suggests that the human hand behind a meaningful message can help both the writer and the recipient feel better.

This doesn't mean you can't use generative AI at all; think of it as a brainstorming partner rather than a ghostwriter. Let it help you overcome writer's block or suggest ideas, but make the final message truly yours. Edit, personalize, and add details that only you would know. The key is co-creation, not complete delegation.

Generative AI is a powerful tool, but it's also created a raft of ethical dilemmas, whether it's in the classroom or in romantic relationships. As these technologies become more integrated into everyday life, people will need to decide where to draw the line between helpful assistance and emotional outsourcing. This Valentine's Day, your heart and your conscience might thank you for keeping your message genuinely your own.

Julian Givi is an assistant professor of marketing at West Virginia University. Colleen P. Kirk is an assistant professor of marketing at New York Institute of Technology. Danielle Hass is a Ph.D. candidate in marketing at West Virginia University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

LATEST NEWS

2026-02-08 09:00:00 | Fast Company

Curious Kids is a series for children of all ages. If you have a question you'd like an expert to answer, send it to CuriousKidsUS@theconversation.com.

Is the whole universe just a simulation? Moumita B., age 13, Dhaka, Bangladesh

How do you know anything is real? Some things you can see directly, like your fingers. Other things, like your chin, you need a mirror or a camera to see. Other things can't be seen, but you believe in them because a parent or a teacher told you, or you read it in a book.

As a physicist, I use sensitive scientific instruments and complicated math to try to figure out what's real and what's not. But none of these sources of information is entirely reliable: Scientific measurements can be wrong, my calculations can have errors, and even your eyes can deceive you, like the dress that broke the internet because nobody could agree on what colors it was. Because every source of information, even your teachers, can trick you some of the time, some people have always wondered whether we can ever trust any information. If you can't trust anything, are you sure you're awake?

Thousands of years ago, Chinese philosopher Zhuangzi dreamed he was a butterfly and realized that he might actually be a butterfly dreaming he was a human. Plato wondered whether all we see could just be shadows of true objects. Maybe the world we live our whole lives inside isn't the real one; maybe it's more like a big video game, or the movie The Matrix.

The simulation hypothesis

The simulation hypothesis is a modern attempt to use logic and observations about technology to finally answer these questions and prove that we're probably living in something like a giant video game. Twenty years ago, a philosopher named Nick Bostrom made such an argument, based on the fact that video games, virtual reality, and artificial intelligence were improving rapidly. That trend has continued, so that today people can jump into immersive virtual reality or talk to seemingly conscious artificial beings.

Bostrom projected these technological trends into the future and imagined a world in which we'd be able to realistically simulate trillions of human beings. He also suggested that if someone could create a simulation of you that seemed just like you from the outside, it would feel just like you inside, with all of your thoughts and feelings.

Suppose that's right. Suppose that sometime in, say, the 31st century, humanity will be able to simulate whatever it wants. Some of those future people will probably be fans of the 21st century and will run many different simulations of our world so that they can learn about us, or just be amused.

Here's Bostrom's shocking logical argument: If 21st-century planet Earth only ever existed one time, but it will eventually get simulated trillions of times, and if the simulations are so good that the people in the simulation feel just like real people, then you're probably living in one of the trillions of simulations of Earth, not on the one original Earth. This argument would be even more convincing if you actually could run powerful simulations today; but as long as you believe that people will run those simulations someday, then you logically should believe that you're probably living in one today.
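To make the arithmetic behind that argument concrete, here is a rough illustration (the figure of a trillion simulations is a hypothetical number chosen for the example, not one from Bostrom): if there is one original Earth plus N indistinguishable simulated copies, and you have no way to tell which world you are in, then your chance of being in the original is

\[
P(\text{original}) = \frac{1}{N+1}, \qquad N = 10^{12} \;\Rightarrow\; P(\text{original}) = \frac{1}{10^{12}+1} \approx 10^{-12}.
\]

In other words, if you accept the premise that a trillion convincing simulations will one day run, the arithmetic alone puts the odds that yours is the one original world at about one in a trillion.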
Scientist Neil deGrasse Tyson explains the simulation hypothesis and why he thinks the odds are about 50-50 that we're part of a virtual reality.

Signs we're living in a simulation . . . or not

If we are living in a simulation, does that explain anything? Maybe the simulation has glitches, and that's why your phone wasn't where you were sure you left it, or how you knew something was going to happen before it did, or why that dress on the internet looked so weird.

There are more fundamental ways in which our world resembles a simulation. There is a particular length, much smaller than an atom, beyond which physicists' theories about the universe break down. And we can't see anything more than about 50 billion light-years away, because the light hasn't had time to reach us since the Big Bang. That sounds suspiciously like a computer game where you can't see anything smaller than a pixel or anything beyond the edge of the screen.

Of course, there are other explanations for all of that stuff. Let's face it: You might have misremembered where you put your phone. But Bostrom's argument doesn't require any scientific proof. It's logically true as long as you really believe that many powerful simulations will exist in the future. That's why famous scientists like Neil deGrasse Tyson and tech titans like Elon Musk have been convinced of it, though Tyson now puts the odds at 50-50.

Others of us are more skeptical. The technology required to run such large and realistic simulations is so powerful that Bostrom describes such simulators as godlike, and he admits that humanity may never get that good at simulations. Even though it is far from being resolved, the simulation hypothesis is an impressive logical and philosophical argument that has challenged our fundamental notions of reality and captured the imaginations of millions.

Hello, curious kids! Do you have a question you'd like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age, and the city where you live. And since curiosity has no age limit, adults, let us know what you're wondering, too. We won't be able to answer every question, but we will do our best.

Zeb Rocklin is an associate professor of physics at Georgia Institute of Technology. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

2026-02-08 07:00:00 | Fast Company

A few months ago, I walked into the office of one of our customers, a publicly traded vertical software company with tens of thousands of small business customers. I expected to meet a traditional support team with rows of agents on the phones, sitting at computers triaging tickets. Instead, it looked more like a control room. There were specialists monitoring dashboards, tuning AI behavior, debugging API failures, and iterating on knowledge workflows. One team member who had started their career handling customer questions over chat and email (resetting passwords, explaining features, troubleshooting one-off issues, and escalating bugs) was now writing Python scripts to automate routing. Another was building quality-scoring models for the company's AI agent.

This seemed markedly different from the hyperbole I'd been hearing about customer support roles going away, in large part due to AI. What I was seeing across our customer base looked more like a shift in how support work is defined. So I decided to take a closer look.

I analyzed 21 customer support job postings across AI-native companies, high-growth startups, and enterprise SaaS. These jobs run the gamut from technical support for complex software products to more transactional, commercial support involving billing and other common issues. What I found was that customer support is being rebuilt around AI-native workflows and systems-level thinking. Yes, responding to individual tickets is still important, but these roles now center on designing and operating the technical systems that resolve customer issues at scale. The result is a new kind of support role, one that's part operator, part technologist, part strategist.

AI Skills Are Now Table Stakes

For most of the last two decades, support hiring optimized for communication skills and product familiarity. But that baseline is now gone. Across the 21 job postings I analyzed, nearly three-quarters explicitly required experience with AI tools, automation platforms, or conversational AI systems. These roles are about configuring, monitoring, and improving AI systems over time: reviewing conversation logs, auditing AI behavior, and identifying failure modes. In other words, AI literacy has become the baseline for modern support work. If you don't understand how AI systems behave, you can't support the customers relying on them.

More than half of the roles I analyzed required candidates to debug APIs, analyze logs, write SQL queries, or script automations in Python or Bash. Many expected familiarity with cloud infrastructure, observability tools, or version control systems like Git. That would have been unthinkable in support job descriptions even five years ago. But it makes sense: When AI systems fail, they fail at scale. Diagnosing those failures requires technical fluency, like understanding how models interact with external systems and whether an issue is rooted in configuration versus product logic. The job has evolved from fixing problems ticket by ticket to preventing the next thousand tickets.
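The postings don't include sample code, but a routing automation of the kind described above might look something like this minimal sketch; the queue names, keywords, and ticket fields are hypothetical illustrations, not any company's actual system:

```python
# Minimal sketch of a keyword-based ticket-routing script, the kind of
# automation the job postings describe. The queues, keywords, and ticket
# fields here are hypothetical, not any company's actual system.
from dataclasses import dataclass

ROUTING_RULES = {
    "billing": ("invoice", "charge", "refund", "payment"),
    "auth": ("password", "login", "2fa", "locked out"),
    "data_sync": ("sync", "integration", "webhook", "export"),
}

@dataclass
class Ticket:
    ticket_id: str
    subject: str
    body: str

def route(ticket: Ticket) -> str:
    """Return the queue for a ticket; anything unmatched goes to a human."""
    text = f"{ticket.subject} {ticket.body}".lower()
    for queue, keywords in ROUTING_RULES.items():
        if any(kw in text for kw in keywords):
            return queue
    return "human_triage"

if __name__ == "__main__":
    t = Ticket("T-1001", "Double charge", "I was charged twice this month.")
    print(route(t))  # -> billing
```

Even a toy rule table like this shows why the work moves upstream: whoever maintains it is setting policy for thousands of future tickets rather than answering one.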
Humans Are Needed to Solve Harder Problems

Once AI becomes part of the support workflow, the nature of the work becomes more technical. One support leader I spoke with at a company that now contains more than 80% of its tickets with AI put it plainly: Once automation handles the easy questions, the work left behind gets harder. The same frontline agents who used to focus on quick wins are now handling the most frustrated customers and edge cases, and they've had to scale up their skills accordingly.

In practice, this often looks like a customer trying to complete a critical workflow, like syncing data between systems before running billing. An AI agent starts by working off documentation that a subject matter expert has synthesized from multiple functions across the company. From there, the AI agent can confirm that everything is configured correctly. However, the AI agent may not be integrated with the underlying system that failed silently hours earlier. The customer follows the guidance, only to discover downstream that data didn't move as expected. When the issue escalates, the subject matter expert has to reconstruct what happened across systems, reason through what the AI agent missed, and help the customer recover without losing trust.

This is the kind of end-to-end work that AI still can't do on its own. It requires both the technical fluency to trace failures across disparate systems and the human judgment to decide what can be fixed immediately versus what needs deeper product or engineering intervention. In this way, support has become less about answering questions out of the manual, and more about creating the manual and solving the problems it doesn't cover.

The Hybrid Human-AI Model Is the Default

Despite widespread fear about AI replacing support jobs, not a single posting I analyzed suggested that support would be 100% automated in the future. Instead, nearly every role gravitated toward a hybrid model where AI handles routine interactions while humans oversee quality and continuously improve the system. This makes sense when you consider that 95% of customer support leaders surveyed by Gartner last year said they would retain human agents in their operations to help define AI's role.

Titles like AI Support Specialist, AI Quality Analyst, and Support Operations Specialist were almost entirely focused on orchestration: designing escalation logic and defining when humans step in. This is where the earlier control room image becomes reality. The work of humans changes from simply answering questions to actually shaping systems.
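None of the postings spell out what that escalation logic looks like, but in spirit it is often a small, human-tuned policy layered on top of the AI agent. A minimal sketch, with entirely hypothetical signals and thresholds:

```python
# Minimal sketch of hybrid human/AI escalation logic. The signals and
# thresholds are hypothetical; real teams tune these continuously.

def should_escalate(ai_confidence: float,
                    customer_sentiment: float,
                    failed_attempts: int,
                    is_priority_account: bool) -> bool:
    """Decide when the AI agent should hand a conversation to a human."""
    if ai_confidence < 0.7:          # the model is unsure of its own answer
        return True
    if customer_sentiment < -0.5:    # the customer sounds frustrated
        return True
    if failed_attempts >= 2:         # the bot already tried and failed
        return True
    if is_priority_account:          # high-stakes relationships get a person
        return True
    return False

if __name__ == "__main__":
    print(should_escalate(0.9, 0.1, 0, False))   # False: AI keeps handling it
    print(should_escalate(0.9, -0.8, 0, False))  # True: frustrated customer
```

The code is trivial; the judgment encoded in those few lines, which signals matter and where the thresholds sit, is exactly the orchestration work the new roles are hired to do.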
Taken together, these trends point to a single conclusion: Customer support is specializing. The repetitive work is going away, but the judgment-heavy, technical work is expanding. That shift is already visible in how companies hire. The question now becomes whether organizations (and workers) are ready to adapt fast enough.

Category: E-Commerce

 
