
2025-11-20 11:00:00| Fast Company

You've just finished a strenuous hike to the top of a mountain. You're exhausted but elated. The view of the city below is gorgeous, and you want to capture the moment on camera. But it's already quite dark, and you're not sure you'll get a good shot. Fortunately, your phone has an AI-powered night mode that can take stunning photos even after sunset. Here's something you might not know: That night mode may have been trained on synthetic nighttime images, computer-generated scenes that were never actually photographed.

As artificial intelligence researchers exhaust the supply of real data on the web and in digitized archives, they are increasingly turning to synthetic data, artificially generated examples that mimic real ones. But that creates a paradox. In science, making up data is a cardinal sin. Fake data and misinformation are already undermining trust in information online. So how can synthetic data possibly be good? Is it just a polite euphemism for deception?

As a machine learning researcher, I think the answer lies in intent and transparency. Synthetic data is generally not created to manipulate results or mislead people. In fact, ethics may require AI companies to use synthetic data: Releasing real human face images, for example, can violate privacy, whereas synthetic faces can offer similar benefits with formal privacy guarantees.

There are other reasons that help explain the growing use of synthetic data in training AI models. Some things are so scarce or rare that they are barely represented in real data. Rather than letting these gaps become an Achilles' heel, researchers can simulate those situations instead. Another motivation is that collecting real data can be costly or even risky. Imagine collecting data for a self-driving car during storms or on unpaved roads. It is often much more efficient, and far safer, to generate such data virtually.

Here's a quick take on what synthetic data is and why researchers and developers use it.

How synthetic data is made

Training an AI model requires large amounts of data. Like students and athletes, the more an AI is trained, the better its performance tends to be. Researchers have known for a long time that if data is in short supply, they can use a technique known as data augmentation. For example, a given image can be rotated or scaled to yield additional training data.
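
To make that concrete, here is a minimal sketch of that kind of augmentation. The torchvision library and the file name are assumptions chosen for illustration; the article itself names no specific tools.

# A minimal sketch of classic data augmentation (assumed toolchain:
# torchvision + Pillow; "photo.jpg" is a placeholder path).
from PIL import Image
from torchvision import transforms

# Small, label-preserving perturbations: rotate, rescale/crop, relight.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2),
])

image = Image.open("photo.jpg")

# Each call yields a slightly different variant of the same photo,
# stretching a small dataset without inventing a new scene.
variants = [augment(image) for _ in range(5)]

Each variant keeps the original scene and label, which is what separates this kind of augmentation from the fully synthetic examples the article turns to next.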

Synthetic data is data augmentation on steroids. Instead of making small alterations to existing images, researchers create entirely new ones.

But how do researchers create synthetic data? There are two main approaches. The first relies on rule-based or physics-based models. For example, the laws of optics can be used to simulate how a scene would appear given the positions and orientations of objects within it. The second approach uses generative AI to produce data. Modern generative models are trained on vast amounts of data and can now create remarkably realistic text, audio, images, and videos. Generative AI offers a flexible way to produce large and diverse datasets. Both approaches share a common principle: If data does not come directly from the real world, it must come from a realistic model of the world.

Downsides and dangers

It is also important to remember that while synthetic data can be useful, it is not a panacea. Synthetic data is only as reliable as the models of reality it comes from, and even the best scientific or generative models have weaknesses. Researchers have to be careful about potential biases and inaccuracies in the data they produce. For example, researchers may simulate the home-insurance ecosystem to help detect fraud, but those simulations could embed unfair assumptions about neighborhoods or property types. The benefits of such data must be weighed against risks to fairness and equity.

It's also important to maintain a clear distinction between models and simulations on one hand and the real world on the other. Synthetic data is invaluable for training and testing AI systems, but when an AI model is deployed in the real world, its performance and safety should be proved with real, not simulated, data, for both technical and ethical reasons.

Future research on synthetic data in AI is likely to face many challenges. Some are ethical, some are scientific, and others are engineering problems. As synthetic data becomes more realistic, it will be more useful for training AI, but it will also be easier to misuse. For example, increasingly realistic synthetic images can be used to create convincing deepfake videos.

I believe that researchers and AI companies should keep clear records to show which data is synthetic and why it was created. Clearly disclosing which parts of the training data are real and which are synthetic is a key aspect of responsibly producing AI models. California's law, "Generative artificial intelligence: training data transparency," set to take effect on January 1, 2026, requires AI developers to disclose if they used synthetic data in training their models. Researchers should also study how mistakes in simulations or models can lead to bad data. Careful work will help keep synthetic data transparent, trustworthy, and reliable.

Keeping it real

Most AI systems learn by finding patterns in data. Researchers can improve their ability to do this by adding synthetic data. But AI has no sense of what is real or true. The desire to stay in touch with reality and to seek truth belongs to people, not machines. Human judgment and oversight in the use of synthetic data will remain essential for the future.

The next time you use a cool AI feature on your smartphone, think about whether synthetic data might have played a role. Our AIs may learn from synthetic data, but reality remains the ultimate source of our knowledge and the final judge of our creations.

Ambuj Tewari is a professor of statistics at the University of Michigan. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

2025-11-20 11:00:00| Fast Company

Michelle had barely knotted her apron strings before the day turned ugly. "When I told her I could only serve regular coffee, not the waffle-flavored one she wanted, she threw the boiling-hot pot at me," she tells Fast Company, recounting one violent encounter with a customer. Working at a popular all-day breakfast chain, Michelle has learned that customer service often means surviving other people's rage: "I've been cussed out, had hot food thrown on me, even dodged a plate thrown at my head," she says. Lately, the sexual comments from male customers have gotten worse. (Workers in this story have been given pseudonyms to protect them from retaliation.) Still, she shows up, because she hopes to save enough to launch her own business soon.

Once upon a time, "the customer is king" was a rallying cry for better service. Today, it's a management mantra gone feral. What began as good business sense, touted by historic retail magnates like Marshall Field and Harry Selfridge, has curdled into a corporate servitude that treats employees as expendable shock absorbers for awful behavior and diva demands. With the holiday rush looming, customer-facing workers in cafés, call centers, and car garages are bracing themselves to smile through every client's tantrum, no matter how absurd.

Rampant hostility, and it's getting worse

At Michelle's workplace, the patron always comes first, while the safety of staff barely makes the list. Even after several viral videos of incidents at the chain's restaurants, she says her complaints rarely go anywhere. One of her managers will step in if he sees something on the floor that's out of line, but others just ask what she did to provoke it. "It makes me angry, yet I feel I just have to take it," she says. "It's an epidemic."

That dynamic is baked into North American service culture. "The 'customer is king' mantra has become a free pass for people to act however they want, with impunity," says Gordon Sayre, a professor at Emlyon Business School in Lyon, France, who has been studying its impact on employees. "It breeds entitlement, and that entitlement gets abused, leaving workers with almost no room to push back." The mantra dictates that service staff stay deferential, careful about their every word and gesture, while clients hold the upper hand. With some workers getting all of their take-home pay from tips and gratuity, customers can quite literally decide how much an employee earns. And according to Sayre's research, that mix of financial power and enforced politeness makes sexual harassment on the job more likely.

The data mirrors reality. In a 2025 survey of 21,000 US frontline workers in healthcare, food service, education, retail, and transportation, more than half (53%) said they'd recently faced verbally abusive, threatening, or unruly customers. There's also been a meaningful uptick in customers acting out. According to Arizona State University's annual National Customer Rage survey, 43% admit to having raised their voice to show displeasure, up from 35% in 2015. And since 2020, the percentage of customers seeking revenge for their hassles has tripled. Such encounters take a toll: Employees on the receiving end are twice as likely to report that their jobs are damaging their physical health, and nearly twice as likely to feel unsafe at work, according to analytics platform Perceptyx.

Management didn't back my coworker

Madison has been a server for more than a decade, bouncing between casual spots and fine dining rooms.
These days, she's at a former Michelin-starred restaurant in New York, and she's long since accepted the industry's devotion to "the customer is always right." She sees it play out nightly, usually when someone insists a dish isn't cooked properly, or worse, admits they just don't like it. "There's a specific type of persnickety person who gets drunk on the power of being rude and demanding," she tells Fast Company. "Once I spot a table with that vibe, I know I'm in for a long night."

The problem is, the mentality rewards bad behavior. Recently, a diner claimed he'd only had one beer, when it was clearly two. "Management didn't back my coworker, and the guy was charged for just one, which ultimately comes out of our tip pool," says Madison. "He might have left with a bad taste, but he still got what he wanted."

Most hospitality staff Fast Company spoke with said the same thing: Comping drinks, desserts, and even entire checks has become routine when someone complains. That generosity, however, comes at a time when restaurants and bars can least afford it. Across the US, the industry is being squeezed from both sides: soaring labor and ingredient costs on one end, and cautious consumer spending on the other. Growth in 2025 has been even slower than during the pandemic lockdown years.

So why are so many establishments still giving freebies to difficult customers? Because in the age of online reviews, every unhappy diner is a one-person marketing department, ready to dish out brutal takedowns. A single post can tank a spot's reputation, and naming individual staff is common practice. To avoid bad publicity, businesses are trading profit for peace, and making sacrifices to get those all-important five-star ratings. Even a middling three-star review, which most customers equate to a good or average experience, can obliterate visibility on platforms like Yelp or Google.

For individual frontline employees, those digital judgments hit harder. A dip in ratings can mean being moved to a slower section or losing a lucrative shift. And in the platform gig economy, where algorithmic rankings rule, a single bad review can mean less work, or none at all. Danielle, a salon owner in Washington, remembers when an unhappy client not only left a bad review, but recruited 200 others to do the same. "I've no idea how she found so many people, but it was traumatizing watching one-star reviews just flood in," she says. Danielle has contacted Google and Yelp in the past, but they refuse to remove reviews. Even on online platforms stuffed with fake and fraudulent bot reviews, the customer is always right, right?

Rest assured, we'll be talking about you behind your back

The real problem with the beloved slogan isn't the complaints or stingy tips. It's the emotional contortion required to stay polite while being treated like a punching bag. Rose Hackman, author of Emotional Labor: The Invisible Work Shaping Our Lives and How to Claim Our Power, interviewed service workers across industries for her book and found a resounding answer: What counts isn't the service, it's the smile. "Emotional labor is highly devalued, feminized, and rendered invisible, despite it being one of the most central forms of work in our economy," says Hackman. "We need to value it more."

Of course, that responsibility sits not just with consumers, but with employers too. Until the culture actually changes, employees cope the best they can. Avery, a server in an upmarket seafood restaurant in Philadelphia, has gotten better at protecting herself with age.
"I used to fold like a beach chair to their needs and demands, but I'm less willing now," she explains. "Outside of this job, I'm a performer, and there are similarities there: I put on a mask, act out a show, then the lights come up, I clock out, and I get to be someone else."

Sadly, no coping strategy is perfect. "Closing yourself off and faking an emotion, also known as surface acting, can look professional, but it impacts your mood," explains Sayre. "Trying to fix the situation or reframe the customer's behavior can protect your emotional health, but hurts performance." Instead, venting with trusted coworkers acts as a vital pressure valve: a place to express real emotions and recover from the constant stress.

Jesse, a New York bartender, is amazed by the rancid behavior he sees on the daily, but the camaraderie with his team keeps him sane. "If you walk in and make my life harder, talking to me in a way you would never speak to a friend or your mother; babe, you've decided what our relationship is gonna be," he says. "Rest assured, we'll be talking about you behind your back, laughing and joking about how you're dressed."

With "the customer is king" still reigning, America desperately needs a reminder about the inherent social contract of emotional labor, a contract that only works if respect flows both ways. Without it, the whole system falls apart, leaving behind burnt-out staff and sour customers. As Jesse says: "You're a guest in my home, so I'm gonna take care of you. All you have to do is enjoy your night, and pay me for the work I do."


Category: E-Commerce

 

2025-11-20 11:00:00| Fast Company

Tech giants are making grand promises for the AI age. The technology, we are told, might discover a new generation of medical interventions, and possibly answer some of the most difficult questions facing physics and mathematics. Large language models could soon rival human intellectual abilities, they claim, and artificial superintelligence might even best us. This is exciting, but also scary, they say, since the rise of AGI, or artificial general intelligence, could pose an uncontrollable threat to the human species.

U.S. government officials working with AI, including those charged with both implementing and regulating the tech in the government, are taking a different tack. They admit that the government is still falling behind the private sector in implementing LLM tech, and that there's reason for agencies to speed up adoption. Still, many question the hyperbolic terminology used by AI companies to promote the technology. And they warn that the biggest dangers presented by AI are not those associated with AGI that might rival human abilities, but other concerns, including unreliability and the risk that LLMs are eventually used to undercut democratic values and civil rights.

Fast Company spoke with seven people who've worked at the intersection of government and technology about the hype behind AI, and what excites and worries them about the technology. Here's what they said.

Charles Sun, former federal IT official

Sun, a former employee at the Department of Homeland Security, believes AI is, yes, overhyped, especially, he says, when people claim that AI is "bigger than the internet." He describes the technology simply as large-scale pattern recognition powered by statistical modeling, noting AI's current wave is "impressive but not miraculous." Sun argues that the tech is an accelerator of human cognition, not a replacement for it. "I prefer to say that AI will out-process us, not outthink us. Systems can already surpass human capacity in data scale and speed, but intelligence is not a linear metric. We created the algorithms, and we define the rules of their operation."

"AI in government should be treated as a critical-infrastructure component, not a novelty," he continues. "The danger isn't that AI becomes 'too intelligent,' but that it becomes too influential without accountability. The real threat is unexamined adoption, not runaway intelligence."

Former White House AI official

"I was worried at the beginning of this . . . when we decided that instead of focusing on mundane everyday use cases for workers, we decided at a national security front that we need to wholesale replace much of our critical infrastructure to support and be used by AI," says the person, who spoke on background. "That creates a massive single point of failure for us that depends largely on compute and data centers never failing, and models being impervious to attacks, neither of which I don't think anyone, no matter how technical they are or not, would place their faith in."

The former official says they're not worried about AGI, at least for now: "Next token prediction is not nearly enough for us to model complex behaviors and pattern recognition that we would qualify as general intelligence."

David Nesting, former White House AI and cybersecurity adviser

"AI is fantastic at getting insights out of large amounts of data. Those who have AI will be better capable of using data to make better decisions, and to do so in seconds rather than days or weeks.
There's so much data about us out there that hasn't really hurt us because nobody's ever really had the tools to exploit it all, but that's changing quickly," Nesting says. "I'm worried about the government turning AI against its own people, and I'm worried about AI being used to deprive people of their rights in ways that they can't easily understand or appeal."

Nesting adds: "I'm also worried about the government setting requirements for AI models intended to eliminate 'bias,' but without a clear definition of what 'bias' means. Instead, we get AI models biased toward some 'official' ideological viewpoint. We've already seen this in China: Ask DeepSeek about Tiananmen Square. Will American AI models be expected to maintain an official viewpoint on the January 6th riots?"

"I think we're going to be arguing about what AGI means long after it's effectively here," he continues. "Computers have been doing certain tasks better than people for nearly a century. AI is just expanding that set of tasks more quickly."

"I think the more alarming milestone will be the point at which AI can be exploited by people to increase their own power and harm others. You don't need AGI for that, and in some ways we're already there," Nesting says. "Americans today are increasingly and unknowingly interacting online with fake accounts run by AI that are indistinguishable from real people, even whole communities of people, confirming every fear and anxiety they have, and validating their outrage and hatred."

Abigail Haddad, former member of the AI Corps at DHS

The biggest problem currently, Haddad argues, is that AI is actually being underused in government. An immense amount of work went into making these tools available inside federal agencies, she notes, but what's available in the government is still behind what's available commercially. There are concerns about LLMs training on data, but those tools are operating on cloud systems that follow federal cybersecurity standards. "People who care about public services and state capacity should be irate at how much is still happening manually and in Excel," she says.

Tony Arcadi, former chief information officer of the Treasury Department

"Computers are already smarter than us. It's a very nebulous term. What does that really consist of? At least my computer is smarter than me when it comes to complex mathematical calculations," Arcadi says. "The sudden emergence of AGI or the singularity, there's this thing called Roko's basilisk, where the AI will go back in time and, I don't remember the exact thing, but kill people who interfered with its development. I don't really go for all of that."

He adds: "The big challenge that I see leveraging AI in government is less around, if you will, the fear factor of the AI gone rogue, but more around the resiliency, reliability, and dependability of AI, which, today, is not great."

Eric Hysen, former chief information officer at DHS

When asked a few months ago whether AI might become so powerful that the process of governing might be offloaded to software, Hysen shared the following: "I think there is something fundamentally human that Americans expect about their government. . . . Government decision-making, at some level, is fundamentally different than the way private companies make decisions, even if they are of very similar complexity." Some decisions, he added, "we're always going to want to be fundamentally made by a human being, even if it's AI-assisted in a lot of ways."
"It's going to look more long term like heavy use of AI that will still ultimately feed, for a lot of key things, to human decision makers."

Arati Prabhakar, former science and technology adviser to President Biden

Prabhakar, who led the Office of Science and Technology Policy under President Joe Biden, is concerned that the conversation about AGI is being used to influence policy around the technology more broadly. She's also skeptical that the technology is as powerful as people foretell. "I really feel like I'm in a freshman dorm room at 2 in the morning when I start hearing those conversations," she says. "Your brain is using 20 or 25 watts to do all the things that it does. That includes all kinds of things that are way beyond LLMs. [It's] about 25 watts compared to the mega data centers that it takes to train and then to use AI models."

That's just one hint that we are so far from anything approximating human intelligence, she argues. "Most troubling is it puts the focus on the technology rather than the human choices that are being made in companies and by policymakers about what to build, where to use it, and what kind of guardrails really will make it effective."

This story was supported by the Tarbell Center for AI Journalism.


Category: E-Commerce

 

2025-11-20 10:56:00| Fast Company

President Trump recently promised to make America the "crypto capital of the world." And his administration is working hard to make that pledge a reality. White House officials have established a working group on digital asset markets and directed federal agencies to craft a strategy to cement U.S. leadership. The president's legislative team, meanwhile, helped push the GENIUS Act (Guiding and Establishing National Innovation for U.S. Stablecoins Act) through Congress earlier this summer, thus creating the first federal framework for stablecoins. And they're working to pass the Clarity Act (Digital Asset Market Clarity Act), which would finally settle disputes over which regulator oversees digital assets.

It's refreshing to see our political leaders working to bring digital assets into the financial mainstream, especially after years of hostility from the prior administration. But the work is far from finished, and achieving universal legitimacy will require not just favorable laws and regulations, but also behavioral changes at leading crypto firms.

Conflicting guidance

For more than a decade, crypto innovators faced a patchwork of state regimes and conflicting federal guidance. The lack of clear regulation led to a proliferation of scams and bad actors, and kept many investors on the sidelines. Big banks and other legacy financial institutions hesitated to adopt cryptocurrencies and the underlying blockchain technology they're based on, even as top financiers acknowledged blockchain's potential to reshape the entire industry.

The GENIUS Act represents Washington's first serious attempt to genuinely regulate, rather than ignore or suppress, one of the leading forms of cryptocurrency. The new law requires stablecoin issuers to maintain dollar-for-dollar reserves and submit to audits. Far from rejecting this level of regulation, crypto leaders practically begged for it. They recognized that federal oversight and transparent standards are needed to transform what the public previously viewed as a speculative product into a reliable payment instrument. That's why industry leaders are also working with the White House and Congress to finalize the Clarity Act, which would define the boundaries of authority between the Securities and Exchange Commission and the Commodity Futures Trading Commission, delivering the kind of predictability that underpins every functioning capital market.

Cultural shift

But better regulation alone won't bring about the mainstream approval that industry leaders seek. Only an internal cultural shift, and rigorous self-policing, can deliver that. Every blockchain transaction depends on various forms of intellectual property: patents on mobile crypto wallets and bitcoin mining data centers, trade secrets in proprietary trading algorithms, copyrights protecting exchange software, and trademarks that build consumer trust. Coinbase, for instance, holds nearly 200 active patents. But most of the intellectual property powering today's blockchain activity belongs to third parties outside the crypto industry.

Yet even as leading platforms generate billions in revenue, the industry remains reluctant to acknowledge the legitimacy of IP rights. This reluctance is playing out in court. In May, Bancor's nonprofit arm sued Uniswap, alleging that the leading decentralized exchange built its multibillion-dollar business on Bancor's patented automated market maker technology without authorization.
And earlier this summer, Malikie Innovations filed suits against Core Scientific and Marathon Digital, claiming their bitcoin mining operations infringe on Malikie's patents for elliptic curve cryptography (ECC). ECC, a cryptographic technique developed and patented by Certicom years before crypto went mainstream, was licensed by companies like Cisco and Motorola, as well as the National Security Agency. Cases like these highlight the tension: Crypto companies depend on IP to function, but too many are willing to disregard the IP rights of others, even as they clamor for legitimacy.

Not how respectable companies operate

This simply isn't the way respectable companies in mature industries operate. Spotify and Apple Music wouldn't enjoy their positive reputations if they refused to pay royalties to artists and record labels. Streaming platforms like Netflix and Hulu would be pariahs if they pirated films. Banks would be shunned by investors if they treated software licenses as optional. If leading crypto firms want to be seen as respectable, investable pillars of the global economy, they need to meet those same standards when it comes to intellectual property.

Digital assets are here to stay. But universal legitimacy will come only from a combination of comprehensive regulation and a cultural shift within the industry itself.


Category: E-Commerce

 

2025-11-20 10:45:00| Fast Company

If you slip a tiny wearable device on your fingertip and slide it over a smooth surface like a touchscreen, you can feel digital textures like denim or mesh. The device, designed by researchers at Northwestern University, is the first of its kind to achieve human resolution, meaning that it can more accurately match the complex way a human fingertip senses the world.

"In previous attempts at haptic devices like this, once you compare them to real textures, you realize there's something still missing," says Sylvia Tan, a PhD student at Northwestern and one of the authors of a new study in Science Advances about the research. "It's close, but not quite there. Our work is trying to just get that one step closer."

[Photo: Northwestern University]

The wearable, made from flexible, paper-thin latex, is embedded with tiny nodes that push into the skin in a precise way and can move up to 800 times per second. Past devices had low resolution: the touch equivalent of a pixelated image, or an early movie from the 1890s with so few frames that the movement looks jerky. Arranging the nodes at a particular density improves that resolution.

[Photo: Northwestern University]

Earlier devices were also bulky. The ultrathin new technology, which weighs less than a gram, is designed to be comfortable to wear. "A big goal was to make it very lightweight so you aren't distracted by it," Tan says. "And [to make] something that we call 'haptically transparent': that means that even when you're wearing it, you can still perceive the real world, so you can perform everyday tasks."

[Photo: Northwestern University]

In the study, users could identify fabrics like corduroy or leather with 81% accuracy. The technology is still in development, but in the future, it could make it possible to feel products as you shop online. It could also have more immediate uses for people who are visually impaired, like making it possible to feel a tactile map, or translating text on a screen to braille without a large, expensive device. On devices like microwaves, where physical buttons have often been replaced by flat touchscreens, the wearable could help a visually impaired person know where to push. It could also help improve human-robot interfaces.

"In the medical field, the da Vinci robot has very good kinesthetic force feedback," Tan says. "But getting a surgeon to feel exactly what's happening at your fingertip as you move the angle of your finger is not quite there. And that's very important for high-skill workers."


Category: E-Commerce

 
