
2025-02-06 18:00:00| Fast Company

Are you suffering from "Zoom fatigue"? Exhausted from being on video calls multiple times a week, or even every day? Well, it may be because you're sick of looking at your own face, according to a new study.

That study, from researchers at Michigan State University and published in the journal PLOS One, discovered that "facial appearance dissatisfaction" could explain the weariness people are experiencing when using videoconferencing technology. Our increased reliance on virtual meetings in the workplace, especially with the rise of remote work, means we are spending a lot more time on camera, which has significant implications for workplace productivity and individual well-being, according to the researchers.

"Our study highlights that dissatisfaction with facial appearance contributes to Zoom fatigue, leading to reduced adoption of virtual meeting technologies," said study author Chaeyun Lim of Michigan State University.

The study also looked at impression management features, which are tools that enable users to adjust their self-video to manage their appearance, and found that dissatisfaction with facial appearance "also drives the use of impression management features, emphasizing the need to address worker well-being in virtual communication environments."

In other words, researchers found that individuals who didn't like how they looked had more Zoom fatigue and saw virtual meetings as less useful, leading them to be less interested in adopting the technology. Taken together, the findings shed light on why some people are less likely to want to attend virtual meetings. The study involved 2,448 U.S.-based workers answering a 15-minute survey. The group included professional, technical, and scientific workers who worked remotely at least part of the time and regularly attended virtual meetings for work.

It's not the first study about negative feelings associated with virtual meetings. Another study from Austrian researchers in 2023 examined the effects of videoconferencing directly on the brain and heart. In that study, electrodes were stuck on the heads and chests of 35 students who took part in 50-minute lectures through videoconference and in person. According to the brain and heart readings, students had significantly greater levels of fatigue, drowsiness, negativity, and sadness, and less attention, after videoconferencing than after in-person lectures.

So, what's the solution? Individuals and organizations can adopt practices such as scheduling regular breaks, according to Graz University of Technology's René Riedl, co-senior author of the Austrian study. "Based on our research results, we recommend a break after 30 minutes, because we found that after 50 minutes of videoconferencing, significant changes in physiological and subjective fatigue could be observed. Moreover, utilizing features like speaker view to mitigate the intensity of perceived continuous eye contact could be helpful."


Category: E-Commerce

 

LATEST NEWS

2025-02-06 17:00:00| Fast Company

The Brannock device, that sliding metal gadget used in shoe stores to measure the dimensions of your feet, was invented 100 years ago this year. But footwear fitting hasn't really gotten more advanced since, says Dan Cataldi, founder and CEO of custom insole maker Groov. For most people, it still comes down to finding shoes by style and size, taking a few steps in them, and hoping for the best. And when it comes to insoles, the part of the shoe that you actually walk on, people with medical issues and professional athletes might get custom orthotic inserts fitted by a doctor, while most people make do with what comes in their shoes or, in a pinch, a cushioning insert from the drugstore.

[Photo: Groov]

Groov is designed to bridge that gap, using an app that lets customers scan their own feet and footwear at home with their iPhone cameras so it can build 3D models of their two feet and understand the shapes of their shoes. Then, the company's machine learning algorithms can design a variety of styles of insoles for each customer. "For me, the whole notion of Groov is taking something that should exist within footwear and bringing it into footwear," Cataldi says. "If you've got to go see a clinician, if you've got to make an appointment, that's not footwear."

Options include the Plush, an everyday cushioning model designed for comfort, and a high-elasticity model designed for athletes, known as the Response model. "Here we replace the soft, shock-absorbing, low-elasticity cushion with a high-elasticity, more explosive cushion for a quick first step, and I blast off in each step," he says. There's also the Luxe, a more discreet replacement for built-in insoles for shoes like high heels, designed to be thin enough to stick into the shoe without being visible when the shoes are worn.

[Photo: Groov]

Groov insoles typically arrive within a few days, engraved with the customer's name or another chosen nickname. If customers want to order more or try another style, like switching from the regular cushioning model to the athletics-focused variety, they can do so from the app. Customers are likely to want to retake their scans every year or two, or if they have reason to believe something has changed in their feet, and they can scan new shoes or the shoes' existing insoles to order Groov insoles adapted to a particular pair, Cataldi says.

Key to the easy customization is the TrueDepth camera system in the iPhone that's used for FaceID logins. The camera projects, then captures, a grid of invisible infrared dots, used in the FaceID system to create a unique model of the face and by Groov to similarly understand the contours of the foot.

[Image: Groov]

"What that enables us to do is bypass any need for a clinical visit if it's a non-medical situation, and get all of the data with millimeter-level precision," he says. And replacing that clinical visit with a brief, at-home foot scan means reaching a wide audience who'd simply never think of getting inserts from a doctor. After all, Cataldi says, his own father is a chiropractor who provides orthotic inserts for patients and wore them himself, but even as a young athlete, Cataldi thought the medicinal-seeming devices felt like overkill.

From Groov's perspective, being able to create insoles on demand is also an advantage, since there's no inventory that has to sit around company warehouses or on retailer shelves. But in December the company did pilot pop-ups in Nordstrom's men's and women's departments in Manhattan, where Groov was able to scan dozens of customers' feet in-store.
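The depth scanning described above boils down to turning a per-pixel depth map from a structured-light sensor into a 3D point cloud of the foot, which a surface model can then be fitted to. Purely as an illustration of that principle, not Groov's code, here is the standard back-projection step; the camera intrinsics and depth frame below are placeholder values.

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud.

    depth: (H, W) array of per-pixel depth values from a depth sensor.
    fx, fy, cx, cy: pinhole camera intrinsics (focal lengths, principal point).
    Returns an (N, 3) array of points in camera coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Hypothetical example: a 480x640 depth frame with placeholder intrinsics.
depth_frame = np.random.uniform(0.2, 0.5, size=(480, 640))
cloud = depth_map_to_point_cloud(depth_frame, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # roughly (307200, 3): one 3D point per valid pixel
```

A surface fitted through a cloud like this is what yields the kind of millimeter-level shape data Cataldi describes, which the company's design algorithms then turn into an insole.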
Future retail collaborations may be in the works, Cataldi says, and the company is also in talks with footwear companies about potential partnerships. Deals with e-commerce companies to integrate the technology into their shopping experiences might also be in Groov's future, he says, and for now the company is promoting the technology through social media, with athletes and others already highlighting their use of the inserts on Instagram.

Another happy customer, Cataldi says, is his own father. While he still advocates orthotic inserts for patients who need them, he's switched away from them himself. "Now, he wears Groovs," Cataldi says.


Category: E-Commerce

 

2025-02-06 17:00:00| Fast Company

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI's deep research gives a preview of the AI agents of the future

OpenAI announced this week its AI research assistant, which it calls deep research. Powered by OpenAI's o3-mini model (which was trained to use trial and error to find answers to complex questions), deep research is one of OpenAI's first attempts at a real agent that's capable of following instructions and working on its own.

OpenAI says deep research is built for people in fields like finance, science, policy, and engineering who need thorough, precise, and reliable research. It can also be useful for big-ticket purchases, like houses or cars. Because the model needs to spin a lot of cycles and tote around a lot of memory during its task, it uses a lot of computing power on an OpenAI server. That's why only the company's $200-per-month Pro users have access to the tool, and they're limited to 100 searches per month. OpenAI was kind enough to grant me access for a week to try it out. I found a new deep research button just below the prompting window in ChatGPT.

I first asked it to research all the nondrug products that claim to help people with low back pain. I was thinking about consumer tech gadgets, but I'd not specified that. So ChatGPT was unsure about the scope of my search (and, apparently, so was I), and it asked me if I wanted to include ergonomic furniture and posture correctors. The model researched the question for 6 minutes, cited 20 sources, and returned a 2,000-word essay on all the consumer back pain devices it could find on the internet. It discussed the relative values of heated vibration belts, contact pad systems, and Transcutaneous Electrical Nerve Stimulation (TENS) units. It even generated a grid that displayed all the details and pricing of 10 different devices. Not knowing a great deal about such devices, I couldn't find any gaps in the information, or any suspect statements.

I decided to try something a little harder. "I would like an executive overview of the current research into using artificial intelligence to find new cancer treatments or diagnostic tools," I typed. "Please organize your answer so that the treatments that are most promising, and closest to being used on real patients, are given emphasis."

Like DeepSeek's R1 model and Google's Gemini Advanced 2.0 Flash Thinking Experimental, OpenAI's research tool also shows you its chain of thought as it works toward a satisfying answer. While it searched, it telegraphed its process: "I'm working through AI's integration in cancer diagnostics and treatment, covering imaging, pathology, genomics, and radiotherapy planning. Progressing towards a comprehensive understanding." OpenAI also makes a nice UX choice by putting this chain-of-thought flow in a separate pane at the right of the screen, instead of presenting it right on top of the research results. The only problem is, you only get one chance to see it, because it goes away after the agent finishes its research.

I was surprised that OpenAI's deep research tool used only 4 minutes to finish its work, and cited only 18 sources. It created a summary of how AI is being used in cancer research, citing specific studies that validated the AI in clinical settings. It discussed trends in using AI in reading medical imaging, finding cancer risk in genome data, AI-assisted surgery, drug discovery, and radiation therapy planning and dosing.
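As a rough mental model of the agent pattern described above, iteratively searching, reading sources, taking notes, and synthesizing a cited answer, here is a minimal sketch. The helper functions are stand-in stubs, not OpenAI's implementation or API; they exist only so the loop itself runs.

```python
def search_web(query):          # stand-in: a real agent would call a search tool
    return [f"https://example.com/{abs(hash(query)) % 100}"]

def read_page(url):             # stand-in: a real agent would fetch and parse the page
    return f"contents of {url}"

def summarize(text):            # stand-in: a real agent would use a model to condense notes
    return text[:80]

def follow_up_questions(text):  # stand-in: a real agent would decide what to chase next
    return []

def deep_research_sketch(question, max_steps=20):
    """Search, read, accumulate notes, and return a cited synthesis."""
    notes, sources, queries = [], [], [question]
    for _ in range(max_steps):
        if not queries:
            break
        query = queries.pop(0)
        for url in search_web(query):
            text = read_page(url)
            notes.append(summarize(text))
            sources.append(url)
            queries.extend(follow_up_questions(text))
    return {"question": question, "notes": notes, "sources": sources}

print(deep_research_sketch("nondrug products that claim to help with low back pain"))
```

Whatever the real loop looks like inside OpenAI, the quality of what it returns depends on what it finds and how current those sources are.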
However, I noticed that many of the studies and FDA approvals cited didn't occur within the past 18 months. Some of the statements in the report sounded outdated: "Notably, several AI-driven tools are nearing real-world clinical use, with some already approved, particularly in diagnostics (imaging and pathology)," it stated, but AI diagnostic tools are already in clinical use.

Before starting the research, I was aware of a new landmark study published two days ago in The Lancet medical journal about AI assisting doctors in reading mammograms (more on that below). The deep research report mentioned this same study, but it outlined preliminary results published in 2023, not the more recent results published this month.

I have full confidence in OpenAI's deep research tool for doing product searches. I'm less confident, though, about scientific research, only because of the currency of the research it included in its report. It's also possible that my search was overbroad, since AI is now being used on many fronts to fight cancer. And to be clear: Two searches certainly isn't enough to pass judgment on deep research. The number and kinds of searches you can do is practically infinite, so I'll be testing it more while I still have access. On the whole I'm impressed with OpenAI's new tool; at the very least it gives you a framework and some sources and ideas to start you off on your own research.

AI is working alongside doctors on early breast cancer detection

A study of more than 100,000 breast images from mammography screenings in Sweden found that when an AI system assisted single doctors in reviewing mammograms, positive detections of cancer increased by 29%. The screenings were coordinated as part of the Swedish national screening program and performed at four screening sites in southwest Sweden.

The AI system, called Transpara, was developed by ScreenPoint Medical in the Netherlands. Normally, two doctors review mammograms together. When AI steps in for one of them, overall screen reading time drops by 44.2%, saving lots of time for oncologists. The AI makes no decisions; it merely points out potential problem spots in the image and assigns a risk score. The human doctor then decides how to proceed (a rough sketch of this second-reader workflow appears after this newsletter item). With a nearly 30% improvement in early detections of cancer, the AI is quite literally saving lives. Healthcare providers have been using AI image recognition systems in diagnostics since 2017, and with success, but the results of large-scale studies are only now beginning to appear.

Google touts the profitability of its AI search ads

Alphabet announced its quarterly results earlier this week, and hidden among the other results was some good news about Google's AI search results (called AI Overviews). Some observers feared that Google would struggle to find ad formats that brands like within the new AI results, or that ads around the AI results would cannibalize Google's regular search ads business. But Google may have found the right formats already, because the AI ads are selling well and are profitable, analysts say. "We were particularly impressed by the firm's commentary on AI Overviews monetization, which is approximately at par with traditional search monetization despite its launch just a few months ago," says Morningstar equity analyst Malik Ahmed Khan in a research brief.

Khan says Google's AI investments paid off in the company's revamped Shopping section within Google Search, which was upgraded last quarter with AI. The Shopping segment yielded 13% more daily active U.S. users in December 2024 compared with the same month a year earlier. Google also says that younger people who are attracted to AI Overviews end up using regular Google Search more, with their usage increasing over time. "This dynamic of AI Overviews being additive to Google Search stands at odds with the market narrative of generative AI being the death knell for traditional search," Khan says. Google also announced that it intends to spend $75 billion in capital expenditures during 2025, much of which will go toward new cloud capacity and AI infrastructure.

More AI coverage from Fast Company:

Hundreds of rigged votes can skew AI model rankings on Chatbot Arena, study finds
AI might run your next employee training
You can try DeepSeek's R1 through Perplexity, without the security risk
Why this cybersecurity startup wants to watermark everything

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
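The Transpara item above describes a second-reader workflow: the AI flags suspicious regions and assigns a risk score, and the remaining human radiologist makes every recall decision. A minimal sketch of that triage pattern, assuming a hypothetical 0-to-1 risk score and a made-up review threshold (neither comes from the study or from ScreenPoint Medical's software), might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Screening:
    case_id: str
    ai_risk_score: float                 # hypothetical 0-1 score from the AI second reader
    flagged_regions: list = field(default_factory=list)  # regions the AI marked as suspicious

# Hypothetical threshold; in practice this would be set and validated clinically.
REVIEW_THRESHOLD = 0.3

def route_case(case: Screening) -> str:
    """Route one screening case through an AI-assisted double read.

    The AI never makes the final call: it only prioritizes what the
    human radiologist looks at, mirroring the workflow described above.
    """
    if case.ai_risk_score >= REVIEW_THRESHOLD or case.flagged_regions:
        return f"{case.case_id}: radiologist reviews flagged regions and decides on recall"
    return f"{case.case_id}: radiologist confirms as routine; no recall"

print(route_case(Screening("case-001", ai_risk_score=0.72, flagged_regions=["upper outer quadrant"])))
print(route_case(Screening("case-002", ai_risk_score=0.05)))
```

The point of the pattern is the division of labor the study tested: the AI replaces one of the two human readers and narrows attention, while the doctor keeps the decision.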


Category: E-Commerce

 

Latest from this category

06.02 RBLX stock plummets 20% after Roblox misses target for daily active users
06.02 Bluesky photo-sharing app Flashes launches in beta
06.02 How Max's international expansion is paying off
06.02 Billabong, Roxy, Volcom on list of 120 stores closing as Liberated Brands files for Chapter 11 bankruptcy, blames fast fashion
06.02 As the federal resignation offer looms, some workers dig in their heels
06.02 4 things entrepreneurial couples can teach every cofounder
06.02 Got Zoom fatigue? It might be because you're sick of the way you look, says study
06.02 This startup can measure custom insoles with just an iPhone camera

All news

06.02 Bear Radar
06.02 Stocks Slightly Higher into Final Hour on Stable Long-Term Rates, Earnings Outlook Optimism, Technical Buying, Construction/Road & Rail Sector Strength
06.02 Tomorrow's Earnings/Economic Releases of Note; Market Movers
06.02 Bull Radar
06.02 What Makes This Trade Great: QNTM Still in Play?
06.02 Protecting the US from hackers apparently isn't in Trump's budget
06.02 Get one year of Peacock Premium for only $30
06.02 The ESA wants to replace E3 with a bunch of buzzwords