A few of the neatest gadgets at the Consumer Electronics Show (CES) 2026 weren't anywhere near the Las Vegas Convention Center trade show venue. Instead, they were sitting on a table at The Venetian Resort's food court, at least on Monday, when Core Devices founder Eric Migicovsky was holding press meetings. He had a couple of quirky Pebble smartwatches to show off, with lo-fi e-paper screens in round and rectangular forms, and he was wearing an early version of the Pebble Index, a smart ring whose main job is capturing voice notes. (He moved to a booth in the bowels of the Venetian expo when CES officially got underway.)

Unlike a lot of exhibitors, Migicovsky isn't promising anything revolutionary, but he also made clear that Core's mission has expanded. Beyond just making smartwatches, he now sees the company as a purveyor of fun but indispensable gadgets. The Pebble Index is just the start.

"Core Devices is building the gadgets we want, (because) no one else is," he says.

Three watches and a smart ring

It's now been about nine years since Pebble shut down, selling its assets to Fitbit after the Apple Watch sucked out the oxygen for smartwatch startups. Maybe Pebble's fate was unavoidable, but Migicovsky also regrets overextending into areas he wasn't passionate about, like fitness tracking. ("I'm not a Whoop guy," he says.)

Core Devices is a chance to start fresh. After spending three years as a Y Combinator partner, and then selling his messaging startup Beeper to Automattic (reportedly for $125 million), Migicovsky has no desire to go the usual startup route again. When Google agreed to open-source the original Pebble operating system last year, he put up the R&D money for a new batch of watches, then started taking preorders.

With the new Pebble watches, the core appeal is the same as the originals': geeky watch faces, reliable push-button controls, e-paper screens for long battery life, hackability. For Core Devices' first new watch, the Pebble 2 Duo, the hardware is also similar, as Migicovsky found a supplier with some original Pebble 2 components and repurposed them into 8,000 new watches that shipped late last year.

The next batch of Pebbles is more like what the original company might've built if it had survived longer. The $225 Pebble Time 2 looks like a standard rectangular smartwatch, except it lasts for a month between charges, while the $199 Pebble Round 2 ditches heart rate monitoring for a slim design and two weeks of battery life. Both have larger screens and much narrower bezels than any of Pebble's original watches.

As for the $99 Pebble Index 01 ring, Migicovsky says the idea came from struggling to remember things and wanting to record them in a way that becomes muscle memory. Talking into the ring while holding its clickable button records a voice note, which a companion app transcribes into text. A double-click allows for programmable actions such as smart home controls or AI queries (whose answers, for instance, could appear on a Pebble). A Pebble app with similar functionality is coming, but the point of the ring is that you only need one free thumb to use it.

Meanwhile, Migicovsky is cutting out all the things he hated about making hardware before. He raises money for the watches through preorders instead of investors, sells them through Core's website instead of dealing with retailers, and doesn't bother with sales forecasting.
The resulting sales have been modest (25,000 Pebble Time 2 preorders, 7,000 more for the Round, and 8,000 for the now-sold-out Pebble 2 Duo), but the company has far exceeded its minimums for what Migicovsky considers viable. That means Core Devices can keep making new gadgets.

"We decided this go-round that we'll just do the things that are fun," he says.

Beyond the watch

Among longtime Pebble fans, the Index ring has been contentious, in large part because it's not designed to last. Its internal battery isn't rechargeable or replaceable, and after 12 to 15 hours of recording time, it'll simply stop working. (Migicovsky estimates a two-year lifespan for someone who records 10 to 20 thoughts per day.) Core Devices will offer to recycle the metal, but it'll throw the electronics away.

Migicovsky says the single-use battery was necessary for an attractive design with water resistance, and he likes the idea of never having to take the ring off, even in the shower. But because the original Pebble watches have endured for so long (a decade later, thousands of people still use theirs), the Index's disposable nature feels incongruous, even if Migicovsky downplays it.

"I would say that most devices are made to be thrown away, and that's the secret of the industry that nobody ever talks about," he says.

The Index also signals that Core Devices is more than a smartwatch company now. While the original goal was to scratch one specific itch, Migicovsky now says he has "lots more" ideas for new products. There will be prerequisites: Whatever Core Devices makes can't already exist, must have low R&D costs, and should be possible to build with a small team. (The company currently employs five people, all on the software side besides Migicovsky himself.) Its products will have to solve everyday problems, even if they're niche ones.

Still, the company has more things to figure out first. While the Index uses on-device speech-to-text for voice notes, it's unclear how it'll cover the cost of using AI to process custom commands, or for its optional Wispr Flow-powered transcriptions. Migicovsky doesn't love the idea of subscriptions but isn't sure about alternatives. Employing a team obviously has ongoing costs as well, which means Core Devices will need to expand from its tiny audience, find recurring revenue streams, or keep releasing new things.

But even as it expands, Core Devices is keeping its ambitions in check, which at a venue like CES can be pretty refreshing.

"We're not trying to invent some new computing category," Migicovsky says. "We're not trying to take over the world."
Although there is no shortage of AI enthusiasts, the general public remains uneasy about artificial intelligence. Two concerns dominate the conversation, both amplified by popular and business media. The first is AI's capacity to automate work, fueling widespread FOBO, or fear of becoming obsolete. The second is AI's tendency to reproduce or even exacerbate human bias.

On the first, the evidence remains mixed. The clearest signal so far is not the wholesale replacement of jobs, but the automation of tasks and skills within jobs. Most workers are less likely to lose their roles outright than to be forced to rethink what they do at work and where they add value. In that sense, AI is less an executioner than a pressure test on human contribution. As we have previously noted, AI is exposing the BS economy, in the sense of automating low-value activity and commoditizing what's not relevant.

On the second, however, concerns feel more visceral, since there's clear evidence of AI amplifying, or at least perpetuating, human biases. Indeed, algorithms replicate the loudest and most common outcomes. Tools trained on historical hiring and promotion data mirror the demographic preferences of past decision-makers, overlooking qualified candidates and harming both those individuals and the organizations that end up missing out on better talent. Large language models produce outputs that disadvantage marginalized users because of skewed training data. Add to this the political and moral assumptions embedded, often unintentionally, in AI systems, and it's easy to conclude that AI is simply a faster, colder version of human prejudice.

To be sure, AI will never be bias-free. And yet it can still be less biased than humans (okay, it's a low bar). Importantly, under the right conditions, it can make things a lot better.

Humans are biased, but that's not a bug, it's a feature. It's a consequence of cognitive shortcuts that evolved for speed and survival. But survival is knee-jerk, and often optimizes for the immediate while shortchanging the long-term success that comes from thoughtfulness and fairness. Nobel Prize winner Daniel Kahneman showed us how quick decisions are often suboptimal, yet we rely on those quick, intuitive decisions frequently, and even more frequently when we are under stress and time pressure. Yet one of the great strengths of humanity is that we are also capable of reflection and correction. And AI is in some ways uniquely suited to help counteract predictable distortions that have plagued humanity for centuries.
Consider six ways this is already beginning to happen.

1. AI can help us better understand others

AI is now embedded in many of the platforms we use to communicate at work. Increasingly, it can analyze patterns in language, tone, and behavior to infer emotional states, intentions, or levels of engagement. Tools like Textio help us get out of our own way by flagging language that's not aligned to our goals. These systems are far from perfect, but they don't need to be. They simply need to outperform the average human in situations where human judgment is weakest.

Research on emotional intelligence shows that people are generally better at reading members of their own group than outsiders. Cultural distance, unfamiliar communication styles, and implicit stereotypes distort perception. AI systems trained on data from different cultures and groups can sometimes decode signals more consistently than humans navigating unfamiliar social terrain. There's evidence that using technologies like VR to experience others' realities can build lasting empathy. Used responsibly, these kinds of augmentation can support empathy rather than replace it, helping people pause before misinterpreting disagreement as hostility or silence as disengagement.

2. AI can force us to confront alternative viewpoints

One of the ironies of AI criticism is that we often accuse systems of bias as a way of deflecting attention from our own. When people complain that generative AI is politically or ideologically slanted, they are usually revealing where they themselves stand. Properly designed, AI can be used to surface competing perspectives rather than reinforce echo chambers. What's more, AI can do this by framing arguments and evidence in ways that make them easier to understand and accept without triggering judgment or combativeness.

For example, leaders can ask AI to articulate the strongest possible case against their preferred strategy, or to rewrite a proposal from the perspective of different stakeholders. In conflict resolution, AI can summarize disagreements in neutral language, stripping away emotional triggers while preserving substance. This doesn't make AI objective, but it can make us less lazy. By lowering the cognitive and emotional cost of perspective taking, AI can help counteract confirmation bias, one of the most pervasive and damaging distortions in organizational life.

3. AI can improve meritocracy in hiring and promotion

Few domains are as saturated with bias as talent decisions. Decades of research show that human intuition performs poorly when predicting job performance, yet confidence in gut feeling remains stubbornly high. When trained on clean data and validated against real outcomes, AI consistently outperforms unstructured human judgment for job decisions. This is not just because algorithms can process more information, but because they can ignore information humans struggle to disregard. Demographic cues, accents, schools, and social similarity exert a powerful pull on human decision-makers even when they believe they're being fair.

Well-designed AI systems can also be updated as job requirements evolve, allowing them to unlearn outdated assumptions. Humans, by contrast, often cling to obsolete success profiles long after they stop predicting performance. AI does not guarantee fairness, but it can move decisions closer to evidence and further from intuition.

4. AI can make bias visible rather than invisible

One of the most underestimated benefits of AI is its diagnostic power. Algorithms can reveal patterns humans prefer not to see. Disparities in performance ratings, promotion velocity, pay progression, or feedback language are often dismissed as anecdotal until AI surfaces them at scale. When bias remains implicit, it's easy to deny. When it's quantified, it becomes discussable. Used transparently, AI can help organizations audit their own behavior and hold themselves accountable. For example, AI can help identify whether specific interview questions (or interviewers) are driving unexpectedly uneven outcomes, so that the questions used are more likely to help pick the most qualified candidates (a minimal sketch of this kind of audit follows below). Importantly, this shifts bias reduction from moral aspiration to operational reality.
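To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of audit described above: given a set of interview records, it computes pass rates per interviewer and per question, then flags any that deviate sharply from the average. The data, names, and threshold are illustrative assumptions, not a description of any particular vendor's product.

from collections import defaultdict

# Hypothetical interview records: (interviewer, question_id, passed).
records = [
    ("alice", "Q1", True),  ("alice", "Q2", False), ("alice", "Q1", True),
    ("bob",   "Q1", False), ("bob",   "Q2", False), ("bob",   "Q1", False),
    ("cara",  "Q2", True),  ("cara",  "Q1", True),  ("cara",  "Q2", True),
]

def pass_rates(records, key_index):
    """Aggregate pass rates, keyed by interviewer (0) or question (1)."""
    totals, passes = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec[key_index]] += 1
        passes[rec[key_index]] += rec[2]  # True counts as 1
    return {key: passes[key] / totals[key] for key in totals}

def flag_outliers(rates, threshold=0.25):
    """Flag anything whose pass rate deviates from the mean by more than `threshold`."""
    mean = sum(rates.values()) / len(rates)
    return {key: rate for key, rate in rates.items() if abs(rate - mean) > threshold}

by_interviewer = pass_rates(records, 0)
by_question = pass_rates(records, 1)
print("Pass rate by interviewer:", by_interviewer)
print("Possible interviewer outliers:", flag_outliers(by_interviewer))
print("Pass rate by question:", by_question)
print("Possible question outliers:", flag_outliers(by_question))

Even a crude tally like this turns a vague suspicion ("that question feels unfair") into a number that can be examined, which is exactly the shift from implicit to discussable that the point above describes.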
5. AI can slow us down at the right moments

Bias thrives under speed, pressure, and ambiguity. Many of the most consequential workplace decisions are made quickly, under cognitive load, and with incomplete information. AI can introduce friction where it matters. By flagging inconsistent judgments, prompting justification, or suggesting structured criteria, AI can act as a cognitive speed bump. It doesn't remove responsibility from humans. It reminds them that intuition isn't always insight.

6. AI can help us understand ourselves, not just others

Bias does not only distort how we judge other people. It also shapes how we see ourselves. Research on self-assessment consistently shows that people are poor judges of their own abilities, impact, and behavior. We overestimate our strengths, underestimate our blind spots, and rationalize patterns that others notice immediately. AI can help close this self-awareness gap.

One increasingly common use case is AI as a coach or reflective mirror. Unlike human feedback, which is often delayed, filtered, or softened, AI can analyze large volumes of behavioral data and surface patterns that individuals struggle to see on their own. This might include identifying communication habits that derail meetings, emotional triggers that precede conflict, or leadership behaviors that correlate with disengagement in teams.

Consider how AI is already being used to summarize feedback from performance reviews, engagement surveys, or 360 assessments. Rather than relying on selective memory or defensiveness, individuals can see recurring themes across contexts and time. This reduces self-serving bias, the tendency to attribute successes to skill and failures to circumstance.

The same logic explains the growing popularity of AI as a therapeutic or coaching aid. AI systems don't replace trained professionals, but they can prompt reflection, ask structured questions, and challenge inconsistencies in people's narratives. Because AI has no ego, no reputation to manage, and no emotional investment in the user's self-image, it can sometimes feel safer to explore uncomfortable insights with a machine than with another human.

Of course, self-awareness without judgment is not the same as wisdom. AI can highlight patterns, but humans must interpret and act on them. Used responsibly, however, AI can help individuals recognize how their intentions differ from their impact, how their habits shape outcomes, and how their own biases show up in everyday decisions, and it can help monitor and reinforce progress to support lasting change. In that sense, AI's most underappreciated debiasing potential may not lie in correcting how we evaluate others but in helping us see ourselves more clearly.

A necessary note of caution

None of this implies that AI automatically reduces bias.
Poorly designed systems can amplify inequality faster than any individual manager ever could. Debiasing requires intentional choices: representative data, continuous monitoring, transparency, and human oversight. The real danger is not trusting AI too much; it's using AI carelessly while pretending it's neutral.

Bias is a human problem before it's a technological one. AI simply forces us to confront it more explicitly. Used well, AI can help organizations move closer to the meritocratic ideals they already claim to value, ideals that also help organizations succeed. Used badly, it will expose the gap between rhetoric and reality.

The question is not whether AI will shape workplace decisions. It already does. The real question is whether we will use it to reinforce our blind spots, or to finally see them more clearly.
Roblox, a gaming app used by nearly half of all U.S. children under 16, has rolled out a new mandatory safety feature to put a stop to children communicating with adults on the platform.

Starting on January 7, players in the U.S. were required to submit to facial age estimation via the app to access the chat feature, although age verification remains optional to play the games themselves. Users in the U.K., Australia, New Zealand, and the Netherlands are already required to complete an age check to chat with other users, but the requirement will now roll out to the U.S. and beyond.

The verification is being processed by a third-party vendor, Persona. Once the age check is processed, Roblox says it will delete any images or videos of users. If the age-check process incorrectly estimates a user's age, the decision can be appealed and the child's age verified through alternative methods. Users 13 or older may also opt for ID-based checks.

Once users complete the age check, they are assigned to one of six age groups (under 9, 9-12, 13-15, 16-17, 18-20, and 21+). Users can only communicate with players in their own age group and the groups directly above and below it. For example, a 9-year-old cannot chat with users older than 15, and a 16-year-old can only chat with those ages 13 to 20. (A short sketch of this rule appears at the end of this article.) The feature is designed to prevent children younger than 16 from communicating with adults. About 42% of Roblox users are younger than 13.

“As the first large online gaming platform to require facial age checks for users of all ages to access chat, this implementation is our next step toward what we believe will be the gold standard for communication safety,” wrote Matt Kaufman, Roblox's chief safety officer, and Rajiv Bhatia, its head of user and discovery product, in a blog post.

Parental consent is still required for users younger than 9 to access chat features, while age-checked users 13 and older can chat with people they know beyond their immediate age group via the Trusted Connections feature.

“Leveraging multiple signals, [Roblox is] constantly evaluating user behavior to determine if someone is significantly older or younger than expected,” the company execs continued. “In these situations, we will begin asking users to repeat the age-check process.”

The face scan is launching as the company faces increased scrutiny over child safety on the app. Attorneys general around the country are investigating Roblox, and nearly 80 active lawsuits accuse Roblox of enabling child exploitation, with some parents alleging their children encountered predators on the app.
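To illustrate the age-group adjacency rule described above, here is a minimal, hypothetical sketch in Python. The six group boundaries come from the article; the function names and logic are illustrative assumptions, not Roblox's actual implementation, and the Trusted Connections exception is not modeled.

# Age groups as described in the article, ordered youngest to oldest.
AGE_GROUPS = ["under 9", "9-12", "13-15", "16-17", "18-20", "21+"]

def group_index(age: int) -> int:
    """Map an estimated age to its age-group index (thresholds per the article)."""
    if age < 9:
        return 0
    if age <= 12:
        return 1
    if age <= 15:
        return 2
    if age <= 17:
        return 3
    if age <= 20:
        return 4
    return 5

def can_chat(age_a: int, age_b: int) -> bool:
    """Two users may chat only if their age groups are identical or adjacent."""
    return abs(group_index(age_a) - group_index(age_b)) <= 1

# Matches the article's examples:
assert can_chat(16, 13) and can_chat(16, 20)  # the 16-17 group reaches ages 13-20
assert not can_chat(9, 16)                    # a 9-year-old can't reach anyone over 15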