The color of your house matters beyond aesthetics. An extensive body of research shows that painting buildings white (which reflects heat) can make them cooler, and painting them black (which absorbs heat) can make them warmer. This is the reason why most houses in Greece are white and many houses across Scandinavia are black. But what about the rest of the world, where temperatures often shift with the seasons?

Industrial designer Joe Doucet has developed what he calls a climate-adaptive paint that can change colors based on the temperature outside. The patent-pending formula, known as a thermochromic paint, follows the same principle as '90s mood rings. Except instead of jewelry changing color, it's the entire facade of a building. If the temperature outside is below 77°F, the building will be black. If it's above 77°F, it will turn white.

The formula can be mixed with other tints, so if you want a blue house, it would simply look light blue in the summer and dark blue in the winter. "It's phenomenal to think about the built environment changing with the seasons as nature does," says Doucet, who estimates that painting a building with this climate-adaptive paint could save an average of 20% to 30% on energy costs.

The power of paint

Many cities have turned to paint to alleviate urban problems like the heat island effect. In 2019, teams across Senegal, Bangladesh, Mexico, and Indonesia painted a total of 250,000 small household rooftops with white reflective paint as part of the Million Cool Roofs Challenge. In 2022, the city of L.A. covered 1 million square feet of streets and sidewalks in Pacoima, a low-income neighborhood, with solar-reflective paint. Surfaces cooled instantly by 10 to 12°F, and a year in, studies showed that ambient temperatures throughout the entire neighborhood had dropped by up to 3.5°F.

[Image: courtesy Joe Doucet and Partners]

A climate-adaptive paint could make a difference for houses and apartment buildings, but also for large industrial facilities like climate-controlled farms and warehouses that would otherwise turn to AC or heating to maintain a desired temperature. "It costs to heat and cool a large structure, so anything you can do to mitigate that cost makes sense commercially as well," says Richard Hinzel, partner and managing director at Joe Doucet and Partners.

Doucet first had the idea for a climate-adaptive paint while renovating his own home in Chappaqua, New York. "I put off what color it should be because I wanted to have an understanding of what color did in terms of energy use," he recalls. The designer, who recently gave wind turbines a much-needed design makeover, built two scale models of his house with the same kind of insulation material he used in the actual house. He painted the first model black and the second one white. For a year, he measured temperatures at the surface and inside both models, and found that, in high seasons like summer and winter, temperatures between the two varied by as much as 13°F. More specifically, in the summer, the white house was 12°F cooler inside than the black house, while in the winter, the black house was 7°F warmer inside. He says the opposite was also true: the black house was 13°F warmer inside in the summer, while the white house was 8°F colder in the winter.

[Image: courtesy Joe Doucet and Partners]

Doucet obtained these measurements from a scale model, not a full-sized house, but he notes that the only difference between the two would be the time it takes for each space to heat or cool.
"A smaller pan heats up and cools down faster than a larger one, but it does not get hotter or colder," he says by way of example.

At the end of the experiment, it occurred to him that the answer to his original question (what color to paint his house) was to paint it black in the winter and white in the summer. But that wasn't a practical solution.

The more practical solution, a paint that can be both at once, took two years to develop and about 100 more models to get the formula right. The team used commercially available latex house paint as a base, then mixed in their own proprietary formula. But crafting a formula that can sustain the transition from light to dark without degrading, and therefore ending up grey, proved difficult. If you've ever had transition glasses that got stuck on dark and never returned to clear, you understand the problem. If the paint degrades too fast and you have to repaint your house every month, then nobody will buy it.

The first few formulas degraded too fast, but the team eventually concocted a secret sauce that helps the paint last at least one year with zero degradation. That number reflects how long Doucet has been testing the paint in his studio. The final number could be even higher, or it could not.

The paint has yet to undergo rigorous lab tests, so many unknowns remain. "We're not starting a paint company," says Doucet. Instead, his team wants to license the formula to paint manufacturers, who would then take the climate-adaptive paint to the finish line and launch it themselves.

If the idea resonates and paint companies jump on the bandwagon, they will have to develop a competitive product that is both durable and priced accordingly. For now, Doucet estimates that the climate-adaptive paint will cost about three to five times more than a standard gallon of paint, though he says you'd quickly make that back in energy savings. "I'm confident that if there's a positive response, this could do very well on the market," he says.

In the meantime, Doucet finished renovating his house and opted for black. "I couldn't wait," he says with a laugh.
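The basic physics behind the color switch can be illustrated with a small, purely hypothetical sketch. The 77°F threshold comes from the article; the albedo (reflectivity) values and the solar load below are assumptions chosen for illustration, not Doucet's figures or measured properties of the paint.

```python
# Illustrative sketch of a temperature-triggered color change and its effect on absorbed solar heat.
# Threshold (77 °F) is from the article; albedo values and irradiance are assumed for illustration.

THRESHOLD_F = 77.0        # above this, the paint turns white (reflective); at or below, black
ALBEDO_DARK = 0.10        # assumed reflectivity of the dark (cold-weather) state
ALBEDO_LIGHT = 0.80       # assumed reflectivity of the light (hot-weather) state
SOLAR_IRRADIANCE = 600.0  # assumed average solar load on the wall, W/m^2

def albedo(outdoor_temp_f: float) -> float:
    """Return the paint's reflectivity for a given outdoor temperature."""
    return ALBEDO_LIGHT if outdoor_temp_f > THRESHOLD_F else ALBEDO_DARK

def absorbed_heat(outdoor_temp_f: float, irradiance: float = SOLAR_IRRADIANCE) -> float:
    """Solar power absorbed per square meter of wall, in W/m^2."""
    return irradiance * (1.0 - albedo(outdoor_temp_f))

if __name__ == "__main__":
    for temp in (30, 60, 77, 85, 95):  # sample outdoor temperatures in °F
        print(f"{temp:>3} °F -> albedo {albedo(temp):.2f}, absorbs {absorbed_heat(temp):.0f} W/m^2")
```

Under these assumed numbers, the dark state soaks up most of the incident sunlight in cold weather while the light state rejects most of it in hot weather, which is the mechanism behind the energy savings Doucet describes.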
A new scientific study warns that using artificial intelligence can erode our capacity for critical thinking. The research, carried out by a team of scientists from Microsoft and Carnegie Mellon University, found that depending on AI tools without questioning their validity reduces the cognitive effort applied to the work. In other words: AI can make us dumber if we use it wrong.

"AI can synthesize ideas, enhance reasoning, and encourage critical engagement, pushing us to see beyond the obvious and challenge our assumptions," Lev Tankelevitch, a senior researcher at Microsoft Research and coauthor of the study, tells me in an email interview. But to reap those benefits, Tankelevitch says, users need to treat AI as a thought partner, not just a tool for finding information faster. Much of this comes down to designing a user experience that encourages critical thinking rather than passive reliance. By making AI's reasoning processes more transparent and prompting users to verify and refine AI-generated content, a well-designed AI interface can act as a thought partner rather than a substitute for human judgment.

From 'task execution' to 'task stewardship'

The research, which surveyed 319 professionals, found that high confidence in AI tools often reduces the cognitive effort people apply to their work. "Higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking," the study states. This overreliance stems from a mental model that assumes AI is competent at simple tasks. As one participant admitted in the study, "it's a simple task and I knew ChatGPT could do it without difficulty, so I just never thought about it." Critical thinking didn't feel relevant because, well, who cares?

This mindset has major implications for the future of work. Tankelevitch tells me that AI is shifting knowledge workers from task execution to task stewardship. Instead of manually performing tasks, professionals now oversee AI-generated content, making decisions about its accuracy and integration. "They must actively oversee, guide, and refine AI-generated work rather than simply accepting the first output," Tankelevitch says.

The study highlights that when knowledge workers actively evaluate AI-generated outputs rather than passively accepting them, they can improve their decision-making processes. "Research also shows that experts who effectively apply their knowledge when working with AI see a boost in output," Tankelevitch points out. "AI works best when it complements human expertise, driving better decisions and stronger outcomes."

The study found that many knowledge workers struggle to critically engage with AI-generated outputs because they lack the necessary domain knowledge to assess their accuracy. Even if users recognize that AI might be wrong, they don't always have the expertise to correct it, Tankelevitch explains. This problem is particularly acute in technical fields where AI-generated code, data analysis, or financial reports require deep subject-matter knowledge to verify.

The cognitive offloading paradox

Confidence in AI can lead to a problem called cognitive offloading. This phenomenon isn't new: humans have long outsourced mental tasks to tools, from calculators to GPS devices. "Cognitive offloading is not inherently negative. When done correctly, it allows users to focus on higher-order thinking rather than mundane, repetitive tasks," Tankelevitch points out.
But the very nature of generative AI, which produces complex text, code, and analysis, brings a new level of potential mistakes and problems. Many people might blindly accept AI outputs without questioning them (and quite often these outputs are bad or just plain wrong). This is especially the case when people feel the task is not important. "Our study suggests that when people view a task as low-stakes, they may not review outputs as critically," Tankelevitch points out.

The role of UX

AI developers should keep that idea in mind when designing AI user experiences. These chat interfaces should be organized in a way that encourages verification, prompting users to think through the reasoning behind AI-generated content. Redesigning AI interfaces to aid in this new task-stewardship process and encourage critical engagement is key to mitigating the risks of cognitive offloading. "Deep reasoning models are already supporting this by making AI's processes more transparent, making it easier for users to review, question, and learn from the insights they generate," he says.

Transparency matters. Users need to understand not just what the AI says, but why it says it. You have probably seen this in an AI platform like Perplexity: its interface offers a clear logical path that outlines the thoughts and actions the AI takes to reach a result. By redesigning AI interfaces to also include contextual explanations, confidence ratings, or alternative perspectives when needed, AI tools can shift users away from blind trust and toward active evaluation of the results. Another UX intervention may involve actively prompting the user about key aspects of the AI-generated output, nudging users to directly question and refine these outputs rather than passively accepting them. The final product of this open collaboration between AI and human is better, just as creative processes are often much better when two people work together as a team, especially when the strengths of one person complement the strengths of the other.
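To make that interface idea concrete, here is a minimal sketch, assuming a hypothetical chat product, of a loop that surfaces the model's reasoning and a confidence cue before the answer and asks the user to record a verification note. Everything in it is illustrative: `ask_model` stands in for whatever model API a real product would call, and the specific prompts are one possible design, not something prescribed by the study.

```python
# Minimal sketch of a "verification-first" chat turn: show reasoning and a confidence cue,
# ask the user to engage, and only then release the answer.
# ask_model() is a hypothetical stand-in for a real LLM API call.

from dataclasses import dataclass

@dataclass
class ModelReply:
    answer: str
    reasoning: str      # the model's own account of how it got there
    confidence: float   # heuristic score between 0 and 1 (illustrative)

def ask_model(prompt: str) -> ModelReply:
    """Hypothetical model call; a real product would query its LLM provider here."""
    return ModelReply(
        answer="(model answer would appear here)",
        reasoning="(model's step-by-step rationale would appear here)",
        confidence=0.62,
    )

def chat_turn(prompt: str) -> None:
    reply = ask_model(prompt)
    # Reasoning and confidence come *before* the answer, nudging evaluation first.
    print(f"Reasoning:\n{reply.reasoning}\n")
    print(f"Confidence (heuristic): {reply.confidence:.0%}")
    note = input("Does this reasoning hold up? Note one thing you verified or doubt: ")
    if not note.strip():
        print("Answer withheld until a verification note is recorded.")
        return
    print(f"\nAnswer:\n{reply.answer}")
    print(f"(Verification note logged: {note!r})")

if __name__ == "__main__":
    chat_turn("Summarize our quarterly complaint data.")
```

The point of the sketch is the ordering: the user has to engage with the reasoning before the output is released, which is one concrete way an interface can push toward task stewardship rather than passive acceptance.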
Some will get dumber

The study raises crucial questions about the long-term impact of AI on human cognition. If knowledge workers become passive consumers of AI-generated content, their critical thinking skills could atrophy. However, if AI is designed and used as an interactive, thought-provoking tool, it could enhance human intelligence rather than degrade it.

Tankelevitch points out that this is not just theory; it has been borne out in the field. For example, there are studies that show AI can boost learning when used in the right way, he says. "In Nigeria, an early study suggests that AI tutors could help students achieve two years of learning progress in just six weeks," he says. Another study showed that students working with tutors supported by AI were more likely to master key topics. The key, Tankelevitch tells me, is that this was all teacher-led: educators guided the prompts and provided context, thus encouraging that vital critical thinking.

AI has also demonstrated that it can enhance problem-solving in scientific research, where experts use it to explore complex hypotheses. "Researchers using AI to assist in discovery still rely on human intuition and critical judgment to validate results," Tankelevitch notes. "The most successful AI applications are those where human oversight remains central."

Given the current state of generative AI, the technology's effect on human intelligence will not depend on the AI itself, but on how we choose to use it. UX designers can certainly help promote good behavior, but it's up to us to do the right thing. AI can either amplify or erode critical thinking, depending on whether we critically engage with its outputs or blindly trust them. The future of AI-assisted work will be determined not by the sophistication of the technology, but by humans. My bet: as with every other technological revolution in the history of civilization, some people will get a lot dumber and others will get a lot smarter.
The generative AI revolution shows no sign of slowing: OpenAI recently rolled out its GPT-4.5 model to paying ChatGPT users, while competitors have announced plans to introduce their own latest models, including Anthropic, which unveiled Claude 3.7 Sonnet, its latest language model, late last month. But the ease of use of these AI models is having a material impact on the information we encounter daily, according to a new study published on arXiv, Cornell University's preprint server. An analysis of more than 300 million documents, including consumer complaints, corporate press releases, job postings, and messages for the media published by the United Nations, suggests that the web is being swamped with AI-generated slop.

The study tracks the purported involvement of generative AI tools in creating content across those key sectors between January 2022 and September 2024. "We wanted to quantify how many people are using these tools," says Yaohui Zhang, one of the study's coauthors and a researcher at Stanford University. The answer was: a lot. Following the November 30, 2022, release of ChatGPT, the estimated proportion of content in each domain that showed signs of AI generation or involvement skyrocketed. From a baseline of around 1.5% in the 11 months prior to the release of ChatGPT, the proportion of customer complaints that exhibited some sort of AI help increased tenfold. Similarly, the share of press releases with hints of AI involvement rapidly increased in the months after ChatGPT became widely available.

Identifying which areas of the United States were more likely to adopt AI to help write complaints was made possible by the data accompanying the text of each complaint made to the Consumer Financial Protection Bureau (CFPB), the government agency that Donald Trump has now dissolved. In the 2024 data analyzed by the academics, complainants in Arkansas, Missouri, and North Dakota were the most likely to use AI, with its presence in around one in four complaints, while West Virginia, Idaho, and Vermont residents were least likely, with between one in 20 and one in 40 complaints showing evidence of AI.

Unlike off-the-shelf AI detection tools, Zhang and his colleagues developed their own statistical framework to determine whether something was likely AI-generated. It compared linguistic patterns, including word frequency distributions, in texts written before the release of ChatGPT against those known to have been generated or modified by large language models. The outputs were then tested against known human- or AI-written texts, with prediction errors lower than 3.3%, suggesting the framework was able to accurately discern one from the other.

Like many, the team behind the work is worried about the impact of samizdat content flooding the web, particularly in so many areas, from consumer complaints to corporate and nongovernmental organization press releases. "I think [generative AI] is somehow constraining the creativity of humans," says Zhang.
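The paper's exact framework isn't reproduced here, but the general idea of comparing word-frequency distributions can be sketched in a few lines. The toy classifier below is a simplified assumption rather than the authors' method: it scores a document against two add-one-smoothed unigram models, one built from a human-written reference corpus and one from known LLM output, and labels the document by which corpus makes it more likely.

```python
# Toy illustration of frequency-based detection: score a document against unigram models
# estimated from a human-written reference corpus and an LLM-generated reference corpus.
# A simplified sketch, not the statistical framework used in the study.

import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def unigram_model(corpus: list[str]) -> Counter:
    """Word counts pooled over a reference corpus (a list of documents)."""
    counts = Counter()
    for doc in corpus:
        counts.update(tokenize(doc))
    return counts

def log_likelihood(doc: str, counts: Counter, alpha: float = 1.0) -> float:
    """Add-one-smoothed unigram log-likelihood of the document under the model."""
    total = sum(counts.values())
    vocab = len(counts) + 1
    return sum(
        math.log((counts[w] + alpha) / (total + alpha * vocab))
        for w in tokenize(doc)
    )

def label(doc: str, human_counts: Counter, ai_counts: Counter) -> str:
    """Pick whichever reference model assigns the document a higher likelihood."""
    return ("likely AI-assisted"
            if log_likelihood(doc, ai_counts) > log_likelihood(doc, human_counts)
            else "likely human-written")

if __name__ == "__main__":
    # Tiny made-up reference corpora; the real study used millions of documents.
    human_refs = ["I am writing to complain about a late fee charged to my account."]
    ai_refs = ["I am reaching out to formally express my concerns regarding the charges."]
    human_model, ai_model = unigram_model(human_refs), unigram_model(ai_refs)
    print(label("I want to formally express my concerns regarding a late fee.",
                human_model, ai_model))
```

In the actual study, the reference texts were documents written before ChatGPT's release versus texts known to be generated or modified by large language models, and the framework was validated against held-out examples to reach the reported sub-3.3% error rate; the sketch above only shows the shape of the comparison.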