A team of researchers has uncovered what they say is the first reported use of artificial intelligence to direct a hacking campaign in a largely automated fashion. The AI company Anthropic said this week that it disrupted a cyber operation that its researchers linked to the Chinese government. The operation used an artificial intelligence system to direct the hacking campaigns, a development the researchers called disturbing because it could greatly expand the reach of AI-equipped hackers.

While concerns about the use of AI to drive cyber operations are not new, what is concerning about the new operation is the degree to which AI was able to automate some of the work, the researchers said. "While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale," they wrote in their report.

The operation was modest in scope, targeting only about 30 individuals who worked at tech companies, financial institutions, chemical companies, and government agencies. Anthropic noticed the operation in September and took steps to shut it down and notify the affected parties. The hackers succeeded in only a small number of cases, according to Anthropic, which noted that while AI systems are increasingly being used in a variety of settings for work and leisure, they can also be weaponized by hacking groups working for foreign adversaries.

Anthropic, maker of the generative AI chatbot Claude, is one of many tech companies pitching AI agents that go beyond a chatbot's ability to access computer tools and take actions on a person's behalf. Agents are valuable for everyday work and productivity, but in the wrong hands they can substantially increase the viability of large-scale cyberattacks, the researchers concluded, and those attacks are likely only to grow in their effectiveness.

A spokesperson for China's embassy in Washington did not immediately return a message seeking comment on the report.

Microsoft warned earlier this year that foreign adversaries were increasingly embracing AI to make their cyber campaigns more efficient and less labor-intensive. America's adversaries, as well as criminal gangs and hacking companies, have exploited AI's potential, using it to automate and improve cyberattacks, spread inflammatory disinformation, and penetrate sensitive systems. AI can translate poorly worded phishing emails into fluent English, for example, as well as generate digital clones of senior government officials.
President Donald Trump has worked to blame Democrats for the government shutdown, but a majority of Americans are unconvinced that it's Democrats' fault. Trump's administration used the levers of the state to communicate partisan messages during the shutdown, which ended November 13. Ultimately, however, messaging through government channels like web design, out-of-office email replies, and public service announcements wasn't enough.

A 52% majority of Americans blame Trump or Republican lawmakers for the shutdown, according to a poll this week from Stack Data Strategy, a London market research firm. That's in line with an NBC News poll last month that found 52% blamed Trump and Republican lawmakers. And a YouGov poll released last week found voters rated how Democrats in Congress handled the shutdown slightly better than how Trump or Republicans in Congress did. These are slim majorities, but they also show the limits of Trump's influence over public opinion when it comes to the shutdown.

"Nobody wins in a shutdown," Kenneth Cosgrove, a professor in the department of political science and legal studies at Suffolk University, tells Fast Company in an email. "The question is which party gets more of the blame? Traditionally it's been Congress just because of the media and marketing advantages the executive branch has."

But Trump himself hasn't been fully engaged with ending the shutdown, as his attention has been split among other efforts, including trips abroad to the Middle East and Asia and overseeing his White House renovation project. Trump "wasn't very visible" during the shutdown, Cosgrove says. "Plus, how many people look at government websites on a regular basis? Probably not that many."

Most people aren't browsing the Department of Housing and Urban Development (HUD) website, where a bright red banner for the duration of the shutdown said "The Radical Left in Congress shut down the government." And because major airports refused to air a video filmed with Homeland Security Secretary Kristi Noem blaming Democrats for the shutdown, many travelers didn't see it even as they spent extra hours at the airport due to delays and cancellations.

With any political messaging, there are two important questions: "How many people actually saw or heard the message, and what else were they seeing or hearing?" says Yana Krupnikov, a professor of communications and media at the University of Michigan. "The information environment around us is so full: yes, we have messages on websites and out-of-office emails, but we also have news coverage from various sources, and we have people on social media. People also talk to each other," Krupnikov says.

It's also not as if Democrats come out of the shutdown unscathed. The deal to reopen the government came from a handful of Senate Democrats who crossed party lines. The resulting deal doesn't include Affordable Care Act subsidies, meaning millions of Americans' health insurance premiums are expected to go up. The deal is unpopular with many Democratic lawmakers, including Elizabeth Warren of Massachusetts and Chris Murphy of Connecticut.

Still, it turns out tearing down the East Wing draws more attention than a Department of Education OOO message ever could, and SNAP cuts and canceled flights resonate more deeply with the public than a White House website shutdown countdown clock blaming Democrats. In a busy news environment, it's hard to break through, even for Trump.
Emerging like a mirage in the desert outskirts of Dubai, a sight unfamiliar to those in the Middle East and Asia has risen in the exact dimensions of the field at Yankee Stadium in New York. Now that it's built, though, one question remains: Will the fans come?

That's the challenge for the inaugural season of Baseball United, a four-team, monthlong contest that will begin Friday at the new Barry Larkin Field, artificially turfed for the broiling sun of the United Arab Emirates and named for an investor who is a former Cincinnati Reds shortstop. The professional league seeks to draw on the sporting rivalry between India and Pakistan with two of its teams, as the Mumbai Cobras on Friday will face the Karachi Monarchs. Each team has Indian and Pakistani players, and the league is seeking to break into a broadcast market saturated by soccer and cricket in this part of the world. And while it has no big-name players from Major League Baseball, the league has created some novel rules of its own to speed up games, put more runs on the board, and potentially generate interest among U.S. fans now that the regular season there has ended.

"People here got to learn the rules anyway, so we're like, if we get to start at a blank canvas then why don't we introduce some new rules that we believe are going to excite them from the onset," Baseball United CEO and co-owner Kash Shaikh told The Associated Press.

The dune of dreams

All the games in the season, which ends mid-December, will be played at Baseball United's stadium out in the reaches of Dubai's desert in an area known as Ud al-Bayda, some 30 kilometers (18 miles) from the Burj Khalifa, the world's tallest building. The stadium sits alongside The Sevens Stadium, which hosts an annual rugby sevens tournament known for hard-partying fans drinking alcohol and wearing costumes. As journalists met Baseball United officials on Thursday, two fighter jets and a military cargo plane came in for landings at the nearby Al Minhad Air Base, flying over a landfill.

The field seats some 3,000 fans and will host games mostly at night, though the weather is starting to cool in the Emirates as the season changes. But environmental concerns have been kept in mind: Baseball United decided to go for an artificial field to avoid the challenge of using more than 45 million liters (12 million gallons) of water a year to maintain a natural grass field, said John P. Miedreich, a co-founder and executive vice president at the league. "We had to airlift clay in from the United States, airlift clay from Pakistan for the pitcher's mound," he added.

There will be four teams competing in the inaugural season. Joining the Cobras and the Monarchs will be the Arabia Wolves, Dubai's team, and the Mideast Falcons of Abu Dhabi.

There are changes to the traditional game in Baseball United, putting a different spin on it much as the Twenty20 format drastically sped up traditional cricket. The league has introduced a "golden moneyball," which gives managers three chances in a game to use an at bat to double the runs scored off a home run. Teams can call in designated runners three times during a game. And if a game is tied after nine innings, the teams face off in a home run derby to decide the winner. "It's entertainment, and it's exciting, and it's helping get new fans and young fans more engaged in the game," Shaikh said.

America's pastime has limited success

Baseball in the Middle East has had mixed success, to put a positive spin on the ball.
A group of American supporters launched the professional Israel Baseball League in 2007, made up almost entirely of foreign players. However, it folded after just one season. Americans spread the game in prerevolution Iran, Saudi Arabia and the UAE over the decades, though it has been dwarfed by soccer. Saudi Arabia, through the Americans at its oil company Aramco, has sent teams to the Little League World Series in the past. But soccer remains a favorite in the Mideast, which hosted the 2022 FIFA World Cup in Qatar. Then there's cricket, which remains a passion in both India and Pakistan. The International Cricket Council, the world's governing body for the sport, has its headquarters in Dubai near the city's cricket stadium.

Organizers know they have their work cut out for them. At one point during a news conference Thursday, they went over baseball basics: home runs, organ music and where center field sits.

"The most important part is the experience for fans to come out, eat a hot dog, see mascots running around, to see the baseball traditions that we all grew up with back home in the U.S. and start to fall in love with the game, because we know that once they start to learn those, they will become big fans," Shaikh said.

Jon Gambrell, Associated Press
AI was supposed to make our lives easier: automating tedious tasks, streamlining communication, and freeing up time for creative thinking. But what if the very tool meant to increase efficiency is fueling cognitive decline and burnout instead?

The Workflation Effect

Since AI entered the workplace, managers expect teams to produce more work in less time. They see tasks completed in two hours instead of two weeks, without understanding the process behind it. Yet AI still makes too many mistakes for high-quality output, forcing workers to adjust, edit, and review everything it produces, creating workflation, which adds more work to already overloaded plates. AI has accelerated expectations because managers know that teams using it can work faster, but quality work still requires time, focus, and expertise.

"We are seeing that it can lead to a lot of churn and work slop, poor-quality output, in particular when it's being used by junior team members," says Carey Bentley, CEO of Lifehack Method, a productivity coaching company. When team members lack the expertise to audit AI output, they take it at face value, which can lead to multimillion-dollar errors.

The percentage of companies using AI in at least one business function is rising every year, and one of the most popular uses is in marketing. However, many brands flood social media with formulaic, off-putting content that prioritizes speed over emotional connection, sacrificing creativity and differentiation. The consequences of using AI without proper quality review aren't just about brand reputation or lost deals; they also add stress while eroding workers' creativity, problem-solving abilities, and critical thinking.

Cognitive Decline and Burnout with AI

Research from MIT shows that relying on AI tools to think for us, rather than with us, leads to cognitive offloading: outsourcing mental effort in ways that gradually weaken memory, problem-solving, and critical thinking. The study found that participants using GPT-based tools showed measurable declines in these areas compared to control groups. Just as GPS impairs spatial memory, relying on AI for thinking may weaken our capacity for original thought, because the brain needs practice to maintain cognitive functions.

When we layer that cognitive debt on top of the relentless pace that AI enables, we aren't just doing more work; we're doing it with diminished mental capacity. Workers are reviewing AI outputs without having the time to thoroughly evaluate the quality, making decisions without space for reflection, and producing content without engaging the creative processes that generate real insight.

In the long term, the overwhelm leads to small mistakes, such as forgetting to add a document, not finishing an edit, or missing a deadline; these are the first signs of burnout. "It really starts small, and that's why it gets missed so often," explains Naomi Carmen, a business consultant specializing in leadership and company culture. These minor errors aren't signs of laziness, distraction, or disengagement, and when managers respond with performance reviews instead of support, the cycle only accelerates.

The Training Gap

Most people using AI haven't been adequately trained, and they confuse its confidence for truth. Neuroscientist David Eagleman refers to this as the "intelligence echo illusion": the perception that AI is intelligent because it responds with apparent insight, when in reality it merely reflects stored human knowledge.
Without understanding how AI works, leadership develops unrealistic expectations that cascade through organizations, requiring faster and higher-quality work that's nearly impossible to sustain. "Expecting your team to use AI without proper training is like handing them a Ferrari and expecting them to win races right away," Bentley explains. Carmen adds, "The input is going to directly affect the output."

Warning Signs AI Is Fueling Burnout

According to a 2024 study by The Upwork Research Institute, 77% of employees believe their workload has increased since they started using AI. Key warning signs include:

Errors and delays: mistakes slip through because workers rush to meet unrealistic deadlines.

Not feeling time savings: employees work harder than ever despite using "time-saving" tools.

Always-on culture: leadership sets expectations at AI speed, resulting in an always-on culture that multiplies workload and stress.

How to Use AI Without Burning People Out

The solution isn't abandoning AI, but implementing it thoughtfully. Here are four ways to do it:

Proper training: hire experts to audit existing workflows and provide recommendations, then show team members how to produce high-quality output.

Clear goals: connect AI use to specific KPIs instead of chasing trends. Companies should remain rooted in their core mission and values, rather than adopting every new AI tool.

Treat AI as a low-level assistant: use it for research, initial drafts, and data organization, but keep creative problem-solving and critical thinking in the hands of humans.

Support your team: life events, stress, and fatigue mean employees can't deliver at a constant, AI-driven pace. Leadership should keep the human element at the center of decisions, recognizing that policies and expectations must account for the complexity of real lives, not just the output.

Moving Forward with AI

In an era defined by AI, sustainable performance comes from empathy, connection, and space for creativity. A healthy workplace, where employees can rest, express themselves, and even have fun, boosts engagement, problem-solving, innovation, and efficiency. AI can support this, but only when implemented thoughtfully, with the human element at its core.
Americans have done a shoddy job of teaching reading and math to the majority of our students. Our scores, when compared to other nations, most with fewer resources, are plummeting. As a scientist, I try to stay solution oriented. To ensure that we bend the curve and change the future, we must first concede that we have failed our students.

We're at the dawn of a new educational era: the age of artificial intelligence. And there is no way we will get it right in this new era if we are still struggling with the previous one. As a congenital optimist, I am hopeful that when it comes to teaching AI (I mean this in its broadest sense, well beyond the practice of coding) we will learn from our mistakes and get it right this time. My genetic positivity is reinforced by two recent developments that are important milestones in building a national consensus for assuring that we create generational AI skills and wisdom.

The White House's executive order

Trump's executive order speaks directly to the existential need for our country to cultivate the skills and understanding to use and create the next generation of AI technology. Upon its issuance, I wrote a column commending its intention. But I also noted, speaking as president and CEO of the Center of Science and Industry (COSI), a board member of the National Academies of Sciences, and a lifelong STEM advocate, that the EO was insufficient: We cannot teach AI without also teaching critical thinking, ethics, and wisdom.

Since then, I was asked to participate in the White House Task Force on AI Education, which is guiding the implementation of the EO and is also establishing public-private partnerships with leading stakeholders in AI. COSI is part of this group, and we have signed on to President Trump's pledge to invest in AI education. I recently attended a meeting of the White House Task Force on AI Education, where the inextricable link between national security, economic prosperity, and AI proficiency was the dominant theme. I would summarize it as: We need to win, and we must be the global leader in AI capabilities to keep America on top. Yes, but how?

The state of Ohio creates a new state of tomorrow

After the meeting, I returned to Ohio, which has joined the AI conversation in a big way. Ohio is the first state to require that every school district adopt formal policies to govern AI use in schools. To put it simply, the EO urges the must: that AI education needs to be a priority. The Ohio regulation, by contrast, insists on the how. It proceeds from the recognition that our schools will be teaching the technology of the future, and demands that the complex nuances of how be determined and agreed to. Chris Woolard, the chief integration officer at the Ohio Department of Education, described the challenge as creating new guardrails that include ground rules for privacy, data quality, ethical use, and academic honesty. And, importantly: What are the critical thinking skills that are needed for students?

Beyond just governed, to taught

I commend what Ohio has done. But there is a long way to go. To build foundational pedagogical techniques for the teaching of AI, with no baseline, no historical data, and no trials, is far from trivial. In fact, it is enormously complicated, as we have seen from our inability to effectively teach STEM.
Ohio's regulatory framework, which other states should follow, will involve the creation of new practices and metrics, and it will require vast sensitivity and nuance, given that every single aspect of education can be weaponized in our undeniably fraught world of culture wars. But we can learn from our mistakes. For example, so-called whole language instruction, as opposed to phonics, is ineffective for the 20% of children with dyslexia. We need to bring all children into the future, and to do that we need to assure that AI literacy becomes a core marker of educational success. Interestingly enough, AI can help with this.

Teaching AI is like developing AI. Sort of

The rapid evolution of AI comes from the process of training the model; it is how large language models (LLMs) learn and improve in an iterative and focused manner. But it is also a black box in many ways, which cannot be the case with how we teach AI in our schools. Only transparency and continual improvement will ensure that our K-12 students develop the skills necessary to succeed in a changing workforce.

None of this will be easy. AI represents a profound turning point; the EO is broad and conceptual, while our Constitution assigns the responsibility of education to the states. But nothing can be more important, and I call upon educators everywhere to come together and work together. What makes their mission even more challenging is that AI is changing all the time, and with such speed. So those teaching it must also be capable of commensurate change. But educational standards tend to be fixed. It is hard enough to set them, let alone to build in agility and responsiveness.

I look forward to working with educators and continuing to participate in the AI Task Force to help develop standards and guardrails that are as responsive and dynamic as artificial intelligence itself. Indeed, the time is now.