By Barrie Einarson | Trade Ideas

If you've ever been stopped out of a trade and felt like you missed the whole move, this one's for you. Today's edition of What Makes This Trade Great? features RXT, a textbook example of why the re-entry is one of the most powerful tools in a trader's… Source
Indie publisher and developer Finji has accused TikTok of using generative AI to alter the ads for its games on the platform without its knowledge or permission. Finji, which published indie darlings like Night in the Woods and Tunic, said it only became aware of the seemingly modified ads after being alerted to them by followers of its official TikTok account.

As reported by IGN, Finji alleges that one ad that went out on the platform was modified so it displayed a "racist, sexualized" representation of a character from one of its games. While the publisher does advertise on TikTok, it told IGN that it has AI "turned all the way off," but after CEO and co-founder Rebekah Saltsman received screenshots of the ads in question from fans, she approached TikTok to investigate.

A number of Finji ads have appeared on TikTok, some featuring montages of the company's games and others that are game-specific, like this one for Usual June. According to IGN, the offending AI-modified ads (which are still posted as if they're coming directly from Finji) appeared as slideshows. Some images don't appear to be that different from the source, but one possibly AI-generated example seen by IGN depicts Usual June's titular protagonist with "a bikini bottom, impossibly large hips and thighs, and boots that rise up over her knees." Needless to say (and obvious from the official screenshot used as the lead image for this article), this is not how the character appears in the game.

As for TikTok's response, IGN printed a number of the platform's replies to Finji's complaints, in which it initially said, in part, that it could find no evidence that "AI-generated assets or slideshow formats are being used." This was despite Finji sending the customer support page a screenshot of the clearly edited image mentioned above. In a subsequent exchange, TikTok appeared to acknowledge the evidence and assured the publisher it was "no longer disputing whether this occurred." It added that it had escalated the issue internally and was investigating it thoroughly.

TikTok does have a "Smart Creative" option on its ad platform, which essentially uses generative AI to modify user-created ads so that multiple versions are pushed out, with the versions its audience responds to most positively used more often. Another option is the Automate Creative features, which use AI to automatically optimize things like music, audio effects and general visual "quality" to "enhance the user's viewing experience." Saltsman showed IGN evidence that Finji has both of these options turned off, which a TikTok agent also confirmed for the ad in question.

After a number of increasingly frustrated exchanges in which TikTok eventually admitted to Saltsman that the ad "raises significant issues, including the unauthorized use of AI, the sexualization and misrepresentation of your characters, and the resulting commercial and reputational harm to your studio," the Finji co-founder was offered something of an explanation. TikTok said that Finji's campaign used a "catalog ads format" designed to "demonstrate the performance benefits of combining carousel and video assets in Sales campaigns." It said that this "initiative" helped advertisers "achieve better results with less effort," but did not address the harmful content directly. Finji also seemingly opted into this ad format without knowing it had done so. TikTok declined to comment on the matter when approached by IGN.
Saltsman was told the issue could not be escalated any higher, and the exchange had not been resolved at the time IGN published its report. In a statement to the outlet, Saltsman said she was "a bit shocked by TikTok's complete lack of appropriate response to the mess they made." She went on to say that she expected both an apology and clear reassurance that a similar issue would not reoccur, but was "obviously not holding my breath for any of the above."

This article originally appeared on Engadget at https://www.engadget.com/gaming/tunic-publisher-claims-tiktok-ran-racist-sexist-ai-ads-for-one-of-its-games-without-its-knowledge-185303395.html?src=rss
OpenAI is reportedly hard at work developing a series of AI-powered devices, including smart glasses, a smart speaker and a smart lamp. According to reporting by The Information, the AI company has a team of over 200 employees dedicated to the project.

The first product scheduled for release is reported to be a smart speaker that would include a camera, allowing it to better absorb information about its users and surroundings. According to a person familiar with the project, this would extend to identifying objects on a nearby table, as well as conversations being held in the vicinity of the speaker. The camera would also support a facial recognition feature similar to Apple's Face ID that would enable users to authenticate purchases. The speaker is expected to retail for between $200 and $300 and ship in early 2027 at the earliest. Reporting indicates the company's AI-powered smart glasses, a space currently dominated by Meta, would not arrive until 2028. As for the smart lamp, while prototypes have been made, it's unclear whether it will actually be brought to market.

Last year OpenAI acquired ex-Apple designer Jony Ive's startup io Products for $6.5 billion. Ive is considered largely responsible for Apple's design aesthetic, having been involved in designing just about every major Apple device from the time he joined the company in the '90s until his departure in 2019. The acquisition of his AI-focused design firm sets the stage for Ive to lead hardware product development for OpenAI. Since the partnership was forged, there have already been delays due to technical issues, privacy concerns and logistical questions about the computing power necessary to run a mass-produced AI device. Regardless of the behemoths behind the project, the speaker and other future products may still face consumers reluctant to buy a device that is always listening to and watching them.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-reportedly-release-an-ai-powered-smart-speaker-in-2027-173344866.html?src=rss
A recent Amazon Web Services (AWS) outage that lasted 13 hours was reportedly caused by one of the company's own AI tools, according to reporting by the Financial Times. The incident happened in December after engineers deployed the Kiro AI coding tool to make certain changes, say four people familiar with the matter. Kiro is an agentic tool, meaning it can take autonomous actions on behalf of users. In this case, the bot reportedly determined that it needed to "delete and recreate the environment," which allegedly led to the lengthy outage that primarily impacted China.

Amazon says it was merely a "coincidence that AI tools were involved" and that "the same issue could occur with any developer tool or manual action." The company blamed the outage on "user error, not AI error." It said that by default the Kiro tool requests authorization before taking any action, but that the staffer involved in the December incident had "broader permissions than expected," calling it "a user access control issue, not an AI autonomy issue."

Multiple Amazon employees spoke to the Financial Times and noted that this was "at least" the second occasion in recent months in which the company's AI tools were at the center of a service disruption. "The outages were small but entirely foreseeable," said one senior AWS employee.

"A builder shares why their workflow finally clicked. Instead of jumping straight to code, the IDE pushed them to start with specs. Clear requirements. Acceptance criteria. Traceable tasks. Their takeaway: Think first. Code later. Get the full breakdown here pic.twitter.com/eD7ZrEdEn5" Kiro (@kirodotdev), January 14, 2026

The company launched Kiro in July and has since pushed employees to use the tool. Leadership set an 80 percent weekly use goal and has been closely tracking adoption rates. Amazon also sells access to the agentic tool for a monthly subscription fee. These recent outages follow a more serious event from October, in which a 15-hour AWS outage disrupted services like Alexa, Snapchat, Fortnite and Venmo, among others. The company blamed a bug in its automation software for that one.

This article originally appeared on Engadget at https://www.engadget.com/ai/13-hour-aws-outage-reportedly-caused-by-amazons-own-ai-tools-170930190.html?src=rss
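The access-control detail in Amazon's explanation, that Kiro prompts for authorization by default but an over-broad permission grant can let a destructive step run anyway, maps onto a common pattern in agentic tooling. The sketch below is a minimal, hypothetical Python illustration of that pattern; the function and action names are invented for the example and are not drawn from Kiro or any Amazon system.

```python
# Illustrative only: a toy approval gate for an agent's destructive actions.
# All names here are hypothetical stand-ins, not Kiro internals.

DESTRUCTIVE_ACTIONS = {"delete_environment", "recreate_environment"}

def require_approval(action: str, user_can_auto_approve: bool) -> bool:
    """Return True if the action may proceed.

    Destructive actions normally require explicit human confirmation, but a
    user whose role grants auto-approval skips the prompt entirely, which is
    the kind of access-control gap the article describes.
    """
    if action not in DESTRUCTIVE_ACTIONS:
        return True
    if user_can_auto_approve:
        # Broad permissions: the gate is silently bypassed.
        return True
    answer = input(f"Agent wants to run '{action}'. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_plan(plan: list[str], user_can_auto_approve: bool) -> None:
    """Execute a plan step by step, stopping at the first blocked action."""
    for action in plan:
        if require_approval(action, user_can_auto_approve):
            print(f"executing {action}")
        else:
            print(f"blocked {action}")
            break

if __name__ == "__main__":
    # With narrowly scoped permissions, the deletion step stops at the prompt;
    # with auto-approval it would run straight through.
    run_agent_plan(
        ["lint_code", "delete_environment", "recreate_environment"],
        user_can_auto_approve=False,
    )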