NASA says it was able to use the James Webb telescope to capture images of planet-forming disks around ancient stars that challenge theoretical models of how planets can form. The images support earlier findings from the Hubble telescope that hadn't been confirmed until now. The new, highly detailed Webb images were captured from the Small Magellanic Cloud, a dwarf galaxy neighboring our home, the Milky Way. The Webb telescope was specifically focused on a cluster called NGC 346, which NASA says is a good proxy for similar conditions in the early, distant universe, and which lacks the heavier elements that have traditionally been connected to planet formation. Webb was able to capture spectra of light suggesting protoplanetary disks are still hanging around those stars, going against previous expectations that they would have blown away in a few million years.

[Image credit: NASA, ESA, CSA, STScI, Olivia C. Jones (UK ATC), Guido De Marchi (ESTEC), Margaret Meixner (USRA)]

Hubble observations of NGC 346 from the mid-2000s revealed many stars about 20 to 30 million years old that seemed to still have planet-forming disks, NASA writes. Without more detailed evidence, that idea was controversial. The Webb telescope was able to fill in those details, suggesting the disks in our neighboring galaxy have a much longer period of time to collect the dust and gas that forms the basis of a new planet.

As to why those disks are able to persist in the first place, NASA says researchers have two possible theories. One is that the radiation pressure expelled from stars in NGC 346 simply takes longer to dissipate planet-forming disks. The other is that the larger gas cloud that's necessary to form a Sun-like star in an environment with fewer heavy elements would naturally produce larger disks that take longer to fade away.
Whichever theory proves correct, the new images are beautiful evidence that we still don't have a full grasp of how planets are formed.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-new-webb-telescope-images-support-previously-controversial-findings-about-how-planets-form-213312055.html?src=rss
Category:
Marketing and Advertising
After a federal court last week denied TikTok's request to delay a law that could ban the app in the United States, the company is now turning to the Supreme Court in an effort to buy time. The social media company has asked the court to temporarily block the law, currently set to take effect January 19, 2025, it said in a brief statement. "The Supreme Court has an established record of upholding Americans' right to free speech," TikTok wrote in a post on X. "Today, we are asking the Court to do what it has traditionally done in free speech cases: apply the most rigorous scrutiny to speech bans and conclude that it violates the First Amendment."

The company, which has argued that the law is unconstitutional, lost its initial legal challenge of the law earlier this month. The company then requested a delay of the law's implementation, saying that President-elect Donald Trump had said he would save TikTok. That request was denied on Friday. In its filing with the Supreme Court, TikTok again referenced Trump's comments. "It would not be in the interest of anyone (not the parties, the public, or the courts) for the Act's ban on TikTok to take effect only for the new Administration to halt its enforcement hours, days, or even weeks later," it wrote. Trump's inauguration is one day after a ban of the app would take effect.

TikTok is now hoping the Supreme Court will intervene to suspend the law in order to give the company time to make its final legal appeal. Otherwise, app stores and Internet service providers will be forced to begin blocking TikTok next month, making the app inaccessible to its 170 million US users.

Update December 16, 2024, 1:30 PM PT: Updated with details from TikTok's court filing.

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-asks-the-supreme-court-to-delay-upcoming-ban-211510659.html?src=rss
Google has yet another AI tool to add to the pile. Whisk is a Google Labs image generator that lets you use an existing image as your prompt. But its output only captures your starter image's essence rather than recreating it with new details. So it's better for brainstorming and rapid-fire visualizations than for edits of the source image.

The company describes Whisk as a new type of creative tool. The input screen starts with a bare-bones interface with inputs for style and subject. This simple introductory interface only lets you choose from three predefined styles: sticker, enamel pin and plushie. I suspect Google found those three allowed for the kind of rough-outline outputs the experimental tool is most ideal for in its current form. As you can see in the image above, it produced a solid image of a Wilford Brimley plushie. (Google's terms forbid pictures of celebrities, but Wilford slipped through the gates, Quaker Oats in tow, without alerting the guards.)

Whisk also includes a more advanced editor (found by clicking "Start from scratch" from the main screen). In this mode, you can use text or a source image in three categories: subject, scene and style. There's also an input bar to add more text for finishing touches. However, in its current form, the advanced controls didn't produce results that looked anything like my queries. For example, check out my attempt to generate the late Mr. Brimley in a lightbox scene in the style of a walrus plushie image I found online:

[Image: Google / Screenshot by Will Shanklin for Engadget]

Whisk spit out what looks like a vaguely Wilford Brimley-esque actor eating oatmeal inside a lightbox frame. As far as I can tell, that dude is not a plushie. So it's clear why Google recommends using the tool more for rapid visual exploration and less for production-ready content. Google acknowledges that Whisk will only draw from a few key characteristics of your source image.
"For example, the generated subject might have a different height, weight, hairstyle or skin tone," the company warns. To understand why, look no further than Google's description of how Whisk works under the hood. It uses the Gemini language model to write a detailed caption of the source image you upload. It then feeds that description into the Imagen 3 image generator. So the result is an image based on Gemini's words about your image, not the source image itself.

Whisk is only available in the US, at least for now. You can try it at the project's Google Labs site.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-new-ai-tool-whisk-uses-images-as-prompts-210105371.html?src=rss
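Google hasn't published Whisk's internals beyond the description above, but the caption-then-generate flow it describes can be sketched in a few lines. Everything here is a stand-in: `caption_image` and `generate_image` are hypothetical stubs, not Google's API, and the point is only to show why the generator never sees the source pixels.

```python
# Minimal sketch of the two-stage flow Google describes for Whisk.
# Function names and data shapes are illustrative stand-ins, not a real API.

def caption_image(image: dict) -> str:
    """Stand-in for the captioning step (Gemini, per Google's description).
    Reduces the image to a few key characteristics, discarding pixel detail."""
    return f"a {image['style']} of {image['subject']}"

def generate_image(prompt: str) -> str:
    """Stand-in for the generation step (Imagen 3, per Google's description).
    Sees only the text prompt, never the original image."""
    return f"image rendered from prompt: '{prompt}'"

def whisk(image: dict) -> str:
    # The output depends solely on the caption, which is why details like
    # height, hairstyle or skin tone can drift from the source image.
    return generate_image(caption_image(image))

# Two visually different sources that caption identically produce the same
# output: the pixel-level differences are lost in the text bottleneck.
a = {"subject": "walrus", "style": "plushie", "pixels": "photo A"}
b = {"subject": "walrus", "style": "plushie", "pixels": "photo B"}
print(whisk(a) == whisk(b))  # True
```

The text bottleneck in the middle is the whole story: any attribute the caption omits is invented fresh by the generator, which matches the drifting details Google warns about.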