Feed engadget Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics

Link https://www.engadget.com/
Feed https://www.engadget.com/rss.xml
Copyright Yahoo 2025
Updated 2025-09-08 14:17
AMD unveils Radeon RX 9060 XT at Computex 2025
AMD has unveiled its 9060 XT GPU at Computex 2025. The midrange GPU will be the clear competitor to Nvidia's 5060 Ti and goes toe-to-toe with it on almost every spec. Built on AMD's 4-nanometer RDNA 4 silicon, the 9060 XT will pack 32 compute units, along with 64 dedicated AI accelerators and 32 ray-tracing cores. Notably, the RX 9060 XT will ship in 8GB and 16GB GDDR6 versions, whereas Nvidia's RTX 5060 Ti uses faster 28 Gb/s GDDR7, delivering roughly 40 percent more bandwidth (448 GB/s vs. approximately 322 GB/s) on the same 128-bit bus. We'll have to wait for some side-by-side performance comparisons before drawing any strong conclusions from those specs. AMD has listed the 9060 XT's boost clock at speeds up to 3.13 GHz. The GPU boasts 821 TOPS for AI workloads and will draw a modest 150 to 182 watts from the board. The card will connect via PCIe 5.0 x16 and supports the now-standard DisplayPort 2.1a and HDMI 2.1b. Based on these initial specs, the 9060 XT should be a solid entry for games running at 1080p and a decent option for those at 1440p. Those wishing to play at 4K should still opt for the Radeon RX 9070 or 9070 XT. Pricing and exact release timelines have not yet been announced. This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/amd-unveils-radeon-rx-9060-xt-at-computex-2025-030021776.html?src=rss
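The roughly 40 percent bandwidth gap cited above follows directly from per-pin data rate and bus width. Here's a quick sketch of that arithmetic; the 20 Gb/s GDDR6 speed assumed for the 9060 XT is an illustrative guess, since AMD hasn't confirmed the card's exact memory clock:

```python
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gb/s) times bus width (bits), divided by 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# RTX 5060 Ti: 28 Gb/s GDDR7 on a 128-bit bus (figures quoted above)
gddr7 = memory_bandwidth_gbs(28, 128)  # 448.0 GB/s
# RX 9060 XT: assumed 20 Gb/s GDDR6 on the same 128-bit bus
gddr6 = memory_bandwidth_gbs(20, 128)  # 320.0 GB/s
print(f"{gddr7:.0f} GB/s vs {gddr6:.0f} GB/s: {(gddr7 / gddr6 - 1) * 100:.0f}% more")
```

At those numbers the GDDR7 card lands at 448 GB/s against roughly 320 GB/s, consistent with the 40 percent figure quoted above.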
AMD's Ryzen Threadripper 9000 chips have up to 96 cores, just like the last bunch
Not many people need a 96-core processor. But for creative professionals, engineers and AI developers who do, AMD has a new batch of chips on display at Computex 2025. The company announced its new Ryzen Threadripper 9000 series on Tuesday, with bonkers specs to power pro-level workstations and ultra-high-end prosumer desktops. At the top of the line in the series is the AMD Threadripper Pro 9995WX. This chip has a staggering 96 cores and 192 threads, matching the highest-end model from 2023's Threadripper Pro 7000 line. But the new 9000 series tops out with a higher maximum boost speed of 5.4GHz, up from 5.1GHz in the flagship 7000 Pro chip. AMD's new batch includes six processors in the Threadripper Pro WX series, designed for pro-level workstations. (In addition to the 96-core 9995WX, options include 12-, 16-, 24-, 32- and 64-core models.) Moving past the Pro series, the standard Threadripper 9000 line for high-end desktops maxes out with the 64-core, 128-thread 9980X. AMD hasn't yet announced pricing or specific retail models carrying the chips. But the 7000 Pro series offers a hint: the top-shelf model from that line costs a cool $10,000. (Yep, that's for the processor alone.) So, unless your work involves extremely demanding AI development, 3D modeling or ultra-high-res video editing, you can slowly step away and make your way back to the consumer aisle. This article originally appeared on Engadget at https://www.engadget.com/computing/amds-ryzen-threadripper-9000-chips-have-up-to-96-cores-just-like-the-last-bunch-030003537.html?src=rss
Fortnite is finally back in the US App Store
Fortnite is back in the US App Store. Epic CEO Tim Sweeney announced in late April that he intended to relaunch the game, following a court order that demanded Apple stop collecting a 27 percent fee on app transactions that happen outside of its in-app purchase system. Apple finally amending its rules to remove that additional commission is why Epic moved forward with the relaunch. The origins of this conflict can be traced all the way back to 2020, when Epic added its own method for collecting payments for in-game items in Fortnite and encouraged players to circumvent Apple's system. Fortnite was removed from the App Store (and the Google Play Store, for that matter), Epic sued and the rest is history. Epic didn't win its entire case against Apple, but it did secure a permanent injunction allowing developers to include in-app text that makes users aware of payment options other than the App Store. According to the latest court order, Apple allowed that text but was still demanding developers pay it a fee for those non-App Store transactions. That prompted the judge overseeing the companies' case to demand Apple stop and remove even more obstacles from the payment process.
Google XR glasses hands-on: Lightweight but with a limited field of view
One of the biggest reveals of Google I/O was that the company is officially back in the mixed reality game with its own prototype XR smart glasses. It's been years since we've seen anything substantial from the search giant on the AR/VR/XR front, but with a swath of hardware partners to go with its XR platform, it seems that's finally changing. Following the keynote, Google gave me a very short demo of the prototype device we saw onstage. I only got a few minutes with the device, so my impressions are unfortunately very limited, but I was immediately impressed with how light the glasses were compared with Meta's Orion prototype and Snap's augmented reality Spectacles. While both of those are quite chunky, Google's prototype device was lightweight and felt much more like a normal pair of glasses. The frames were a bit thicker than what I typically wear, but not by a whole lot. At the same time, there are some notable differences between Google's XR glasses and what we've seen from Meta and Snap. Google's device only has a display on one side - the right lens, visible in the image at the top of this article - so the visuals are more "glanceable" than fully immersive. I noted during Google's demo onstage at I/O that the field of view looked narrow, and I can confirm that it feels much more limited than even Snap's 46-degree field of view. (Google declined to share specifics on how wide the field of view is on its prototype.) Instead, the display felt a bit similar to the front display of a foldable phone. You can use it to get a quick look at the time and notifications, plus small snippets of info from your apps, like what music you're listening to. Gemini is meant to play a major role in the Android XR ecosystem, and Google walked me through a few demos of the AI assistant working on the smart glasses. I could look at a display of books or some art on the wall and ask Gemini questions about what I was looking at.
It felt very similar to multimodal capabilities we've seen with Project Astra and elsewhere. There were some bugs, though, even in the carefully orchestrated demo. In one instance, Gemini started to tell me about what I was looking at before I had even finished my question, which was followed by an awkward moment where we both paused and interrupted each other. One of the more interesting use cases Google was showing was Google Maps in the glasses. You can get a heads-up view of your next turn, much like Google's augmented reality walking directions, and look down to see a little section of map on the floor. However, when I asked Gemini how long it would take to drive to San Francisco from my location, it wasn't able to provide an answer. (It actually said something like "tool output," and my demo ended very quickly after.) I also really liked how Google took advantage of the glasses' onboard camera. When I snapped a photo, a preview of the image immediately popped up on the display so I could see how it turned out. I really appreciated this because framing photos from a camera on smart glasses is inherently unintuitive; the final image can vary so much depending on where the lens is placed. I've often wished for a feature like this when taking photos with my Ray-Ban Meta Smart Glasses, so it was cool to see it in action. I honestly still have a lot of questions about Google's vision for XR and what eventual Gemini-powered smart glasses will be capable of. As with so many other mixed reality demos I've seen, it's obviously still very early days. Google was careful to emphasize that this is prototype hardware meant to show off what Android XR is capable of, not a device it's planning on selling anytime soon. So any smart glasses we get from Google or its hardware partners could look very different. What my few minutes with Android XR did show, though, was how Google is thinking about bringing AI and mixed reality together.
It's not so different from Meta, which sees smart glasses as key to long-term adoption of its AI assistant too. But now that Gemini is coming to just about every Google product that exists, the company has a very solid foundation to actually accomplish this. This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-xr-glasses-hands-on-lightweight-but-with-a-limited-field-of-view-213940554.html?src=rss
Solar trade association warns of 'devastating energy shortages' if incentives are cut
The Solar Energy Industries Association released an assessment of how the budget reconciliation bill currently under review in Congress would negatively impact the economy. The legislation cuts incentives around solar power investment and adoption, such as the Section 25D residential tax credit. The group's analysis found that the bill, as it stands, would lead to the loss of nearly 300,000 current and future jobs in the US. It also said removal of incentives could mean a loss of $220 billion in investment in the sector by 2030, and it pointed to a future energy shortage, claiming that solar was on course to be responsible for about 73 percent of the 206.5 GW of new energy capacity needed in the country by 2030. "Passing this bill would create a catastrophic energy shortfall, cede AI and tech leadership to China, and damage some of the most vital sectors of the U.S. economy," SEIA President and CEO Abigail Ross Hopper said. It's the type of reaction we expect to see when an industry is under threat from federal action. It's also the type of researched data that doesn't seem to have much influence on the current administration, particularly when it comes to the environment and sustainability. This article originally appeared on Engadget at https://www.engadget.com/science/solar-trade-association-warns-of-devastating-energy-shortages-if-incentives-are-cut-214607526.html?src=rss
Google demos Android XR glasses at I/O with live language translation
Google has dug back into its past and introduced its latest take on smart glasses during I/O 2025. Glasses with Android XR bring Gemini AI to smart glasses thanks to an expanded partnership between Google and Samsung. These smart glasses can sync with a smartphone to access apps, and they're equipped with speakers and an optional in-lens display for privately viewing information. And for those who remember the less-than-stylish old Google Glass frames, this iteration seems more focused on real-world wearability and style. Google is also working with Gentle Monster and Warby Parker as inaugural partners for providing the frames. In an indicator of how seriously Google is taking this project, the tech giant is committing up to $150 million as part of the Warby Parker deal. Half of that is for product development and the other half is for potential equity investment in Warby Parker. The highlight of the I/O presentation was an attempt at live translation. Shahram Izadi and Nishtha Bhatia spoke Farsi and Hindi to each other as the XR frames provided real-time translation into English. The demo fell victim to the curse of AI misbehaving during a live show, but there was a brief moment where each of their glasses did successfully work as hoped. In addition to that demo, Bhatia also showcased how the Gemini assistant could work with the XR glasses, asking it questions about images she was seeing backstage at the theater and calling up information about the cafe where she got coffee before the show. Update, May 20, 2025, 5:14PM ET: Added financial details about the Warby Parker partnership. This article originally appeared on Engadget at https://www.engadget.com/wearables/google-demos-android-xr-glasses-at-io-live-translation-191510280.html?src=rss
Telegram CEO Pavel Durov is banned from leaving France without permission following his arrest
The French government has forbidden Telegram CEO Pavel Durov from leaving the country without official authorization, according to a report from Politico. Durov was arrested in France in August 2024 and later indicted for being complicit in illegal activity that occurs on Telegram, like money laundering and the distribution of CSAM (child sexual abuse material). Durov was attempting to travel to the US for "negotiations with investment funds," Politico writes, something that French officials decided "did not appear imperative or justified." In March, Durov received permission to travel to the United Arab Emirates, where he maintains citizenship. Following Durov's arrest, Telegram shared that it abided by EU laws, including the Digital Services Act, and that "its moderation is within industry standards and constantly improving." As evidence of that constant improvement, Telegram decided in September 2024 that it would provide user IP addresses and phone numbers in response to legal requests, something it originally made a point of avoiding. The messaging platform later partnered with the Internet Watch Foundation in December 2024 to use the organization's tools to block links to CSAM in Telegram. Both moves could be seen as attempts to appease authorities who might want the messaging platform to answer for the criminal activity it's seemingly enabled. This article originally appeared on Engadget at https://www.engadget.com/apps/telegram-ceo-pavel-durov-is-banned-from-leaving-france-without-permission-following-his-arrest-210401130.html?src=rss
US libraries cut ebook and audiobook lending programs following Trump executive order
In the latest episode of How to Dismantle Public Services in 12 Easy Steps, a Trump executive order targeting libraries has real-world consequences. The AP reported over the weekend that libraries across the country are cutting ebook, audiobook and other lending programs. These initiatives exploded in popularity following the pandemic, with over 660 million people globally borrowing them in 2023 - a 19 percent annual increase. The cuts and slashing of grants followed a Trump executive order issued on March 14 targeting the Institute of Museum and Library Services (IMLS). His appointee to helm the agency, Keith E. Sonderling, quickly signaled that he was there to do the president's bidding. He placed the IMLS's entire staff on administrative leave, sent termination notices to most of them, canceled grants and contracts and fired everyone on the National Museum and Library Services Board. Federal judges have temporarily blocked the administration from further gutting the IMLS. But while lawsuits from 21 states and the American Library Association make their way through the courts, the agency's federal funding remains frozen, and libraries are scrambling to adjust. If you've ever used your library to borrow an ebook or audiobook through an app like Libby or Hoopla, there's a good chance federal funding made that possible. Libraries purchase digital leases for ebooks and audiobooks from publishers, enabling them to lend titles to patrons. The leases typically cost much more than physical copies and must be renewed after a set period or number of checkouts. With library digital borrowing surging, those federal funds went a long way toward keeping the programs afloat. Mississippi, for one, has indefinitely suspended its Hoopla-based lending program. The IMLS was created in 1996 by a Republican-controlled US Congress.
The agency has an annual budget of under $300 million, with nearly half of that amount allocated to state libraries, which, in turn, help fund local libraries' digital lending programs. "The small library systems are not able to pay for the ebooks themselves," Rebecca Wendt, California's state library director, told the AP. This article originally appeared on Engadget at https://www.engadget.com/mobile/us-libraries-cut-ebook-and-audiobook-lending-programs-following-trump-executive-order-205113868.html?src=rss
An animated Clash of Clans series is coming to Netflix
The latest video games to get the TV show treatment are a pair of hugely popular mobile titles. Developer Supercell is partnering with Netflix for an animated series based on the world of its games Clash of Clans and Clash Royale. Fletcher Moules, who directed the original Clash of Clans animated videos on YouTube, will be the showrunner for the Netflix project, and Ron Weiner, who has worked on Silicon Valley, 30 Rock, Futurama and Arrested Development, will be the head writer. Clash of Clans debuted in 2012, and the casual strategy game got a deck-battler sequel in Clash Royale, which launched in 2016. According to the show announcement, the pair of games have more than 4 billion downloads and more than 180 billion gameplay hours logged by players. The Netflix show will center on the Barbarian character from this game universe as he tries to "rally a band of misfits to defend their village and navigate the comically absurd politics of war." The series is in pre-production, and no additional casting or release info has been shared at this stage. Netflix has hosted several animated shows based on video games, from Arcane to Devil May Cry. This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/an-animated-clash-of-clans-series-is-coming-to-netflix-204104822.html?src=rss
Apple's latest iPad is on sale for $50 off ahead of Memorial Day
Whether you're traveling for a vacation or just relaxing in the sunshine, a tablet is one of the easiest ways for you and your family to stay entertained while out and about during the summer. If you're looking for a new tablet, Amazon is selling the most recent Apple iPad (A16) for $50 off. It's an 11-inch model powered by the A16 chip. You can buy the 128GB tablet in any of the four available colors - silver, blue, pink or yellow - for $299. If you need more storage, you can opt for the 256GB model for $399 or the 512GB version for $595. All of these discounts are for the Wi-Fi-only models and do not include any time under the AppleCare protection plan. Apple has a bunch of different iPads for sale these days, and the A16 one is our favorite budget option from this brand. That's because although the A16 chip is notably less powerful than the M3 or M4 you'll find in higher-end tablets, this model still performs well on the basic tasks that you'd use an iPad for. This iPad has a Liquid Retina display with a resolution of 2360 x 1640. Again, it's not flashy, but it's plenty serviceable. One additional caveat for the A16 is that it can't run Apple Intelligence, so this isn't the iPad for you if you're looking to experiment with lots of AI tools. But for about $300, it's a great starter option if you want an easy way to play games, watch shows or read on a larger screen. Follow @EngadgetDeals on X for the latest tech deals and buying advice. This article originally appeared on Engadget at https://www.engadget.com/deals/apples-latest-ipad-is-on-sale-for-50-off-ahead-of-memorial-day-195749473.html?src=rss
SynthID Detector can check media to see if it was generated with Google's AI tools
Google originally launched SynthID, its digital watermark for AI-generated content, in 2023 as a way to detect whether an image was created using the company's Imagen model. Now, at Google I/O 2025, the company is introducing a public-facing tool called SynthID Detector that claims to detect those watermarks in just about anything you upload. SynthID Detector will be available as a web portal where you can upload images, video, audio and text to be scanned. Once a file is uploaded, Google claims the portal can tell you whether it contains AI-generated material and even "highlight specific portions of the content most likely to be watermarked." For audio, the tool is supposed to be able to identify the specific portion of a track that contains the watermark, too. SynthID was designed to mark content from Google's models, but Google hopes other companies will adopt the watermark for their own AI output. An open source version of SynthID is already available for text watermarking, and as part of the rollout of SynthID Detector, Google is partnering with NVIDIA to mark media its NVIDIA Cosmos model generates. SynthID Detector won't be the only tool that can spot Google's watermark, either. The company says GetReal Security will also be able to verify whether media contains SynthID. Considering the sheer number of ways Google hopes people will use AI to create images, video, text and audio, from the Audio Overviews in NotebookLM to short films made with its new Flow tool, it makes sense that it would offer a way to know if any of those things are real.
Until models from one company produce the vast majority of content, or a digital watermark reaches widespread adoption, though, a tool like SynthID Detector can only be so useful. Journalists, researchers and developers can join a waitlist to try SynthID Detector through Google's online form. This article originally appeared on Engadget at https://www.engadget.com/ai/synthid-detector-can-check-media-to-see-if-it-was-generated-with-googles-ai-tools-194002070.html?src=rss
Google is rolling out AI Mode to everyone in the US
Google has begun rolling out AI Mode to every Search user in the US. The company announced the expansion during its I/O 2025 conference. Google first began previewing AI Mode with testers in its Labs program at the start of March. Since then, it has been gradually rolling out the feature to more people, including, in recent weeks, regular Search users. At its keynote today, Google shared a number of updates coming to AI Mode as well, including new tools for shopping, the ability to compare ticket prices for you, and custom charts and graphs created for queries on finance and sports. For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab, and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5. What's more, Google plans to bring many of AI Mode's capabilities to other parts of the Search experience. "AI Mode is where we'll first bring Gemini's frontier capabilities, and it's also a glimpse of what's to come," the company wrote in a blog post published during the event. "As we get feedback, we'll graduate many features and capabilities from AI Mode right into the core search experience in AI Overviews." Looking to the future, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode. Google was among the first companies to debut the tool in December. Since then, most AI companies, including OpenAI, have gone on to offer their take on Deep Research, which you can use to prompt Gemini and other chatbots to take extra time to create a comprehensive report on a subject.
With today's announcement, Google is making the tool available in a place where more of its users are likely to encounter it. Another new feature coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with "trusted testers" at the end of last year. This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next MLB game in your city. AI Mode will compare "hundreds of potential" tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself. "This will start with event tickets, restaurant reservations and local appointments," says Google. "And we'll be working with companies like Ticketmaster, StubHub, Resy and Vagaro to create a seamless and helpful experience." AI Mode will also soon include the ability to generate custom charts and graphics tailored to your specific queries. At the same time, AI Mode will become more personalized in the near future, with Google introducing an optional feature allowing the tool to draw on your past searches. The company will also give people the option to connect their other Google apps to AI Mode, starting with Gmail, for even more granular recommendations. As mentioned above, Google is adding a suite of shopping features to AI Mode.
Engadget has a separate post dedicated to the shopping features Google announced today, but the short of it is that AI Mode will be able to narrow down products for you and complete purchases on your behalf - with your permission, of course. All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly. Update, May 20, 2025, 2:45PM ET: This story has been updated to preview in the intro some of the updates coming to AI Mode. This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-rolling-out-ai-mode-to-everyone-in-the-us-174917628.html?src=rss
Google wants $250 (!) per month for its new AI Ultra plan
Google has just announced a new $250 per month AI Ultra plan for people who want unlimited access to its most advanced machine learning features. Yes, you read that right. It means the new subscription is $50 more expensive than the already pricey ChatGPT Pro and Claude Max plans from OpenAI and Anthropic. For $250, you're getting early access to new models like Veo 3, and unlimited usage of features like Flow (the new AI film-making app the company announced today) and the compute-intensive Deep Research. In the coming weeks, Google will also roll out Deep Think to AI Ultra users, which is the new enhanced reasoning mode that is part of its Gemini 2.5 Pro model. Subscribers can also look forward to access to Project Mariner, Google's web-surfing agent, and Gemini within Chrome, plus all the usual places where you can find the chatbot, like Gmail and Docs. Google is partly justifying the high cost of AI Ultra by touting the inclusion of YouTube Premium and 30TB of cloud storage across Google Photos, Drive and Gmail. On its own, a YouTube Premium subscription would cost you $14 per month, and Google doesn't offer 30TB of cloud storage separately. The closest comparison would be Google One, which includes a Premium tier that comes with 2TB of storage for $10 per month. As another incentive to sign up for AI Ultra, Google is giving new subscribers 50 percent off their first three months. As of today, Google is also revamping its existing AI Premium plan. The subscription, which will be known as Google AI Pro moving forward, now includes the Flow app and early access to Gemini in Chrome. Google says the new benefits will come to US subscribers first, with availability in other countries to follow. This article originally appeared on Engadget at https://www.engadget.com/ai/google-wants-250-per-month-in-return-for-its-new-ai-ultra-plan-180248513.html?src=rss
Google’s new filmmaking tool Flow adds editing tools and some consistency to AI-generated video
At I/O today, Google pitched creators on a new app for "AI filmmaking": Flow. Combining Google's recent announcements and developments across AI-powered services, including Veo (video), Imagen (images) and Gemini, the company bills Flow as a storytelling aid "built with creatives." If it sounds familiar, that's because it's the advanced version of VideoFX, previously a Google Labs experiment. The company says Flow is aimed at helping storytellers explore ideas and create clips and scenes, almost like storyboards and sketches in motion. Google's generally impressive Veo 2 model seems to form the core of Flow, able to extend footage and create video that "excels at physics and realism," although I'm not sure many would agree with that. You can use Gemini's natural language skills to construct and tweak the video output, and creatives can pull in their own assets or create things with Imagen through simple text input. What's notable is the ability to integrate your creations into different clips and scenes with consistency. While the early demo footage we saw was impressive, it still had a not-so-faint AI-slop aroma. There are further filmmaking tools, too. Flow will also offer direct control over the movement of your camera, and even let you choose camera angles. You can also edit and extend shots, adding different transitions between AI-generated videos. Creating video with Veo is often a piecemeal process, but Flow will have its own asset management system to organize assets and even your prompts. These richer controls and editing abilities could make for more compelling creations in time. Let's not forget: It's been less than a year since that very weird Toys 'R' Us ad. Google buddied up with several notable filmmakers to collaborate on these still-early steps into AI video creation, including Dave Clark, Henry Daubrez and Junie Lau.
It says it offered creatives early access to the tools and folded their insights and feedback into what is now called Flow. Flow is now available to AI Pro and AI Ultra subscribers in the US, and will roll out to other countries soon. Pro users will get the Flow tools outlined so far and 100 generations each month. With the Ultra sub, you'll get unlimited generations and early access to Veo 3, with native audio generation. This article originally appeared on Engadget at https://www.engadget.com/ai/google-filmmaking-tool-flow-ai-generated-video-175212520.html?src=rss
Google Chrome previews feature to instantly change compromised passwords
Google has announced a feature for Chrome's built-in password manager that it claims will let users instantly change passwords compromised in data breaches. Google Password Manager already alerts you when your credentials have appeared in a data breach and partially automates the process of changing your password, but - until now - you still had to go through the steps manually for each of your online accounts. The Automated Password Change feature, announced at today's Google I/O keynote presentation, goes a step further. It will apparently let you generate a new password and substitute it for the old one with a single click, without ever seeing a "Create New Password" page. The feature only works on participating websites. Google is currently in talks with developers to expand the range of sites that will support one-click password changes, with plans for a full rollout later in 2025. Automated Password Change was discovered as far back as February by eagle-eyed software diggers, but was limited to the early developer-only builds made public as Chrome Canary. At that time, it was located in the "AI Innovations" settings menu, though it's not yet clear how AI figures in the process. This feature builds on password health functionality that Google has been steadily incorporating into Chrome since it released the Password Checkup extension in 2019, recognizing that compromised credentials are a common vector for cybercrime. People often reuse the same short, memorable password on multiple websites. If hackers steal a credential database from one weakly defended site and dump it on the dark web, other cybercriminals can try the leaked usernames and passwords on more secure sites - like online banks and cash apps - until one fits.
If you haven't done this, the second-best prevention is to monitor password data breaches and immediately change any password that gets leaked. If Automated Password Change works as advertised, it'll make that crisis response a lot more convenient.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-chrome-previews-feature-to-instantly-change-compromised-passwords-175051933.html?src=rss
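The breach-monitoring half of that advice can itself be automated. Have I Been Pwned's Pwned Passwords service, for example, exposes a k-anonymity API: you send only the first five hex characters of your password's SHA-1 hash to api.pwnedpasswords.com/range/&lt;prefix&gt; and scan the returned SUFFIX:COUNT lines locally, so the full password never leaves your machine. A rough sketch of the client-side logic (the canned response below is illustrative, not real API output):

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Scan the API's 'SUFFIX:COUNT' lines for our suffix; 0 means no hit."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Illustrative response body for a /range/<prefix> query (made-up data).
sample = "0018A45C4D1DEF81644B54AB7F969B88D65:1\n" + "F" * 35 + ":3"
prefix, suffix = sha1_prefix_suffix("password123")
print(prefix, breach_count(suffix, sample))
```

Chrome's own breach checks use a similar privacy-preserving scheme, though the exact protocol differs.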
Google is bringing Gemini to Chrome so it can answer questions about your open tabs
Google's Chrome browser is the latest major product from the company to get its own built-in Gemini features. Today at Google I/O, the company detailed its plans to bring its AI assistant to Chrome.

While Gemini can already distill information from websites, having the assistant baked into Chrome allows it to provide insights and answer questions about your open tabs without your ever having to move to a different window or application. Instead, Gemini lives in a new menu at the top of your browser window as well as in the taskbar.

The company envisions its assistant helping out with tasks that would normally require switching between several open tabs or scrolling around to different parts of a web page. For example, Google showed off how Gemini can give advice about potential modifications for dietary restrictions while looking at a recipe blog. Gemini in the browser could also come in handy while shopping, as it can answer specific questions about products or even summarize reviews.

To start, Gemini will only be able to answer queries about a single open tab, but the company plans to add multi-tab capabilities in a future update. This would allow the assistant to synthesize info across multiple open tabs and answer even more complex questions. Gemini in Chrome will also have Gemini Live capabilities for anyone more comfortable conversing with the assistant using their voice. The company also teased a future update that will allow Gemini to actually scroll through web pages on your behalf, like asking it to jump to a specific step in a recipe. (Notably, all this is separate from Google's other web-browsing AI, Project Mariner, which is still a research prototype.)

Gemini is starting to roll out to Chrome users on Mac and Windows today, beginning with AI Pro and AI Ultra subscribers in the United States. 
The company hasn't indicated whether it plans to bring similar features to Chromebooks or Chrome's mobile app.This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-bringing-gemini-to-chrome-so-it-can-answer-questions-about-your-open-tabs-174903787.html?src=rss
Google's AI Mode lets you virtually try clothes on by uploading a single photo
As part of its I/O 2025 announcements today, Google shared details on new shopping features coming to AI Mode. The three new tools cover the discovery, try-on and checkout stages of the shopping process, and will be available "in the coming months" for online shoppers in the US.

The first update kicks in when you're looking for a specific thing to buy. The examples Google shared were searches for travel bags or a rug that matches the other furniture in a room. By combining Gemini's reasoning capabilities with its Shopping Graph database of products, Google's AI will determine from your query that you'd like lots of pictures to look at and pull up a new image-laden panel.

It's somewhat reminiscent of Image search results, except these photos take up roughly the right half of the page and are laid out vertically in four columns, according to the screenshots the company shared. Of course, some of the best spots in this grid can be paid for by companies looking for better placement for their products.

As you continue to refine your search results with Gemini, the "new righthand panel dynamically updates with relevant products and images," the company said. If you specify that the travel bag you're looking for should withstand a trip to Oregon, for example, the AI can prioritize weatherproof products and show you those images in the panel.

The second, and more intriguing, part of the shopping updates in AI Mode is a change coming to the company's virtual try-on tool. Since its launch in 2023, this feature has gotten more sophisticated, letting you pick specific models that most closely match your body type and then virtually reimagine the outfit you've found on them. 
At Google I/O today, the company said it will soon let users upload a single picture of themselves, and a new image generation model designed for fashion will overlay articles of clothing on that AI-imagined version of you.

According to Google, the custom image generation model "understands the human body and nuances of clothing - like how different materials fold, stretch and drape on different bodies." It added that the software will "preserve these subtleties when applied to poses in your photos." The company said this is "the first of its kind working at this scale, allowing shoppers to try on billions of items of clothing from our Shopping Graph." The try-on feature using your own uploaded photo is rolling out in Search Labs in the US today; when you're testing it, look for the "try it on" icon on compatible product listings.

Finally, when you've found what you want, you might not want to purchase it immediately. Many of us know the feeling of having online shopping carts packed and ready for the next big sale (Memorial Day in the US is this weekend, by the way). Google's new "agentic checkout feature" can keep an eye on price drops on your behalf. You'll soon see a "track price" option on product listings, similar to those already available on Google Flights; after selecting it, you'll be able to set your desired price, size, color and other options. The tracker will alert you when those parameters are met, and if you're ready to hand over your money, the agentic checkout tool can also simplify that process when you tap "buy for me." According to Google, "behind the scenes, we'll add the item to your cart on the merchant's site and securely complete the checkout on your behalf with Google Pay." 
The agentic checkout feature will be available "in the coming months" for product listings in the US.This article originally appeared on Engadget at https://www.engadget.com/ai/googles-ai-mode-lets-you-virtually-try-clothes-on-by-uploading-a-single-photo-174820693.html?src=rss
Google's Veo 3 AI model can generate videos with sound
As part of this year's announcements at its I/O developer conference, Google has revealed its latest media generation models. Most notable, perhaps, is Veo 3, the first iteration of the model that can generate videos with sound. It can, for instance, create a video of birds with audio of their singing, or of a city street with the sounds of traffic in the background. Google says Veo 3 also excels at real-world physics and lip syncing. At the moment, the model is only available to Gemini Ultra subscribers in the US within the Gemini app and to enterprise users on Vertex AI. It's also available in Flow, Google's new AI filmmaking tool.

Flow brings Veo, Imagen and Gemini together to create cinematic clips and scenes. Users can describe the final output they want in natural language, and Flow will go to work making it for them. The new tool will only be available to Google AI Pro and Ultra subscribers in the US for now, but Google says it will roll out to more countries soon.

While the company has released a brand-new video-generating model, it hasn't abandoned Veo 2 just yet. Users will be able to give Veo 2 images of people, scenes, styles and objects to use as reference for their desired output in Flow. They'll have access to camera controls in Flow that allow them to rotate scenes and zoom into specific objects as well. Plus, they'll be able to broaden their frames from portrait to landscape if they want to, and add or remove objects from their videos.

Google also introduced its latest image-generating model, Imagen 4, at the event. The company said Imagen 4 renders fine details like intricate fabrics and animal fur with "remarkable clarity" and excels at generating both photorealistic and abstract images. It's also significantly better at rendering typography than its predecessors and can create images in various aspect ratios at resolutions of up to 2K. 
Imagen 4 is now available via the Gemini app, Vertex AI and Workspace apps, including Docs and Slides. Google said it's also releasing a version of Imagen 4 that's 10 times faster than Imagen 3 "soon."

Finally, to help people identify AI-generated content, which is becoming more and more difficult these days, Google has launched SynthID Detector. It's a portal where users can upload a piece of media they think could be AI-generated, and Google will determine whether it contains SynthID, the company's watermarking and identification tool for AI art. Google has open-sourced its watermarking tool, but not all image generators use it, so the portal still won't be able to identify every AI-generated image.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-veo-3-ai-model-can-generate-videos-with-sound-174541183.html?src=rss
Apple's WWDC 2025 keynote will be June 9 at 1PM ET
Apple has sent out the invites for its in-person WWDC 2025 festivities on Monday, June 9, featuring the keynote session at 1PM ET/10AM PT. Attendees will be able to watch the keynote presentation at the company's Cupertino campus, as well as meet with developers and participate in special activities. For everyone who hasn't received an invite to Apple Park, the keynote will stream online. Developers can also participate in the rest of WWDC's programming online for free.

We've already got pretty high hopes for the keynote announcements, with a lot of potential news expected about the upcoming redesign for iOS 19. We've heard that the operating system could include features like AI-powered battery management and improved public Wi-Fi sign-ins, and our own Nathan Ingraham has penned an impassioned plea for a normal letter "a" in the Notes app. The full WWDC conference runs from June 9-13.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apples-wwdc-2025-keynote-will-be-june-9-at-1pm-et-150621700.html?src=rss
Google I/O 2025: Live updates on Gemini, Android XR, Android 16 updates and more
Ready to see Google's next big slate of AI announcements? That's precisely what we expect to be unveiled at Google I/O 2025, the search giant's developer conference that kicks off today at 1PM ET / 10AM PT. Engadget will be covering it in real time right here, via a liveblog and on-the-ground reporting from our very own Karissa Bell.

Ahead of I/O, Google already gave us some substantive details on the updated look and feel of its mobile operating system at The Android Show last week. Google included some Gemini news there as well: Its AI platform is coming to Wear OS, Android Auto and Google TV, too. But with that Android news out of the way, Google can use today's keynote to stay laser-focused on sharing its advances on the artificial intelligence front. Expect news about how Google is using AI in search to feature prominently, along with some other surprises, like the possible debut of an AI-powered Pinterest alternative.

The company made it clear during its Android showcase that Android XR, its mixed reality platform, will also be featured during I/O. That could include the mixed reality headset Google and Samsung are collaborating on or, as teased at the end of The Android Show, smart glasses with Google's Project Astra built in.

As usual, there will be a developer-centric keynote following the main presentation (4:30PM ET / 1:30PM PT), and while we'll be paying attention to make sure we don't miss any news there, our liveblog will predominantly focus on the headliner.

You can watch Google's keynote in the embedded livestream above or on the company's YouTube channel, and follow our liveblog embedded below starting at 1PM ET today. 
Note that the company plans to hold breakout sessions through May 21 on a variety of topics relevant to developers.

Update, May 20 2025, 9:45AM ET: This story has been updated to include a liveblog of the event.

Update, May 19 2025, 1:01PM ET: This story has been updated to include details on the developer keynote taking place later in the day, as well as to tweak wording throughout for accuracy with the new timestamp.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-io-2025-live-updates-on-gemini-android-xr-android-16-updates-and-more-214622870.html?src=rss
Amazon Music gets AI-powered search results in new beta
Amazon is updating Amazon Music with a new "AI-powered search experience" that should make it easier to discover music based on the albums and artists you're already looking for. The company says the new beta feature "includes results for many of your favorite artists today" - which is to say, not everyone - but it'll continue to expand to cover more over time.

A traditional search takes a search term - an artist's name, a song or an album title - and tries to pull up results as close to whatever you entered as possible. You'll still be able to make those kinds of searches in Amazon Music, but now, under a new "Explore" tab in the iOS Amazon Music app, you'll also see new AI-powered recommendations. These include "curated music collections," an easy jumping-off point for creating AI-generated playlists and more.

Amazon suggests these results will vary depending on the search you do. Looking up Bad Bunny's "Debi Tirar Mas Fotos" will show the album, but also "influential artists who influenced his sound" and other musicians he's collaborated with, the company says. A search for BLACKPINK, meanwhile, would highlight the K-pop group's early hits before surfacing solo work from members like Lisa or Jennie. It all sounds like a more flexible and expansive version of the X-Ray feature Amazon includes in Prime Video, which surfaces things like actors' names, trivia and related movies and TV shows at the press of a button.

This new search experience was built using Amazon Bedrock, Amazon's cloud service for hosting AI models. It's one of several ways the company is trying to incorporate more AI features into its products. Earlier this year, Amazon started rolling out Alexa+, a version of the popular voice assistant rebuilt around generative AI, to select Echo devices.

AI search in Amazon Music is available today on iOS for a select number of Amazon Music Unlimited subscribers in the US. 
If you're not included in this beta, you could be included in future tests.This article originally appeared on Engadget at https://www.engadget.com/apps/amazon-music-gets-ai-powered-search-results-in-new-beta-140055428.html?src=rss
The first Marshall soundbar is the $1,000 Heston 120 with Dolby Atmos
When a company enters a new product category, it might as well swing for the fences. That's exactly what Marshall is doing with its first soundbar. The Heston 120 is a $1,000 Dolby Atmos and DTS:X living room speaker, equipped with 11 drivers to power that spatial audio. Like the company's headphones and speakers, there's plenty of the iconic guitar amplifier aesthetic to go around.

Inside, two subwoofers, two mid-range units, two tweeters and five full-range drivers produce the Heston 120's sound. There are also 11 Class D amplifiers (two 50W and nine 30W) inside, and the soundbar has a total power output of 150 watts. Bluetooth (5.3) and Wi-Fi are also on board, which means AirPlay 2, Google Cast, Spotify Connect and Tidal Connect are all available. For wired connectivity, there are two HDMI 2.1 ports (one eARC) for your TV and other home theater gear, plus an RCA input that allows you to hook up a turntable or other audio devices.

The Heston 120 takes design cues from Marshall's line of guitar amps. This has been the case for the company's headphones, earbuds and speakers, and it continues with soundbars. To that end, there's a mix of leather and metal, complete with the trademark gold script logo. There are also tactile controls you typically don't see on a soundbar, like the gold knobs and preset buttons akin to those that adorn an amplifier.

This soundbar doesn't come with a subwoofer, but Marshall says a standalone option is on the way. What's more, that Heston Sub 200 and a smaller Heston 60 are both due to arrive "at a later date." Lots of companies bundle at least a sub with their high-end soundbars, so it's disappointing that Marshall didn't do the same. I look forward to getting a review unit to see if the company's promise of "bass rumbling from below like never before" from the soundbar itself holds true.

The Heston 120 will be available for purchase from Marshall's website on June 3. 
This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/the-first-marshall-soundbar-is-the-1000-heston-120-with-dolby-atmos-140041873.html?src=rss
Apple's 13-inch iPad Air M3 is $100 off for Memorial Day
We think the iPad Air is the best blend of price, features and performance in Apple's tablet lineup, and the 13-inch version in particular is a fine buy if you want a roomier display for multitasking or streaming video without paying the iPad Pro's extravagant prices. If you've been waiting for a sale on the jumbo-sized slate, good news: The device is $100 off Apple's list price and back down to $699 at Amazon and B&H. That's a deal we've seen for much of the last few weeks, but it still matches the lowest price we've tracked for the most recent model, which was released in March and runs on Apple's M3 chip. This offer applies to the base model with 128GB of storage. If you need more space, the 256GB and 512GB variants are also $100 off at $799 and $999, respectively. The former is another all-time low, while the latter only fell about $25 lower during a brief dip at the start of the month. The one catch is that these discounts only apply to the Space Gray colorway. We gave the newest 13-inch iPad Air a score of 89 in our review. This year's model is a straightforward spec bump, with the only major upgrade being the faster chip. So if you're coming from a prior M2 or M1 model and are still happy with its performance, there's no real need to upgrade. The M2 version in particular is still worth buying if you see it on sale - right now Target has the 256GB version of that slate down to $699, so feel free to grab that instead if you don't mind buying something slightly less futureproof. Either way, the Air remains a fairly definitive upgrade over the entry-level iPad (A16). It's certainly more expensive, but its laminated display doesn't look as cheap, holds up better against glare and can pump out slightly bolder colors. Its speakers sound less compressed, and it works with superior Apple accessories like the Pencil Pro stylus and latest Magic Keyboard. 
The M3 chip is noticeably faster for more involved tasks like exporting high-res videos or playing new games as well. More importantly, it sets the Air up better going forward, as features like Apple Intelligence and Stage Manager aren't available on the lower-cost model at all. Plus, the base model is only available with an 11-inch display; if you want that bigger screen, this is the most affordable way to get it. Check out our coverage of the best Apple deals for more discounts, and follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/apples-13-inch-ipad-air-m3-is-100-off-for-memorial-day-133034317.html?src=rss
Hyundai's Ioniq 9 is a big electric SUV with big style
The pool of electric vehicles currently available on the North American market keeps getting wider and deeper. But, since the beginning, there's been something of a hole right in the middle. A big hole, as it turns out. The three-row SUV, one of the most popular segments in American motoring, has been woefully underserved. The only real options sit at the high end, with things like the Rivian R1S or the Mercedes-Benz EQS SUV.

Kia added a new and more attainable option last year with the EV9, and now it's time for the other side of the corporate family to enter the fray with its own option, the Hyundai Ioniq 9. The latest American-made electric SUV from the Korean giant bears sharp styling and impressive performance. After a day piloting one through the countryside around the Savannah, Georgia factory where it'll be built, it's hard to argue against its $58,955 starting price.

Economy-Sized
Tim Stevens for Engadget

There's no denying that Hyundai's new Ioniq is huge. At 199 inches long, it's three inches bigger than the Hyundai Palisade, the company's now second-biggest three-row SUV. However, Hyundai's designers have done a stellar job of giving its new biggest baby a very compelling shape.

Many SUVs with that much space resort to acres of flat sheet metal just to cover the distance between the bumpers, but the Ioniq 9 has a subtle, sophisticated and, equally importantly, aerodynamic shape. I confess I'm not a massive fan of the nose and its bland curves, but I absolutely love the subtle taper at the rear. That not only helps with the coefficient of drag (which measures 0.269), but also makes this thing look much smaller than it is.

The Ioniq 9 has a stance more like a Volvo station wagon than a gigantic family hauler, but make no mistake, it's the latter. That's immediately evident as soon as you climb into the third row. 
It's a bit of a slow process thanks to the powered second-row seats, but once your path is clear, access to the rear is easy, and I was shocked to find generous headroom back there. There's even a tolerable amount of legroom for an adult.

Even better are the 100-watt USB-C outlets that are present even in the way-back. All three rows have access to high-power outputs that'll keep just about anything short of a portable gaming rig juiced on the go. Second-row seating is far more comfortable, especially if you opt for the Ioniq 9 Limited or Calligraphy trims with a six-seat configuration. These give you a set of heated and ventilated captain's chairs. (A seven-seat, bench configuration is also available.)

The seats up front are quite similar, also heated and ventilated, with the driver's seat adding massage. Extending leg rests also make the Ioniq 9 an ideal space for a nap during a charging stop. It'll need to be a quick one, though.

Power and Charging

The Ioniq 9 is built on Hyundai's E-GMP platform, which also underpins the Ioniq 5 and Ioniq 6, among others. That includes an 800-volt architecture and a maximum charging speed of 350 kW. Find a charger with enough juice and it'll go from 10 to 80 percent in 24 minutes.

Yes, it has a Tesla-style NACS plug, which means you can use Superchargers without an adapter. Sadly, though, Tesla's current suite of chargers isn't fast enough to support that charging rate. That means you'll have to use a CCS adapter, which is included.

All those electrons get shoved into a 110.3-kWh battery pack, with roughly 104 kWh usable. Maximum range depends on which trim you choose, from 335 miles for a base, rear-drive model down to 311 miles for a top-shelf Performance model with dual-motor AWD. Naturally, that upgrade gets you more power: either 303 or 422 horsepower, depending on which dual-motor variant you choose. 
Still, even the single motor has 215 hp. I sadly was not able to sample the single-motor flavor, but the Performance Calligraphy Design I drove was plenty snappy. Even in Eco, the most relaxed of the available on-road drive modes, the Ioniq 9 had plenty of response for impromptu passes or simply satisfying my occasional need for G-forces. There's also a selection of off-road drive modes for various types of terrain, but that's clearly not a focus for this machine. While it'll do just fine on unpaved surfaces and some light off-roading, given the sheer dimensions of this thing, I wouldn't point it down any particularly tricky trails.

Behind the Wheel

Much of my time driving the Ioniq 9 was spent sitting in traffic, cruising on metropolitan streets or casually motoring between rest stops over broken rural roads. I'd say that's close to the average duty cycle for a vehicle like this, and the Ioniq 9 was a treat over most of it.

At slower speeds, the suspension proved a bit rough, possibly due to the 21-inch wheels on the Calligraphy trim. But, over 30 mph or so, everything smoothed out nicely. This three-row SUV is calm and quiet at speed, helped by sound-isolating laminated glass in the first and second rows, plus active sound canceling akin to that in your headphones, but on a significantly larger scale.

The only place where you hear any road noise is back in the third row. There's noticeably more wind noise and a bit more whine from the rear motor, too, but I'd gladly take that over the drone of an average SUV's exhaust out the back.

Behind those rear seats, there's 21.9 cubic feet of cargo space, or a whopping 86.9 if you fold both rows down. Yes, there is a frunk, but it's tiny and fully occupied by the charging cable, CCS adapter and flat tire kit.

All the Tech

Those 100-watt USB-C ports are definitely the tech highlight on the inside of the machine. 
Still, you'll also find Hyundai's standard infotainment experience here, including both wireless Android Auto and Apple CarPlay. They're experienced through a pair of 12.3-inch displays joined at the bezel to form one continuous panel, sweeping from behind the wheel out to the middle of the dashboard. On the Ioniq 5 and Ioniq 6, this looks impressive. On the Ioniq 9, it honestly looks a bit Lilliputian given the giant scale of everything else here.

The Ioniq 9 features some lovely styling touches, subtle RGB LED mood lighting and generally nice-feeling surfaces - so long as your fingers don't wander too far down. Harsh plastics covering the lower portions of the interior feel less than premium for a machine that otherwise looks this posh.

But it at least carries a fair price. You can get into an Ioniq 9 for as little as $58,955 if you don't mind the single-motor version. You can also subtract the $7,500 federal incentive for as long as that lasts. There are six trims to choose from, with the top-shelf Performance Calligraphy Design AWD model you see pictured here costing $79,540 after a $1,600 destination charge.

Yes, that's a lot, entering into Rivian R1S territory. But, where the Rivian is quicker and certainly more capable off-road, the Ioniq 9 is roomier, more practical and honestly more comfortable for the daily grind.

You can also save a few thousand by going with a Kia EV9, but I suspect the extra presence and features of the Hyundai will woo many. Either way, you're getting a winner, which is yet more proof that our current slate of EV options is the best yet, and only getting better.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/hyundais-ioniq-9-is-a-big-electric-suv-with-big-style-130050754.html?src=rss
Fender just launched its own free DAW software for recording music
The iconic instrument and amp maker Fender is diving deep into the digital domain. The company just announced Fender Studio, an all-in-one music-creation software platform. It's basically a digital audio workstation (DAW), but one intended for newbies. Think GarageBand, not Pro Tools. And just like GarageBand, Fender Studio is free.

The software looks perfect for going straight into an audio interface without any complications. Players can select from a wide variety of digital amp recreations. These include some real icons, like the '65 Twin Reverb guitar amp, the Rumble 800 bass amp, the '59 Bassman, the Super-Sonic, the SWR Redhead and several more. More amp models are likely on the way.

Along with the amp models, the software comes with a bunch of effects inspired by iconic Fender pedals. There's a vintage tremolo, a stereo tape delay, a small hall reverb, a triangle flanger, a compressor and, of course, overdrive and distortion. There's an integrated tuner and plenty of effects presets for those who don't want to fiddle with virtual knobs.

The software also includes several dedicated effects for vocalists. There's a de-tuner, a vocal transformer and a vocoder, in addition to standard stuff like compression, EQ, reverb and delay.

There's also a cool feature for those who just want to practice. Fender Studio offers "remixable jam tracks" that let folks play along with songs in a wide variety of genres. These let players mute or delete an instrument to play the part themselves, and users can slow everything down or speed things up. Fender promises that new songs will be added to the platform at regular intervals.

As for the nuts and bolts of recording, the arranger can currently handle up to 16 tracks. Despite the track limitation, the software offers some real pro-grade features. There are various ruler formats, a global transpose, input monitoring, looping abilities, time stretching and even a simple pitch-shifting tool. 
Tracks allow for fades, FX sends and more.

The mobile version of the app includes a pinch-to-zoom feature, which is always handy with recording software. All of those squiggly lines can get tough on the old eyeballs.

Fender Studio is available on just about everything. There's a version for Mac, Windows, iOS, Android and Linux. It should even run well on Chromebooks. Again, this software is free, though some features do require signing up for a Fender account.

This is certainly Fender's biggest push into digital audio, but not its first. The company has long maintained the Mustang Micro line of personal guitar amplifiers, which plug straight into a guitar or bass and offer models of various amps and effects. The company also released its own audio interface, the budget-friendly Fender Link I/O, and a digital workstation that emulates over 100 amps.

This article originally appeared on Engadget at https://www.engadget.com/audio/fender-just-launched-its-own-free-daw-software-for-recording-music-130007067.html?src=rss
Nintendo is reportedly using Samsung to build the main Switch 2 chips
Nintendo has hired Samsung to build the main chips for the Switch 2, including an 8-nanometer processor custom-designed by NVIDIA, Bloomberg reported. That would mark a move by Nintendo away from TSMC, which manufactured the chipset for the original 2017 Switch. Nintendo had no comment, saying it doesn't disclose its suppliers. Samsung and NVIDIA also declined to discuss the matter.

Samsung has previously supplied Nintendo with flash memory and displays, but building the Switch 2's processor would be a rare win for the company's contract chip division. Samsung can reportedly build enough chips to allow Nintendo to ship 20 million or more Switch 2s by March of 2026.

NVIDIA's new chipset was reportedly optimized for Samsung's, rather than TSMC's, manufacturing process. Using Samsung also means that Nintendo won't be competing with Apple and others for TSMC's resources. During Nintendo's latest earnings call, President Shuntaro Furukawa said that the company didn't expect any component shortages with its new console - an issue that plagued the original Switch.

Nintendo said in the same earnings report that it was caught by surprise by 2.2 million applications for Switch 2 pre-orders in Japan alone. Despite that, the company projected sales of 15 million Switch 2 units in its first year on sale through March 2026, fewer than analyst predictions of 16.8 million - likely due to the impact of Trump's tariffs.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/nintendo-is-reportedly-using-samsung-to-build-the-main-switch-2-chips-120006403.html?src=rss
The Morning After: Computex's new laptops from ASUS, Razer and more
If you've been holding out for the latest 2025 PC models and graphics card loadouts, Computex is usually when you have to check your bank balance. The PC-centric tech show in Taiwan has kicked off with a barrage of new laptops from the likes of Razer, ASUS and Acer.ASUS has revealed the new ROG Zephyrus G14, with a 14-inch (of course) screen at 3K resolution, a refresh rate of 120Hz, 500 nits of peak brightness and Dolby Vision support. The G14 can be outfitted with up to an AMD Ryzen AI 9 HX 370 processor with 12 cores and 24 threads and an AMD XDNA NPU with up to 50 TOPS. The graphics card maxes out with the NVIDIA GeForce RTX 5080, while RAM options go up to 64GB and on-board storage up to 2TB.Meanwhile, Razer's new Blade 14 laptops will arrive with RTX 5000 series cards, while still remaining thin, thin, thin. Those NVIDIA cards can tap into the company's DLSS 4 tech to provide "the highest quality gaming experience possible in a 14-inch" laptop, according to Razer. The laptops have AMD Ryzen AI 9 365 processors that can achieve up to 50 TOPS. And if you're feeling even more lavish, there's also the bigger Blade 18, which you can load out with the RTX 5090. And then there's Acer, which is doing something special with thermal interface materials.- Mat SmithGet Engadget's newsletter delivered direct to your inbox. Subscribe right here!You might have missed:
The ASUS ProArt A16 laptop gets you the latest from AMD and a giant screen
ASUS is updating both its ProArt laptop and its Chromebooks with the latest internals for Computex 2025, and giving both families of laptops a more premium look, with new colors and tasteful finishes.The ASUS ProArt A16 stands out as the most premium pick, with a black aluminum body, "stealth" hinge that brings the top half of the laptop nearly flush with the bottom and a smudge-resistant finish that should hopefully avoid fingerprints. Inside, ASUS is offering an AMD Ryzen AI 9 HX processor and a NVIDIA GeForce RTX 5070 Laptop GPU, both of which qualify the new ProArt as a Copilot+ PC. That means you'll get access to Windows' growing list of AI features, and ASUS is also including two apps - StoryCube and MuseTree - that can run generative AI models entirely locally. All packed into a laptop that's around half-an-inch thick and has a 16-inch 4K OLED display.In terms of Chromebooks, ASUS is offering both normal models and Chromebook Plus versions that support Google's AI tools. The ASUS Chromebook Plus CX34 has a 14-inch display that can fold flat and a 1080p webcam, alongside up to an Intel Core i5 and 8GB of LPDDR5 RAM. That's enough to offer Gemini features locally, and you'll get priority access to Gemini Advanced. The only real disadvantage is the giant ASUS logo that still looks awkward next to the similarly prominent Chromebook logo, and the limited color options: You can only pick between white or grey.The ASUS Chromebook CX14 and CX15 come with up to an Intel Core N355 processor, up to 8GB of LPDDR5 RAM and up to 256GB of storage. If you're curious about Google's AI features, you can also purchase a Plus version of the CX14. Whether you get the 14-inch or 15-inch model, both come with a respectable selection of ports, including HDMI for connecting to external displays.
Either size also gets a variety of color options: blue, a silvery grey or a greenish grey, in either a matte or textured finish.The ASUS Chromebook CX34 is available now starting at $400 from both Walmart and Best Buy. Meanwhile, the rest of the above laptops won't be available until Q2 2025. The ProArt A16 starts at $2,500 from ASUS' online store and Best Buy. The Chromebook CX14 starts at $279 from Best Buy or Costco. The Chromebook Plus CX14 will be available for $429 from Best Buy. And finally, the Chromebook CX15 starts at $220 and will be available from Best Buy and Amazon.This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/the-asus-proart-a16-laptop-gets-you-the-latest-from-amd-and-a-giant-screen-013037587.html?src=rss
Spotify iOS users can now buy audiobooks directly from the app
Spotify is continuing to add more ways for listeners to directly make purchases within its iOS app. Following on the streaming service's changes to make purchasing subscriptions easier earlier this month, there's now an option for users to buy audiobooks in Spotify."Spotify submitted a new app update that Apple has approved: Spotify users in the United States can now see pricing, buy individual audiobooks and purchase additional 'Top Up' hours for audiobook listening beyond the 15 hours included in Premium each month," the company said in its updated blog post.The wave of changes stems from the ongoing court case between Apple and Epic Games surrounding fees for purchases made outside the App Store. While things appear to be swinging in favor of app and service providers, Apple is likely to continue challenging the rulings even as it makes changes to allow for external payment options.This article originally appeared on Engadget at https://www.engadget.com/entertainment/spotify-ios-users-can-now-buy-audiobooks-directly-from-the-app-230304105.html?src=rss
Elgato's Stream Deck expands beyond the company's hardware
The Elgato Stream Deck is expanding into a hardware-agnostic platform. On Monday, the company unveiled a software version of the programmable shortcut device. Also on tap are a module for integration in third-party products and DIY projects, an Ethernet dock and an updated Stream Deck MK.2 with scissor-switch keys.Stream Deck MK.2 Scissor KeysThere's a new version of the popular Stream Deck MK.2. The only difference is that this version ditches membrane keys in favor of scissor-switch ones. Scissor keys (found on many laptops, like modern MacBooks) have a shorter travel distance and sharper actuation than the mushy-feeling ones on the (still available) legacy MK.2.The Stream Deck MK.2 Scissor Keys costs $150. Shipments begin in early June.Virtual Stream DeckVirtual Stream Deck (VSD) is a software-only counterpart of the classic devices. Like the hardware versions, the VSD includes a familiar grid of programmable shortcut buttons. Anything you'd configure for a device like the Stream Deck MK.2 or XL, you can also do for the VSD. Place the interface anywhere on your desktop, pin it for quick access or trigger it with a mouse click or hotkey.Presumably to avoid cannibalizing its hardware business, Elgato is limiting the VSD to owners of its devices. Initially, it will only be available to people who have Stream Deck hardware or select Corsair peripherals (the Xeneon Edge and Scimitar Elite WE SE Mouse). The company says the VSD will soon be rolled out to owners of additional devices.The VSD has one frustrating requirement. It only works when one of those compatible accessories is connected to your computer. Unfortunately, that means you can't use it as a virtual Stream Deck replacement, mirroring your shortcuts while you and your laptop are on the go. That seems like a missed opportunity.Instead, it's more like a complement to Stream Deck hardware while it's connected - a way to get more shortcuts than the accessory supports. 
It's also a method for Corsair accessory owners to get Stream Deck functionality without buying one.Regardless, Virtual Stream Deck launches with the Stream Deck 7.0 beta software.Stream Deck ModulesStream Deck Modules can be built into hardware not made by Elgato. So, hobbyists, startups and manufacturers can incorporate the OLED shortcut buttons into their DIY projects or products. Aside from that more flexible nature, they function the same as legacy Stream Deck products.Stream Deck Modules have an aluminum chassis that's "ready to drop straight into a custom mount, machine or product." They're available in six-, 15- and 32-key variants.The modules begin shipping today. You'll pay $50 for the six-key version, $130 for the 15-key one and $200 for the 32-key variant. (If you're providing them for an organization, Elgato offers volume discounts.)Elgato Network DockThe Elgato Network Dock gives Stream Deck devices their own Ethernet connections. This untethers the shortcuts from the desktop, allowing for "custom installations, remote stations and more."The Network Dock supports both Power over Ethernet (PoE) and non-PoE networks. You can set up its IP configuration on-device.The dock costs $80 and ships in August.This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/elgatos-stream-deck-breaks-free-from-the-companys-hardware-230052921.html?src=rss
New Orleans police secretly used facial recognition on over 200 live camera feeds
New Orleans' police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city's police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This application seems to violate a city ordinance passed in 2022 that required facial recognition only be used by the NOLA police to search for specific suspects of violent crimes and then to provide details about the scans' use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports."This is the facial recognition technology nightmare scenario that we have been worried about," said Nathan Freed Wessler, an ACLU deputy director. "This is the government giving itself the power to track anyone - for that matter, everyone - as we go about our lives walking around in public." Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.Police use and misuse of surveillance technology have been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won't do anything to protect privacy if they're routinely ignored by officers.Read the full story on the New Orleans PD's surveillance program at The Washington Post.This article originally appeared on Engadget at https://www.engadget.com/ai/new-orleans-police-secretly-used-facial-recognition-on-over-200-live-camera-feeds-223723331.html?src=rss
Motorola has mysteriously delayed its new Razr phones, but only for some carriers
The latest generation of Motorola Razr smartphones was slated to go on sale last week beginning May 15, but availability has been delayed for purchases through select carriers. 9to5Google reported that the launch was delayed to May 22 for Verizon, Straight Talk, Total Wireless and Visible. We've reached out to Motorola for additional comment on the situation.When a potential customer asked on X about availability after the phones were not seen at the expected May 15 date, a Verizon rep replied that the launch was "placed on hold." The Verizon blog post announcing the plans and pricing for the Razr models has been updated to show a May 22 release date.Razr phones are still listed as available to buy at other mobile carriers. However, some customers have taken to Reddit, sharing that their orders have been delayed and speculating as to why. Most of them did not specify which channels or carriers they used for the purchases, so it's possible that all of the issues are centered on the four carriers mentioned above, although there are posts claiming their phones' new ship date will be May 28.This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/motorola-has-mysteriously-delayed-its-new-razr-phones-but-only-for-some-carriers-211654192.html?src=rss
Google I/O 2025: New Android 16, Gemini AI and everything else to expect at Tuesday's keynote
Google I/O, the search giant's annual developer conference, kicks off on Tuesday, May 20. The event is arguably the most important on the company's annual calendar, offering the opportunity for the company to share a glimpse at everything it has been working on over the past year - and contextualize its biggest priorities for the next twelve months.The dance card for Google I/O was apparently so packed that the company spun off a dedicated Android showcase a whole week earlier. (See everything that was announced at the Android Show or go to our liveblog to get a feel for how things played out.) With that event now behind us, Google can stay focused on its most important core competency: AI.Google's presentation will come on the heels of announcements from three big rivals in recent days. Further up the Pacific coast, Microsoft is hosting its Build developer conference, where it's already unveiled an updated Copilot AI app. Meanwhile, at the Computex show in Taiwan, NVIDIA CEO Jensen Huang highlighted a partnership with Foxconn to develop an "AI factory supercomputer" powered by 10,000 Blackwell AI chips. And Meta held its debut LlamaCon AI conference last month, but CEO Mark Zuckerberg's plans for AI dominance have reportedly since hit some snags. (Apple will share its updated AI roadmap on June 9 when its WWDC developers conference kicks off.)If you'd like to tune in from home and follow along as Google makes its announcements, check out our article on how to watch the Google I/O 2025 keynote. We'll also be liveblogging the event, so you can just come to Engadget for the breaking news.Android 16For years, Android hasn't had much of a spotlight at Google's annual developer conference. 
Thankfully, last week's Android Show breakout let Google's mobile operating system take the spotlight for at least a day.The presentation featured Android Ecosystem President Sameer Samat, who took over for Dave Burke in 2024. We saw Samat and his colleagues show off the new Material 3 Expressive design, and what we learned confirmed some of the features that were previously leaked, like the "Ongoing notifications" bar. Material 3 Expressive is also coming to Wear OS 6, and the company is expanding the reach of Gemini by bringing it to its smartwatch platform, Android Auto and Google TV. Android is also amping up its scam-detection features and adding a refined Find Hub that will see support for satellite connectivity later in the year.Speaking of timing, Google has already confirmed the new operating system will arrive sometime before the second half of the year. Though it did not release a stable build of Android 16 today, Samat shared during the show that Android 16 (or at least part of it) is coming next month to Pixel devices. And though the company did cover some new features coming to Android XR, senior director for Android Product and UX Guemmy Kim said during the presentation that "we'll share more on Android XR at I/O next week."It clearly seems like more is still to come, and not just for Android XR. We didn't get confirmation on the Android Authority report that Google could add a more robust photo picker, with support for cloud storage solutions. That doesn't mean it won't be in Android 16; it might just be something the company didn't get to mention in its 30-minute showcase. Plus, Google has been releasing new Android features in a quarterly cadence lately, rather than wait till an annual update window to make updates available. 
It's possible we see more added to Android 16 as the year progresses.One of the best places to get an idea for what's to come in Android 16 is in its beta version, which has already been available to developers and is currently in its fourth iteration. For example, we learned in March that Android 16 will bring Auracast support, which could make it easier to listen to and switch between multiple Bluetooth devices. This could also enable people to receive Bluetooth audio on hearing aids they have paired with their phones or tablets.Android XRRemember Google Glass? No? How about Daydream? Maybe Cardboard? After sending (at least) three XR projects to the graveyard, you would think even Google would say enough is enough. Instead, the company is preparing to release Android XR after previewing the platform at the end of last year. This time around, the company says the power of its Gemini AI models will make things different. We know Google is working with Samsung on a headset codenamed Project Moohan. Last fall, Samsung hinted that the device could arrive sometime this year.Whether Google and Samsung demo Project Moohan at I/O, I imagine the search giant will have more to say about Android XR and the ecosystem partners it has worked to bring to its side for the initiative. This falls in line with what Kim said about more on Android XR being shared at I/O.AI, AI and more AIIf Google felt the need to split off Android into its own showcase, we're likely to get more AI-related announcements at I/O than ever before. The company hasn't provided many hints about what we can expect on that front, but if I had to guess, features like AI Overviews and AI Mode are likely to get substantive updates. I suspect Google will also have something to say about Project Mariner, the web-surfing agent it demoed at I/O 2024. 
Either way, Google is an AI company now, and every I/O moving forward will reflect that.Project AstraSpeaking of AI, Project Astra was one of the more impressive demos Google showed off at I/O 2024. The technology made the most of the latest multi-modal capabilities of Google's Gemini models to offer something we hadn't seen before from the company. It's a voice assistant with advanced image recognition features that allows it to converse about the things it sees. Google envisions Project Astra one day providing a truly useful AI assistant.However, after seeing an in-person demo of Astra, the Engadget crew felt the tech needed a lot more work. Given the splash Project Astra made last year, there's a good chance we could get an update on it at I/O 2025.A Pinterest competitorAccording to a report from The Information, Google might be planning to unveil its own take on Pinterest at I/O. That characterization is courtesy of The Information, but based on the features described in the article, Engadget team members found it more reminiscent of Cosmos instead. Cosmos is a pared-down version of Pinterest, letting people save and curate anything they see on the internet. It also allows you to share your saved pages with others.Google's version, meanwhile, will reportedly show image results based on your queries, and you can save the pictures in different folders based on your own preferences. So say you're putting together a lookbook based on Jennie from Blackpink. You can search for her outfits and save your favorites in a folder you can title "Lewks," perhaps.Whether this is simply built into Search or exists as a standalone product is unclear, and we'll have to wait till I/O to see whether the report was accurate and what the feature really is like.Wear OSLast year, Wear OS didn't get a mention during the company's main keynote, but Google did preview Wear OS 5 during the developer sessions that followed. 
The company only began rolling out Wear OS 5.1 to Pixel devices in March. This year, we've already learned at the Android Show that Wear OS 6 is coming, with Material 3 Expressive gracing its interface. Will we learn more at I/O? It's unclear, but it wouldn't be a shock if that was all the air time Wear OS gets this year.NotebookLMGoogle has jumped the gun and already launched a standalone NotebookLM app ahead of I/O. The machine-learning note-taking app, available in desktop browsers since 2023, can summarize documents and even synthesize full-on NPR-style podcast summaries to boot.Everything elseGoogle has a terrible track record when it comes to preventing leaks within its internal ranks, so the likelihood the company could surprise us is low. Still, Google could announce something we don't expect. As always, your best bet is to visit Engadget on May 20 and 21. We'll have all the latest from Google then along with our liveblog and analysis.Update, May 5 2025, 7:08PM ET: This story has been updated to include details on a leaked blog post discussing "Material 3 Expressive."Update, May 6 2025, 5:29PM ET: This story has been updated to include details on the Android 16 beta, as well as Auracast support.Update, May 8 2025, 3:20PM ET: This story has been updated to include details on how to watch the Android Show and the Google I/O keynote, as well as tweak the intro for freshness.Update, May 13 2025, 3:22PM ET: This story has been updated to include all the announcements from the Android Show and a new report from The Information about a possible image search feature debuting at I/O. 
The intro was also edited to accurately reflect what has happened since the last time this article was updated.Update, May 14 2025, 4:32PM ET: This story has been updated to include details about other events happening at the same time as Google I/O, including Microsoft Build 2025 and Computex 2025.Update, May 19 2025, 5:13PM ET: Updated competing AI news from Microsoft, Meta and NVIDIA, and contextualized final rumors and reports ahead of I/O.This article originally appeared on Engadget at https://www.engadget.com/ai/google-io-2025-new-android-16-gemini-ai-and-everything-else-to-expect-at-tuesdays-keynote-203044742.html?src=rss
SAG-AFTRA says Fortnite's AI Darth Vader voice violates fair labor practices
SAG-AFTRA, the labor union representing performers in film, television and interactive media, has submitted an Unfair Labor Practice (ULP) filing against Epic Games for using an AI-generated version of Darth Vader's voice in the current season of Fortnite. Disney and Epic first announced on May 16 that Fortnite would feature a take on the character using an AI-generated version of James Earl Jones' voice.The issue in SAG-AFTRA's eyes is that the union is currently on strike while it negotiates a new contract with video game companies, and using an AI-generated voice represents Epic refusing to "bargain in good faith." The AI-powered version of Darth Vader is interactive, but that doesn't change the fact that the video game version of Darth Vader has frequently been played by actors other than Jones.Disney got permission from Jones and his family to use AI to replicate his voice for film and TV in 2022, so there is precedent for an AI performance of this kind. After Jones' death in September 2024, the AI route technically became the only way to use Darth Vader's "original voice," other than reusing clips of past performances. Unless of course Epic or Disney wanted to pay another actor to play Darth Vader, which would require coming to an agreement on a new contract for video game performers.ULP filings are reviewed by the National Labor Relations Board and can lead to hearings and injunctive relief (a court ordering Epic to remove Darth Vader from the game until a settlement is reached, for example). They are also often used as a way for unions to provoke companies to come back to the bargaining table or respond with a more realistic offer. SAG-AFTRA's Interactive Media Strike has been ongoing since July 26, 2024. SAG-AFTRA members originally voted in favor of a strike in September 2023 for better wages and AI protections.Engadget has reached out to both Disney and Epic for comment on SAG-AFTRA's ULP filing. 
We'll update this article if we hear back.This article originally appeared on Engadget at https://www.engadget.com/gaming/sag-aftra-says-fortnites-ai-darth-vader-voice-violates-fair-labor-practices-202009163.html?src=rss
VR bop Thrasher is heading to PC and Steam Deck
Thrasher is coming to flat screens, with a launch on Steam and Steam Deck scheduled for later in 2025. The new platform releases follow the VR game's debut last summer on the Meta Quest and Apple Vision Pro. Devs Brian Gibson and Mike Mandel, collaborating under the moniker Puddle, announced the new hardware additions in a fittingly surreal trailer today.Both Gibson and Mandel have a history making music- and audio-driven interactive experiences. Mandel worked on Fuser, Rock Band VR and Fantasia: Music Evolved. Gibson's previous project was the VR title Thumper, which bills itself with the tagline "a rhythm violence game." (Imagine Tetris Effect if it was filled with aggression rather than transcendent joy. But in a really, really good way.)Thrasher follows their existing legacy of immersive and unsettling games with its strange concept of a cosmic eel doing battle against a space baby, all set to a throbbing soundtrack. The addition of a non-virtual reality option is an exciting development for fans of the title, and it should be interesting to see how well the pair adapts their VR control scheme to gamepads and mouse/keyboard setups.This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/vr-bop-thrasher-is-heading-to-pc-and-steam-deck-200753057.html?src=rss
Razer's new Blade 14 laptops are outfitted with RTX 5000 series cards
Razer is back with a refresh of the Blade 14 laptop and it's the thinnest 14-inch model in the company's history. It measures just 15.7mm at its slimmest point and weighs just over three pounds. This makes it an ideal computer for on-the-go gaming.To that end, these laptops feature the new NVIDIA GeForce RTX 5000 series GPUs. Buyers can spec the Blade 14 out with up to the RTX 5070. This pairs with NVIDIA DLSS 4 tech to provide "the highest quality gaming experience possible in a 14-inch" laptop.They are also outfitted with the AMD Ryzen AI 9 365 processor that can achieve up to 50 TOPS. It comes with a bunch of AI applications that take advantage of that processor, like Copilot+, Recall, Live Captions and Cocreate.The Blade 14 goes up to 64GB of RAM and includes a 72 WHr battery that should last around 11 hours before requiring a charge. That's a pretty decent metric for a laptop this powerful. The 3K OLED display offers a 120Hz refresh rate and a 0.2ms response time.There's a MicroSD slot, two USB-C ports and a traditional HDMI 2.1 port. The Blade 14 supports wireless standards like Bluetooth 5.4 and Wi-Fi 7. The laptop even includes a newly designed ventilation system for better performance. The exterior is made from T6-grade aluminum and features a sand-blasted texture and an anodized matte finish.It's available for purchase right now and comes in black and gray colorways. Pricing starts at $2,300, but that one comes with just 16GB of RAM and the RTX 5060 GPU.The Blade 16 laptop is now available in a new configuration that features the RTX 5060 GPU. The company also recently revealed the biggest sibling of the bunch, the Blade 18. That one goes all the way up to the RTX 5090.This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/razers-new-blade-14-laptops-are-outfitted-with-rtx-5000-series-cards-185517669.html?src=rss
Trump will sign the Take It Down Act criminalizing AI deepfakes today
President Donald Trump is set to sign the Take It Down Act today, according to CNN. The act is a piece of bipartisan legislation that criminalizes the publication of "nonconsensual intimate visual depictions," including AI deepfakes. The law made it through the US House of Representatives in April 2025, prompting concern from free speech advocates who believe parts of the law could be easily abused to curtail speech.The Take It Down Act was created to address the spread of nonconsensual, sexually exploitative images online. Laws exist addressing the issue at the state level, and some online platforms already have methods for requesting a nonconsensual image or video be taken down. This new law would set a federal standard, though, making taking down posts mandatory and directing companies to create a system for requesting images or videos be removed, under the supervision of the Federal Trade Commission.The issue with the law as written, according to the Electronic Frontier Foundation, is that its takedown provision "applies to a much broader category of content...than the narrower NCII [nonconsensual intimate image] definitions found elsewhere in the bill." The EFF also suggests that the short 48-hour window the Take It Down Act requires means that smaller online platforms will probably just remove posts when they receive a complaint rather than verify that the post actually violates the law.Trump has expressed interest in taking advantage of the new law, as well. "I'm going to use that bill for myself, too, if you don't mind. There's nobody gets treated worse than I do online. Nobody," Trump said during a joint session of Congress in March. 
Given the lopsided composition of the current FTC and the Trump administration's already loose interpretation of existing laws, it's not hard to imagine how the original intentions of the Take It Down Act could be twisted.This article originally appeared on Engadget at https://www.engadget.com/big-tech/trump-will-sign-the-take-it-down-act-criminalizing-ai-deepfakes-today-184358916.html?src=rss
Bluesky is testing a 'live now' feature with streamers and the NBA
Bluesky doesn't have its own live streaming capabilities, but the service is testing out a new feature to boost users' streams off the platform. The company is allowing "select" accounts to link their Twitch or YouTube accounts to their profiles, which will display a red indicator and "live" badge when they're actively streaming.In an update, Bluesky described the feature as an "early test" that will initially only be available to a "handful of accounts" before it's ready for a wider launch. "Bluesky is the place for breaking news and real-time updates," the company said. "This tool supports streamers, journalists, and anyone sharing live moments as they happen."The update comes one day after the service showed off a similar badge for the NBA's official Bluesky account. The league will apparently direct fans on Bluesky to "live content they are promoting," Bluesky COO Rose Wang said. Partnering with the NBA on the feature is an interesting move for Bluesky. Sports fans, and NBA fans in particular, have had an outsized impact on Twitter's culture. And the company now known as X has inked several high-profile deals with the NBA and other major sports leagues over the years to promote their content.Notably, Bluesky doesn't have advertising. It's using the "live" indicators to direct users to off-platform content, so it's unclear if there are any business opportunities for the upstart platform that come with this feature. But it shows that Bluesky wants to play a bigger role in the kinds of conversations that once shaped Twitter's culture, and make a name for itself as a destination to follow live events.This article originally appeared on Engadget at https://www.engadget.com/social-media/bluesky-is-testing-a-live-now-feature-with-streamers-and-the-nba-174443865.html?src=rss
23andMe bought by Regeneron in court auction
It has been nearly two months since 23andMe declared bankruptcy and the company has officially been sold. The US biotech company Regeneron has agreed to buy 23andMe and all of its assets for $256 million (even though it was valued at $50 million in March). This purchase marks the end of former 23andMe CEO Anne Wojcicki's bid to buy the company, which included resigning in order to make an independent offer.According to Mark Jensen, Chair and member of the Special Committee of the Board of Directors of 23andMe, Regeneron is offering to keep all of the former company's employees. This decision "will allow us to continue our mission of helping people access, understand and gain health benefits through greater understanding of the human genome," he said in a release.The announcement also tries to emphasize data protection following 23andMe users' concerns about where their information might end up and, in some cases, deleting their data from the site. "We are pleased to have reached a transaction that maximizes the value of the business and enables the mission of 23andMe to live on, while maintaining critical protections around customer privacy, choice and consent with respect to their genetic data," said Jensen.The sentiment was echoed by its soon-to-be new owner. "Through our Regeneron Genetics Center, we have a proven track record of safeguarding personal genetic data, and we assure 23andMe customers that we will apply our high standards for safety and integrity to their data and ongoing consumer genetic services," said George D. Yancopoulos, MD, PhD, co-founder, board co-chair, president and chief scientific officer of Regeneron in a statement. "We believe we can help 23andMe deliver and build upon its mission to help people learn about their own DNA and how to improve their personal health, while furthering Regeneron's efforts to improve the health and wellness of many."How exactly 23andMe will shake out after the Regeneron purchase remains to be seen. 
The company has taken a dramatic fall in the years since going public: hackers accessed the information of 6.9 million people in 2023, and 23andMe laid off over 200 people last year.This article originally appeared on Engadget at https://www.engadget.com/big-tech/23andme-bought-by-regeneron-in-court-auction-174003286.html?src=rss
Dell stuffed an enterprise-grade NPU into its new Pro Max Plus laptop
Dell just announced the new Pro Max Plus laptop, which the company has stuffed with an enterprise-grade NPU. That makes it a top-tier choice for on-device AI applications. The Pro Max Plus features the Qualcomm AI 100 PC Inference Card, making it the "world's first mobile workstation with an enterprise-grade discrete NPU." The NPU offers 32 AI cores and 64GB of memory, enough to directly handle the kind of large AI models that would typically require the cloud to run. We don't know anything else regarding traditional specs, but we do know that this will be one of many Pro Max Plus designs. The other models won't be quite as focused on advanced AI applications. Dell says this laptop is primarily intended for "AI engineers and data scientists," and so it's held off on announcing pricing. Given the specs, it's likely to be way too expensive for traditional consumers. It's coming out later this year. The company also revealed new server designs and a new cooling system for those servers. Dell's PowerCool Enclosed Rear Door Heat Exchanger (eRDHx) is an alternative to standard rear door heat exchangers. Dell says it captures 100 percent of the heat generated via a "self-contained airflow system," and suggests it can reduce cooling energy costs by up to 60 percent compared to current alternatives.This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/dell-stuffed-an-enterprise-grade-npu-into-its-new-pro-max-plus-laptop-170043379.html?src=rss
ASUS unveiled new ROG Zephyrus G14 and G16 laptops at Computex
ASUS has revealed several new laptops at Computex, including the impressive-looking ROG Zephyrus G14. This high-spec computer looks perfect for on-the-go gaming and just about anything else. First of all, this thing is pretty small: the 14-inch screen allows it to fit in just about any to-go bag. The 16:10 panel boasts a 3K resolution, a 120Hz refresh rate, 500 nits of peak brightness and Dolby Vision integration. The bezels are slim, with an impressive 87 percent screen-to-body ratio. It's also stylish, which is expected from the Zephyrus brand. The G14 features a CNC aluminum unibody that adds to the overall durability, and it's available in two colorways: Eclipse Gray and Platinum White. As for specs, the G14 can be outfitted with up to an AMD Ryzen AI 9 HX 370 processor with 12 cores and 24 threads and an AMD XDNA NPU with up to 50 TOPS. The graphics card maxes out with the NVIDIA GeForce RTX 5080, RAM options go up to 64GB and on-board storage up to 2TB. The base model starts at $1,800, but won't be available until June 25. The other variations are already available for purchase. The ROG Zephyrus G16 is bigger, obviously, but its screen ups the refresh rate to 240Hz. These models boast the Intel Core Ultra 9 processor and up to the NVIDIA RTX 5070 GPU. The G16 shares a similar aluminum chassis with the G14 and adds a six-speaker system for increased immersion. The base model here costs $2,150 and will be available on June 25. Other versions are already available for purchase. The company has also revealed refreshes of the ROG Strix G16 and G18. These gaming laptops can be outfitted with either AMD or Intel CPUs, with support for up to 32GB of RAM. Both models include the NVIDIA GeForce RTX 5060, and all models include 1TB of on-board storage. 
The G18 starts at $1,700 and the G16 starts at $1,500.This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/asus-unveiled-new-rog-zephyrus-g14-and-g16-laptops-at-computex-163058281.html?src=rss
Sesame Street will air on Netflix and PBS simultaneously
PBS, alongside NPR, is facing an unprecedented attempt from the executive branch to cut its federal funding and potentially reduce what it can offer. But, in good news, one of its mainstays will be widely available - at least for the time being. New episodes of Sesame Street are coming to both Netflix and PBS. The 56th season of Sesame Street will be available worldwide on Netflix and on PBS in the US. New episodes will come out the same day on the streamer, PBS Stations and PBS KIDS. "This unique public-private partnership will enable us to bring our research-based curriculum to young children around the world with Netflix's global reach, while ensuring children in communities across the US continue to have free access on public television to the Sesame Street they love," Sesame Workshop stated in a release. The deal also entitles Netflix to 90 hours of previously aired Sesame Street episodes. However, the new releases should look a bit different: each episode will now feature an 11-minute story, meant to allow for a deeper dive. There will also be a new animated bit called "Tales From 123," which takes place inside the characters' apartment building. Old segments will also return, like Elmo's World and Cookie Monster's Foodie Truck.
A new Microsoft 365 Copilot app starts rolling out today
Surprising no one, Microsoft's Build 2025 conference is mostly centered around its Copilot AI. Today, the company announced that it has begun rolling out its "Wave 2 Spring release," which includes a revamped Microsoft 365 Copilot app. It's also unveiled Copilot Tuning, a "low-code" method of building AI models that work with your company's specific data and processes. The goal, it seems, isn't just to make consumers reliant on the OpenAI models that power Copilot. Instead, Microsoft is aiming to empower businesses to make tools for their own needs. (For a pricey $30 per seat subscription, on top of your existing MS 365 subscription, of course.) Microsoft claims that Copilot Tuning, which arrives in June for members of an early adopter program, could let a law firm make AI agents that "reflect its unique voice and expertise" by drafting documents and arguments automatically, without any coding. Agents built in Copilot Studio, the company's existing tool for developing AI agents, will also be able to "exchange data, collaborate on tasks, and divide their work based on each agent's expertise." Conceivably, a company could have its HR and IT agents collaborating, rather than being siloed off in their own domains. With the new Microsoft 365 Copilot app, Microsoft has centered chatting with its AI to accomplish specific tasks. The layout looks fairly simple, and it appears that you'll be able to tap into your existing agents and collaborative pages as well. As Microsoft announced in April, you'll also be able to purchase new agents in a built-in store, as well as build up Copilot Notebooks to collect your digital scraps. Like an AI version of OneNote or Evernote, Notebooks could potentially help you surface thoughts across a variety of media, and it can also produce two-person podcasts to summarize your notes. 
(It's unclear if they'll actually sound good enough to be useful, though.)This article originally appeared on Engadget at https://www.engadget.com/ai/a-new-microsoft-365-copilot-app-starts-rolling-out-today-160002322.html?src=rss
Microsoft Build 2025: How to watch and what to expect including Copilot AI, Windows 11 and more
Microsoft's annual Build developer conference kicks off today and, as always, it starts with a keynote. You can watch the opening event live starting at noon Eastern time right here, via the embedded YouTube stream below. (It's also available on Microsoft's website, though you'd have to register and sign in.) Just like last year, the event will be hosted by Microsoft CEO Satya Nadella, along with the company's Chief Technology Officer, Kevin Scott. According to the keynote page, the executives will share how "Microsoft is creating new opportunity across [its] platforms in this era of AI." The company has been introducing new AI features at Build over the last few years, even as its close relationship with OpenAI continues to evolve. We expect Microsoft to add more AI agents to Windows 11 to automate more tasks for you on the operating system. It could also give us an in-depth look at Copilot Vision, a feature that allows the AI assistant to see what you're doing on your computer so it can talk you through various tasks. Microsoft likely won't be announcing new hardware at the event, however, seeing as it only recently launched a 12-inch Surface Pro tablet and a 13-inch Surface Laptop. The Build conference will also have a day 2 keynote streamed live, which is scheduled to feature Scott Guthrie, Jay Parikh, Charles Lamanna and other key Microsoft executives, according to the summary on the event's YouTube page. Microsoft's Build conference will take place from May 19 to May 22. Two other tech events are also taking place around that time: Google's I/O conference from May 20 to 21 and the Computex computer expo in Taiwan from May 20 to 23. Update, May 19, 11:21AM ET: Updated to include link and basic information for the day 2 keynote.This article originally appeared on Engadget at https://www.engadget.com/big-tech/microsoft-build-2025-how-to-watch-and-what-to-expect-including-copilot-ai-windows-11-and-more-025928415.html?src=rss
NVIDIA and Foxconn are building an ‘AI factory supercomputer’ in Taiwan
NVIDIA and Foxconn have teamed up to build what they are calling an AI factory supercomputer in Taiwan. The project, which NVIDIA announced at Computex, will "deliver state-of-the-art NVIDIA Blackwell infrastructure to researchers, startups and industries," according to the company. NVIDIA is building a new local headquarters in Taiwan as well. The supercomputer will be powered by 10,000 NVIDIA Blackwell GPUs. NVIDIA says the project will greatly increase the availability of AI computing and bolster local researchers and businesses. As it happens, the Taiwan National Science and Technology Council is investing in the project. It will offer the supercomputer's AI cloud computing resources to those in its tech ecosystem. "Our plan is to create an AI-focused industrial ecosystem in southern Taiwan," Minister Wu Cheng-Wen of the council said in a statement. "We are focused on investing in innovative research, developing a strong AI industry and encouraging the everyday use of AI tools. Our ultimate goal is to create a smart AI island filled with smart cities, and we look forward to collaborating with NVIDIA and [Foxconn] to make this vision a reality." Foxconn, which is providing the supercomputer's AI infrastructure through its Big Innovation Company subsidiary, will also use the system to further its work in smart cities, electric vehicles and manufacturing. For instance, it aims to optimize connected transportation systems and other "civil resources" in smart cities, and develop advanced driver-assistance and safety systems. TSMC is looking to benefit from the project as well. The company's researchers will tap into the supercomputer's power in the hope of accelerating their R&D work. NVIDIA made the announcement on the same day that it released its GeForce RTX 5060 GPU. 
We gave the RTX 5060 Ti a score of 85 in our review.This article originally appeared on Engadget at https://www.engadget.com/ai/nvidia-and-foxconn-are-building-an-ai-factory-supercomputer-in-taiwan-145535818.html?src=rss
Get 66 percent off a two-year subscription to ProtonVPN
ProtonVPN subscriptions are available at a steep discount right now as part of an exclusive sale for Engadget readers. A 12-month subscription is down to $48, which is a discount of around $72 and works out to $4 per month. A 24-month plan now costs just $81 - a massive discount of $158 that works out to about $3.38 per month. Proton topped our list of the best VPN services, and with good reason. It's incredibly powerful yet easy to use, which is a boon for those new to the space. The end-to-end encryption is solid, and everything's based on an open-source framework, which allows the company to offer an official vulnerability disclosure program. A subscription includes IP masking, so websites can't track you online, as well as a built-in ad blocker. We found in our tests that browsing the web and watching streaming content were both speedy while using this VPN, which isn't always the case with this type of service. The only caveat? The company will automatically bill you at the normal price when the discounted subscription runs out. Be sure to cancel before that if you aren't vibing with the platform. Follow @EngadgetDeals on X for the latest tech deals and buying advice.This article originally appeared on Engadget at https://www.engadget.com/deals/get-66-percent-off-a-two-year-subscription-to-protonvpn-191045544.html?src=rss
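The per-month figures above are straightforward division; a quick sanity check of the deal math (sale prices are from the deal text, and the "regular" prices are inferred from the stated discounts):

```python
# Check the quoted ProtonVPN deal math.
# Sale prices come from the deal text; regular prices are inferred
# from the stated discounts ($48 + $72, $81 + $158).
plans = {
    "12-month": {"price": 48, "months": 12, "regular": 48 + 72},
    "24-month": {"price": 81, "months": 24, "regular": 81 + 158},
}

for name, p in plans.items():
    per_month = p["price"] / p["months"]
    pct_off = 100 * (1 - p["price"] / p["regular"])
    print(f"{name}: ${per_month:.2f}/mo, {pct_off:.0f}% off")
```

The two-year plan comes out to roughly $3.38 per month and about 66 percent off the regular price, matching the headline discount.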
LG 27 UltraGear OLED review: I finally get the 480Hz gaming hype
LG's 27-inch 1440p UltraGear OLED monitor (model 27GX790A) is as close to gaming nirvana as fps-hungry players can get - for now, anyway. It has a 480Hz refresh rate, allowing it to actually display up to 480 fps for insanely fast-paced shooters, along with a low 0.03ms response time. And it supports DisplayPort 2.1, which offers higher bandwidth than typical DisplayPort 1.4 ports, so it doesn't need to use Display Stream Compression (DSC) like most other gaming displays. Together with NVIDIA G-Sync and AMD FreeSync Premium Pro technology, both of which will help to reduce screen tearing, the UltraGear 27 has pretty much everything you'd want in a high-end gaming display. But given its high $1,000 retail price (though it's currently on sale for $800), the UltraGear 27 clearly isn't meant for most people. You'll absolutely need a powerful GPU and CPU to get close to seeing 480 fps in 1440p. And, let's be honest, very few people will even see the difference between 480Hz and more affordable 120Hz to 240Hz screens. The LG 27GX790A is for the true sickos. I've tested plenty of high refresh rate screens in my time, from gaming laptops to a wide variety of monitors. I distinctly remember the excitement around 120Hz LCDs at CES 2010, and I definitely noticed the difference between those screens and standard 60Hz displays at the time. Shooters just looked smoother and felt more responsive. Then there was the leap to 240Hz screens, which was noticeable but not nearly as impressive as the arrival of OLED gaming displays with better black levels and astounding contrast. Then came 360Hz screens, which, to be honest, didn't feel like a huge leap over 240Hz. Our eyes can only see so much after all, especially if you're moving beyond your peak gaming years. So I didn't really expect to be wowed by the UltraGear 27 - I figured it would be yet another solid OLED monitor, like the 27-inch 4K Alienware we recently reviewed. 
But after spending plenty of time with the UltraGear 27 on my gaming PC, powered by an NVIDIA RTX 5090 and AMD's Ryzen 9 9950X3D, I noticed something strange. While I couldn't really see a major difference between its 480Hz screen and my daily driver, the 240Hz Alienware 32-inch QD-OLED, I could feel it. Devindra Hardawar for Engadget What's good about the LG UltraGear 27? The first time the UltraGear 27 truly clicked for me - the point where I finally understood the hype around 480Hz displays - was during a Rocket League match. I noticed that the longer I played, the more I reached a flow state where I could easily read the position of the ball, re-orient the camera and zip off to intercept. It almost felt like there was a direct connection between what my brain wanted to do and what was actually happening on the screen. I forgot about the Xbox Elite controller in my hand and the desk clutter in my office. The real world melted away - I was fully inside Rocket League's absurd soccer arena. When the match ended, it took me a few minutes to reacclimate to reality. Rocket League's fast motion and lack of downtime made it the ideal introduction to super-high frame rates. I was also easily able to reach 480 fps in 1440p with my system's hardware, but you'll still be able to see upwards of 300 fps with older GPUs, especially if you bump down to 1080p. To be clear, this monitor is pretty much wasted on older and budget video cards. I noticed a similarly transcendent flow state as I got back into Overwatch 2, a game I gave up on years ago. The UltraGear 27 shined best when I was playing fast-paced characters like Tracer, Genji and Lucio, since I had a better sense of space during heated matches. But it also helped with more accurate shots when sniping with the likes of Hanzo and Widowmaker. Beyond the seemingly metaphysical benefits of its 480Hz screen, the UltraGear 27 is also simply a great OLED monitor. 
Black levels are wonderfully dark, and it can also achieve slightly brighter highlights (up to 1,300 nits) than most OLEDs in small areas. Graphically rich games like Clair Obscur: Expedition 33 practically leap off the screen, thanks to its excellent 98.5 percent DCI-P3 color coverage. The UltraGear 27 doesn't use a QD-OLED screen like Alienware's latest models, but its color performance doesn't suffer much for it. If you've got a PlayStation 5 or Xbox Series X around, the UltraGear 27's two HDMI 2.1 ports will also let them perform at their best. While there are no built-in speakers, the display does include a headphone jack with support for DTS spatial audio, like most gaming monitors. It's also a 4-pole connection, so you can plug in headphones with microphones as well. For accessories, there are two USB 3.0 Type-A ports, along with an upstream USB connection for your PC. The UltraGear 27 doesn't look particularly distinctive when it's turned off, but it's hard to ask for much flair when it does so much right. Its nearly borderless bezel makes the screen practically float in the air, and you can also easily adjust its height and angle to suit your needs. What's bad about the UltraGear 27? The biggest downside with the UltraGear 27 is its $1,000 retail price. While it's nice to see it already falling to $800, it's still absurdly high compared to most 27-inch 1440p monitors. If you want to save some cash, LG's 27-inch 240Hz UltraGear is still a very good option. But if you're in the market for a 480Hz display, you'll basically have to live with paying a ton. For example, ASUS's ROG Swift 27-inch OLED is still selling for $1,000. Should you buy the UltraGear 27? If you're an esports player, or a gamer who demands the highest frame rates no matter the cost, the UltraGear 27 is an excellent OLED monitor. But I think most players would be perfectly fine with a cheaper 240Hz screen. 
Even if you can easily afford the UltraGear 27, it's also worth considering larger screens like the Alienware 32-inch 4K QD-OLED. You'll still get decently high frame rates, but you'll also get a screen that's more immersive for ogling the graphics in Clair Obscur. Wrap-up With Samsung teasing a 500Hz OLED gaming screen, there's clearly still a demand for insanely high refresh rates. If you absolutely must have that fix, the UltraGear 27 was made for you. It has all of the benefits of OLED, and with the right title, it might help you achieve a new level of gaming transcendence.This article originally appeared on Engadget at https://www.engadget.com/gaming/lg-27-ultragear-oled-review-i-finally-get-the-480hz-gaming-hype-123042162.html?src=rss
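The review's DisplayPort point is easy to sanity-check with arithmetic: at 1440p, 480Hz and 10-bit color, the raw video payload exceeds what DisplayPort 1.4 can carry uncompressed but fits within DP 2.1's fastest mode. A rough sketch (it ignores blanking overhead, and the link-rate figures are the commonly cited effective payloads for DP 1.4 HBR3 and DP 2.1 UHBR20):

```python
# Rough check: can 1440p @ 480Hz run uncompressed, i.e. without
# Display Stream Compression (DSC)?
def required_gbps(width, height, refresh_hz, bits_per_pixel=30):
    """Minimum video payload in Gbit/s for an uncompressed signal."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

needed = required_gbps(2560, 1440, 480)  # 10-bit RGB = 30 bits per pixel
DP14_HBR3_PAYLOAD = 25.92    # Gbit/s after 8b/10b line coding
DP21_UHBR20_PAYLOAD = 77.37  # Gbit/s after 128b/132b line coding

print(f"needed: {needed:.1f} Gbit/s")                                    # ~53.1
print("fits DP 1.4 without DSC:", needed <= DP14_HBR3_PAYLOAD)           # False
print("fits DP 2.1 UHBR20 without DSC:", needed <= DP21_UHBR20_PAYLOAD)  # True
```

Roughly 53 Gbit/s of video is about double what DP 1.4 can deliver, which is why 480Hz panels on older connections depend on DSC.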
Samsung's 2025 OLED TVs are getting NVIDIA G-Sync compatibility
NVIDIA's G-Sync will soon work with the latest Samsung OLED TVs for a better gaming experience on the big screen. The S95F series TVs, which the company introduced at CES, will be the first to get the update, and the rest of the 2025 OLED models will follow later this year. G-Sync compatibility is meant to help games run more smoothly on the TVs, making their refresh rates match the GPU's frame rate. In the announcement, Kevin Lee, Executive VP of Samsung's Visual Display Customer Experience Team, said it'll bring "elite-level performance for even the most competitive players." Samsung started shipping its flagship S95F TVs in April alongside its other new OLED models, the S90F and S85F. Each comes in a handful of sizes, going up to 83 inches. The OLED lineup also offers AMD FreeSync Premium Pro support, Auto Low Latency Mode and AI Auto Game Mode, which is designed to tweak the picture and sound to best fit whatever game you're playing. The announcement comes as Computex 2025 gets underway in Taiwan. The expo runs from May 20-23, and will focus heavily on AI this year.This article originally appeared on Engadget at https://www.engadget.com/gaming/samsungs-2025-oled-tvs-are-getting-nvidia-g-sync-compatibility-120033237.html?src=rss
The best smart speakers for 2025
Smart speakers have come a long way from being simple housings for voice assistants. Today's best smart speakers can tee up your favorite playlists, control your smart home devices, answer questions, set reminders and even act as intercoms around the house. Whether you're deep into the Alexa, Google Assistant or Siri ecosystem, there's a speaker out there that fits your setup and lifestyle.
HP is bringing Snapdragon chips to its more affordable laptops
HP is giving a much-needed power-up to its OmniBook 5 Series laptop lineup. As part of Computex 2025, HP debuted the laptops equipped with Qualcomm's Snapdragon X and X Plus chips for better energy efficiency and performance. The 14-inch version of the new OmniBook 5 Series starts at $799, while the 16-inch variation costs at least $849. Previously, if you wanted the benefits of a Snapdragon chip in an HP laptop, you were stuck with the more expensive OmniBook X options that retail for at least $1,000. Now, there are more affordable options from HP that still have the benefits of ARM processors. The OmniBook 5 series may not be as powerful as the OmniBook X, but it still gets access to Copilot+ AI features, like Recall, Click-to-Do and an improved Windows Search experience. All that comes in a laptop with a 2K OLED display that gets up to 34 hours of battery life and recharges up to 50 percent in 30 minutes. If the laptop's built-in display isn't enough, the OmniBook 5 can hook up to a single external 5K monitor or two 4K displays. For all your virtual meeting needs, HP's newest laptops have a 1080p IR camera that's paired with the company's Audio Boost 2.0 feature to offer better sound quality and AI-powered noise removal. HP said the OmniBook 5 with Snapdragon in its 14-inch configuration will be available first from Amazon and Microsoft, starting in June. In July, the 14-inch OmniBook 5 will make its way to HP's own site, Best Buy and Costco. The 16-inch version will also be available in July through HP directly and its retail partners.This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/hp-is-bringing-snapdragon-chips-to-its-more-affordable-laptops-060019642.html?src=rss