by Lawrence Bonk on (#6XEP9)
The upcoming Switch 2 launch title Mario Kart World was originally intended for the OG Switch console, according to an interview with the game's developers. That was the goal until the dev team realized the console couldn't handle it. "It was difficult for us to incorporate everything we wanted, so we were always conscious of what we were giving up in return," said programming director Kenta Sato.

A big sticking point seemed to be that the original Switch would have had to run the game at 30FPS. Mario Kart games have always run at 60FPS, for obvious reasons. You can't simulate speed without, uh, simulating speed.

Developers pecked away at the "tough situation" until finally deciding to create more DLC for Mario Kart 8 Deluxe as a way to buy time while the team figured out what to do. "As we'd decided to release Mario Kart 8 Deluxe - Booster Course Pass, we thought that would give us a bit more time to continue development," said producer Kosuke Yabuki. "That's when the conversation of moving it to the Nintendo Switch 2 system came up, and this suddenly opened up a bunch of possibilities on what we could do. It was truly a ray of hope."

The interview also revealed that the game has been in development since 2017, which is a heck of a long time. However, it makes a certain amount of sense given that the original Mario Kart 8 came out in 2014.

The upcoming racer was always set in an open world, and it was never going to be called Mario Kart 9. The developers wanted to "take the series to the next level," and the big, connected world seems to do just that. "I felt that in Mario Kart 8 Deluxe, we were able to perfect the formula that we'd been following in the series up to that point, where players race on individual courses," Yabuki continued. "That's why, this time, we wanted the gameplay to involve players driving around a large world, and we began creating a world map like this."

I personally think the company made the right call by delaying this game until the Switch 2. Recent first-party Switch titles have experienced massive framerate issues, and there's no way the console could have handled races with 24 participants. In any event, we only have a couple of weeks until we get to play Mario Kart World, if you've successfully reserved a Switch 2 for the June 5 launch.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/mario-kart-world-was-initially-planned-for-the-original-switch-174704456.html?src=rss
Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics.
Link: https://www.engadget.com/
Feed: https://www.engadget.com/rss.xml
Copyright: Yahoo 2025
Updated: 2025-06-19 21:17
by Igor Bonifacic on (#6XEPA)
OpenAI is buying Jony Ive's startup, io, for $6.5 billion, as first reported by The New York Times. The company confirmed the news in a blog post on its website headlined by the photo you see above, which is apparently real and not AI generated. As part of the deal, Ive and his design studio, LoveFrom, will continue to work independently of OpenAI. However, Scott Cannon, Evans Hankey and Tang Tan, who co-founded io with Ive, will become OpenAI employees, alongside about 50 other engineers, designers and researchers. In collaboration with OpenAI's existing teams, they'll work on hardware that allows people to interact with OpenAI's technologies.

OpenAI has not disclosed whether the deal would be paid for in cash or stock. Per the Wall Street Journal, it's an all-equity deal. OpenAI has yet to turn a profit. Moreover, according to reporting from The Information, OpenAI agreed to share 20 percent of its revenue with Microsoft until 2030 in return for the more than $13 billion the tech giant has invested into it. When asked how it would finance the acquisition, OpenAI CEO Sam Altman told The Times the press worries about OpenAI's funding and revenue more than the company itself. "We'll be fine," he said. "Thanks for the concern." The deal is still subject to regulatory approval.

In an interview with The Times, Altman and Ive, best known for his design work on the iPhone, said the goal of the partnership is to create "amazing products that elevate humanity." Before today, Altman was an investor in Humane, the startup behind the failed Humane AI Pin. HP bought the company earlier this year for $116 million, far less than the $1 billion Humane had reportedly sought before the sale.

"The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco," OpenAI writes of the acquisition on its website. "As io merges with OpenAI, Jony and LoveFrom will assume deep design and creative responsibilities across OpenAI and io."

According to The Times, OpenAI already had a 23 percent stake in io following an agreement the two companies made at the end of 2024, so it is now paying approximately $5 billion to take full control of the startup. Whether this points towards physical OpenAI devices on the horizon, and if so what form they take, remains unclear. The description for the YouTube video you see above says, "Building a family of AI products for everyone." Whatever comes out of the acquisition could take years to hit the market, and some of what Ive and his team do may never see the light of day.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-buys-jony-ives-design-startup-for-65-billion-173356962.html?src=rss
by Will Shanklin on (#6XEJT)
Sony is winding down its PlayStation Stars loyalty program. Starting today, you can no longer sign up for the program, and if you're a member and cancel your membership, you won't be able to sign up again.

Current members can still earn points and digital collectibles and level up their status until July 23 at 9:59 PM ET. After that, all campaigns and rewards will be kaput. The program will be entirely discontinued on November 2. But if you keep your membership until then, you can still redeem your points after that date, provided they haven't expired.

Sony launched PlayStation Stars in 2022. The company's first loyalty program lets you earn points by playing games and making purchases on the PlayStation Store, and you can redeem points for items like PSN wallet funds and select store products.

The company will now "refocus" its approach to rewards. (How, we don't know.) "We want to thank all of our players for supporting PlayStation Stars since the launch in 2022," Sony wrote on the PlayStation Blog. "As we explore new ways to evolve our loyalty program efforts for the future, we'll continue to celebrate all of our players through the various community activities we have planned."

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/sony-is-ending-its-playstation-stars-loyalty-program-164514310.html?src=rss
by Lawrence Bonk on (#6XEJV)
Senua's Saga: Hellblade 2 is coming to PS5 this summer, though we don't have a concrete release date just yet. Ninja Theory says it will be optimized for both the standard-issue PS5 and the beefier PS5 Pro.

The company also says the port launches alongside a free update that'll be available for all platforms, including PS5, Xbox Series X/S and PC. We don't know much about this update, but the developer promises "new features." There's a trailer that discusses the update, but it's also devoid of any real details.

This is something of a homecoming for the franchise, as Hellblade: Senua's Sacrifice was originally released for PS4 back in 2017. Ninja Theory says it's "excited to bring Senua back to where her journey began and for PlayStation players to be able to experience the next chapter in her brutal saga of survival."

For the uninitiated, Senua's Saga: Hellblade 2 is a brutal and gorgeous game that first hit Xbox and PC in 2024. It's a third-person adventure set in Iceland during the 10th century. We called it an outstanding "interactive brutality visualizer" in our original review, going on to say it features "an extended, extremely anxious and violent vibe." Good times!

The combat is decent, though not groundbreaking, and the puzzles are just average. The sheer violence, however, is epic. The protagonist Senua screams with each swing of the sword, and every fight is close combat. This is for those who revel in simulated physical violence. We'll let you know when we have an actual summer release date for this gem.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/senuas-saga-hellblade-2-is-coming-to-ps5-this-summer-163539229.html?src=rss
by Kris Holt on (#6XDR3)
Today is one of the most important days on the tech calendar, as Google kicked off its I/O developer event with its annual keynote. As ever, the company had many updates for a wide range of products to talk about.

The bulk of the Android news was revealed last week, during a special edition of The Android Show. However, Tuesday's keynote still included a ton of stuff including, of course, a pile of AI-related news. We covered the event in real time in our live blog, which includes expert commentary (and even some jokes!) from our team.

If you're on the hunt for a breakdown of everything Google announced at the I/O keynote, though, look no further. Here are all the juicy details worth knowing about:

AI Mode chatbot is coming to Search for all US users

Quelle surprise, Google is continuing to shove more generative AI features into its core products. AI Mode, which is what the company is calling a new chatbot, will soon be live in Search for all US users.

AI Mode lives in a separate tab and is designed to handle more complex queries than people have historically used Search for. You might use it to compare different fitness trackers or find the most affordable tickets for an upcoming event. AI Mode will soon be able to whip up custom charts and graphics related to your specific queries too. It can also handle follow-up questions.

The chatbot now runs on Gemini 2.5. Google plans to bring some of its features into the core Search experience by injecting them into AI Overviews. Labs users will be the first to get access to the new features before Google rolls them out more broadly.

Meanwhile, AI Mode is powering some new shopping features. You'll soon be able to upload a single picture of yourself to see what a piece of clothing might look like on a virtual version of you.

Also, similar to the way in which Google Flights keeps an eye out for price drops, Google will be able to let you know when an item you want (in its specific size and color) is on sale for a price you're willing to pay. It can even complete the purchase on your behalf if you want.

1.5 billion people see AI Overviews each month

AI Overviews, the Gemini-powered summaries that appear at the top of search results and have been buggy to say the least, are seen by more than 1.5 billion folks every month, according to Google. The "overwhelming majority" of people interact with these in a meaningful way, the company said - this could mean clicking on something in an overview or keeping it on their screen for a while (presumably to read through it).

Still, not everyone likes the AI Overviews and would rather just have a list of links to the information they're looking for. You know, like Search used to be. As it happens, there are some easy ways to declutter the results.

Another look at Google's universal AI assistant

We got our first peek at Project Astra, Google's vision for a universal AI assistant, at I/O last year, and the company provided more details this time around. A demo showed Astra carrying out a number of actions to help fix a mountain bike, including diving into your emails to find out the bike's specs, researching information on the web and calling a local shop to ask about a replacement part.

It already feels like a culmination of Google's work in the AI assistant and agent space, though elements of Astra (such as granting it access to Gmail) might feel too intrusive for some. In any case, Google aims to transform Gemini into a universal AI assistant that can handle everyday tasks. The Astra demo is our clearest look yet at what that might look like in action.
NotebookLM mobile app

On the NotebookLM front, Google has released an iOS and Android app for the tool. The company also took the opportunity at I/O to show off what NotebookLM can do.

Google put together a notebook featuring the I/O keynote video from YouTube as well as associated blog posts, press releases and product demos. You can drill down into all of this information or just ask the AI questions about I/O. Of course, you'll be able to generate audio summaries as well as a mind map to structure all the info that's in the notebook.

Other AI updates

Gemini 2.5 is here with (according to Google) improved functionality, upgraded security and transparency, extra control and better cost efficiency. Gemini 2.5 Pro is bolstered by a new enhanced reasoning mode called Deep Think. The model can do things like turn a grid of photos into a 3D sphere of pictures, then add narration for each image. Gemini 2.5's text-to-speech feature can also change up languages on the fly. There's much more to it than that, of course, and we've got more details in our Gemini 2.5 story.

You know those smart replies in Gmail that let you quickly respond to an email with an acknowledgement? Google is now going to offer personalized versions of those so that they better match your writing style. For this to work, Gemini looks at your emails and Drive documents, though it will need your permission before it plunders your personal information. Subscribers will be able to use this feature in Gmail starting this summer.

Google Meet is getting a real-time translation option, which should come in very useful for some folks. A demo showed Meet being able to match the speaker's tone and cadence while translating from Spanish to English. Subscribers on the Google AI Pro and Ultra (more on that momentarily) plans will be able to try out real-time translations between Spanish and English in beta starting this week. This feature will soon be available for other languages.

Gemini Live, a tool Google brought to Pixel phones last month, is coming to all compatible Android and iOS devices in the Gemini app (which already has more than 400 million monthly active users). This allows you to ask Gemini questions about screenshots, as well as live video that your phone's camera is capturing. Google is rolling out Gemini Live to the Gemini iOS and Android app starting today.

Google Search Live is a similar-sounding feature. You'll be able to have a "conversation" with Search about what your phone's camera can see. This will be accessible through Google Lens and AI Mode.

A new filmmaking app called Flow, which builds on VideoFX, includes features such as camera movement and perspective controls; options to edit and extend existing shots; and a way to fold AI video content generated with Google's Veo model into projects. Flow is available to Google AI Pro and Ultra subscribers in the US starting today. Google will expand availability to other markets soon.

Speaking of Veo, that's getting an update. The latest version, Veo 3, is the first iteration that can generate videos with sound (it probably can't add any soul or actual meaning to the footage, though). The company also suggests that its Imagen 4 model is better at generating photorealistic images and handling fine details like fabrics and fur than earlier versions.
Handily, Google has a tool it designed to help you determine if a piece of content was generated using its AI tools. It's called SynthID Detector - naturally, it's named after the tool that applies digital watermarks to AI-generated material. According to Google, SynthID Detector can scan an image, piece of audio, video or text for the SynthID watermark and let you know which parts are likely to have a watermark. Early testers will be able to try this out starting today. Google has opened up a waitlist for researchers and media professionals. (Gen AI companies should offer educators a version of this tech ASAP.)

The new AI Ultra plan costs $250 per month

To get access to all of its AI features, Google wants you to pay 250 American dollars every month for its new AI Ultra plan. There's really no other way to react to this other than "LOL. LMAO." I rarely use either of those acronyms, which highlights just how absurd this is. What are we even doing here? That's obscenely expensive.

Anyway, this plan includes early access to the company's latest tools and unlimited use of features that are costly for Google to run, such as Deep Research. It comes with 30TB of storage across Google Photos, Drive and Gmail. You'll get YouTube Premium as well - arguably the Google product that's most worth paying for.

Google is offering new subscribers 50 percent off an AI Ultra subscription for the first three months. Woohoo. In addition, the AI Premium plan is now known as Google AI Pro.

A second Android XR device has been announced

As promised during last week's edition of The Android Show, Google offered another look at Android XR. This is the platform that the company is working on in the hope of doing for augmented reality, mixed reality and virtual reality what Android did for smartphones. After the company's previous efforts in those spaces, it's now playing catchup to the likes of Meta and Apple.

The initial Android XR demo at I/O didn't offer much to get too excited about for now. It showed off features like a mini Google Map that you can access on a built-in display and a way to view 360-degree immersive videos. We're still waiting for actual hardware that can run this stuff.

As it happens, Google revealed the second Android XR device: Xreal is working on Project Aura, a pair of tethered smart glasses. We'll have to wait a bit longer for more details on Google's own Android XR headset, which it's collaborating with Samsung on. That's slated to arrive later this year.

A second demo of Android XR was much more interesting. Google showed off a live translation feature for Android XR with a smart glasses prototype that the company built with Samsung. That seems genuinely useful, as do many of the accessibility-minded applications of AI. Gentle Monster and Warby Parker are making smart glasses with Android XR too. Just don't call it Google Glass (or do, I'm not your dad).

Chrome's password manager is getting an upgrade

Google is giving the Chrome password manager a very useful weapon against hackers. It will be able to automatically change passwords on accounts that have been compromised in data breaches. So if a website, app or company is infiltrated, user data is leaked and Google detects the breach, the password manager will let you generate a new password and update a compatible account with a single click.

The main sticking point here is that it only works with websites that are participating in the program, and Google's working with developers to add support for this feature. Still, making it easier for people to lock down their accounts is a definite plus.
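A building block for exactly this kind of flow already exists: the W3C's well-known change-password URL, which lets a password manager deep-link users straight to a site's password-change form. Whether Chrome's new automated feature builds on this exact mechanism is an assumption on our part - Google hasn't published the details - but here's a minimal sketch of the site-side convention in Python:

```python
# Minimal sketch: a site advertising the W3C well-known change-password URL.
# Password managers can request /.well-known/change-password and get
# redirected to the real form. (Illustrative only; whether Chrome's new
# automated password-change flow uses this mechanism is an assumption.)
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/change-password":
            self.send_response(302)  # redirect to the actual change form
            self.send_header("Location", "/settings/password")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```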
(And you should absolutely be using a password manager if you aren't already.)

On the subject of Chrome, Google is stuffing Gemini into the browser as well. The AI assistant will be able to answer questions about the tabs you have open, and you'll be able to access it from the taskbar and a new menu at the top of the browser window.

Beam is the new name of Google's 3D video conferencing booths

It's been a few years since we first heard about Project Starline, a 3D video conferencing project. We tried this tech out at I/O 2023 and found it to be an enjoyable experience.

Now, Google is starting to sell this tech, but only to enterprise customers (i.e. big companies) for now. It's got a new name for all of this too: Google Beam. And it's probably not going to be cheap. HP will reveal more details in a few weeks.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-io-2025-recap-ai-updates-android-xr-google-beam-and-everything-else-announced-at-the-annual-keynote-175900229.html?src=rss
by Cheyenne MacDonald on (#6XEJX)
You've got to hand it to the Tamagotchi team for continuing to find new ways to spin a toy that is now pushing 30 years old. We've seen a Tamagotchi with a built-in camera, a Tamagotchi watch with a touchscreen so you can pet your virtual pet and another one with its own Tamaverse. Sometimes these experiments don't work out as well as we'd like them to - the flat buttons introduced with Tamagotchi Pix were kind of terrible in practice - but they keep the franchise feeling alive. And alive seems like the best way to describe the newest member of the Tamagotchi family: Tamagotchi Paradise looks like it's absolutely bustling with life.

Bandai first teased the upcoming Tamagotchi Paradise in a comic for Free Comic Book Day at the beginning of May, but it's now official: we're getting a Tamagotchi that's equipped with a zoom dial feature to observe the critters up close (like, even down to the cellular level) and from afar. It'll bring back gene-mixing, too, meaning you'll be able to create unique characters through breeding. Tamagotchi Paradise will also be able to physically connect to other devices with a docking port on the top of the egg.

There is a lot of information to unpack in the Tamagotchi Paradise announcement. For one, instead of starting off by hatching a Tamagotchi character from an egg, players will hatch an entire planet in an Egg Bang (get it?) event. You'll be able to view the planet from space and zoom in to observe what's going on down at the surface. Your mission is to "enrich your planet and make its Tamagotchi population flourish." The device will come in three shell designs - Pink Land, Blue Water and Purple Sky - and whichever shell you have will determine which location you start in. It appears that you'll be able to unlock all three areas eventually no matter what device you have.

As always, you'll have to raise Tamagotchi characters from babies to adults and do all the usual caretaking tasks, like feeding them and cleaning up poop. But for once, you'll be able to put all that poop to good use by turning it into biofuel for space travel. When a Tamagotchi gets sick, you'll use the dial to "zoom in and treat them at the cellular level." There are a total of 25 different care menus according to Bandai, including shops and mini-games.

Tamagotchi Paradise introduces a ton of new, more animal-like characters than we've been seeing in recent years, and they're really cute. (Don't worry, Mametchi, Mimitchi and a few other existing favorites will still be there too.) There are also three secret characters that haven't yet been revealed.

It looks like it's packed with activities, which would be really nice coming off of the Tamagotchi Uni, a device I've loved in the two years since it was released but that I can't help feeling is a bit boring compared to others. Tamagotchi Paradise goes back to AAA batteries, which should be good for longevity. And it'll be cheaper than other recent flagship Tamagotchis, at $45. Pre-orders haven't opened in the US just yet, but the device will ship on July 12 according to the Japanese Amazon listing. The wait might actually kill me.
Once Tamagotchi Paradise arrives, there will be pop-up Tamagotchi Labs in some as-yet-unannounced stores where you'll be able to connect your device to access exclusive items and experiences. Tamagotchi Uni owners will be able to get a taste of all this ahead of the release as well if they buy the Tamagotchi Lab Tamaverse ticket, which comes out on July 3.

This article originally appeared on Engadget at https://www.engadget.com/gaming/tamagotchi-paradise-looks-like-the-most-exciting-virtual-pet-toy-in-years-155010892.html?src=rss
by Anna Washenko, Will Shanklin on (#6XADH)
Coinbase has been betrayed from within. The cryptocurrency exchange said that cybercriminals bribed some of its support agents to share personal information about Coinbase customers. Attackers acquired data such as names, addresses, emails, phone numbers, images of government IDs, masked bank account numbers and masked sections of Social Security numbers. The perpetrators tricked some Coinbase users into sending them money and also demanded $20 million from the company to not publicly disclose the ill-gotten information.

Coinbase has not paid the ransom and is cooperating with law enforcement to press charges. In a blog post, the company said it would instead offer a $20 million reward for information that could lead to arresting and convicting the remaining attackers.

A filing with the Maine Attorney General (via TechCrunch) says the breach affected 69,461 customers. The hack began on December 26, 2024, and ran until May 11.

Coinbase said that users' login credentials, two-factor authentication codes and private keys are still secure. It will reimburse customers who sent funds to the extortionists and will place additional safeguards on vulnerable accounts. According to an SEC filing, the incident is projected to cost Coinbase $180 million to $400 million.

Update, May 21, 2025, 11:23 PM ET: This story has been updated with new info from the Maine Attorney General filing.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/extortionists-bribed-coinbase-employees-to-give-them-customer-data-174713732.html?src=rss
by Lawrence Bonk on (#6XEJZ)
This is a big month for Xbox Game Pass, as there are some real standout titles hitting the service. Upcoming games include the sublime Metaphor: ReFantazio, Tales of Kenzera: Zau and The Division 2, among others.

Let's get to the games. You likely heard a whole lot about Metaphor: ReFantazio last year. The JRPG was a bona fide phenomenon, and it grabbed a nomination for game of the year. It also easily made our list of the best games of 2024. It's developed by Atlus, and it improves on the formula behind the Persona and Shin Megami Tensei franchises in nearly every way.

The characters are great. The dungeons aren't procedurally generated. The world feels alive, with quests and objectives in nearly every nook and cranny. The story is perhaps the biggest reason why the game became such a sensation. It's grounded and feels like it was plucked from today's news, despite being set in a fantasy-laden kingdom. It'll be playable on May 29.

Tales of Kenzera: Zau is a Metroidvania platformer that wears its heart on its sleeve. The story is extremely emotional and engaging, particularly for this genre. The graphics are lovely and the gameplay is fluid, with plenty of nifty upgrades as you advance. What's not to like? The game arrives on May 22.

Tom Clancy's The Division 2 is something of a hybrid, with plenty of both tactical shooter and RPG mechanics. The game is set in an open world version of Washington DC, which is a pretty cool location. It's online-only, so there's a deep emphasis on multiplayer. It can be played solo, but you have to be connected to the game's servers. It'll be available on May 27.

Other forthcoming games include Spray Paint Simulator (May 29) and Stalker 2 (May 22). The deckbuilding roguelike Monster Train 2 is available right now.

This article originally appeared on Engadget at https://www.engadget.com/gaming/mays-game-pass-additions-include-the-brilliant-metaphor-refantazio-and-the-division-2-151424740.html?src=rss
by Mariella Moon on (#6XECZ)
Oura has rolled out activity updates for Gen3 and Ring 4 users, including a new trend view for active minutes so that they can get a better look at how active they are for the day, the week or even the whole month. Users will also be able to add their max heart rate to the activity settings, and Oura will adjust heart rate zones accordingly. Oura now allows users to manually add or edit activities for the past seven days, instead of just for the current day, and it now displays heart rate data from activities imported from partner integrations via Apple HealthKit and Android's Health Connect. Its Automatic Activity Detection feature has also been updated to track movement around the clock, even for activities between midnight and 4AM.

In addition to those new features, Oura has upgraded its system to count steps more accurately. The company uses an advanced machine-learning model to determine whether a movement is an actual step, and it says the technology slashes average daily step count error by 61 percent. (A rough sketch of the kind of naive baseline such a model improves on follows below.) It has also upgraded its Active Calorie burn feature to be more accurate by taking heart rate into account during exercise. Oura can now also use your phone's GPS data to show your runs and walks in more detail within its app. All these updates are now available on iOS, but the new fitness metrics and the trend view for active minutes won't be out on Android until June.

Aside from announcing its upgraded features, Oura has revealed new third-party partnerships. Users can now link their smart ring with CorePower Yoga to track their yoga activities, as well as with Sculpt Society, Technogym and Open, which uses a person's biometrics to create personalized recovery rituals for them.

Update, May 21, 2025, 10:50AM ET: This story has been updated to clarify that the updates are available for Gen3 and Ring 4 users with Oura Membership.

This article originally appeared on Engadget at https://www.engadget.com/wearables/ouras-smart-ring-gets-better-at-tracking-your-activities-130012859.html?src=rss
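Oura hasn't said how its step model works, so to be clear, this is not its algorithm: purely as an illustration of the naive accelerometer baseline that a machine-learning classifier would improve on, here's a threshold-based peak counter. The 50Hz sample rate and all thresholds are hypothetical:

```python
import numpy as np

def count_steps(accel_mag: np.ndarray, fs: float = 50.0,
                threshold: float = 1.2, min_interval_s: float = 0.3) -> int:
    """Count steps as local peaks in accelerometer magnitude (in g) that
    exceed a threshold, enforcing a refractory gap between steps."""
    min_gap = int(min_interval_s * fs)  # min samples between successive steps
    steps, last_step = 0, -min_gap
    for i in range(1, len(accel_mag) - 1):
        is_peak = accel_mag[i - 1] <= accel_mag[i] >= accel_mag[i + 1]
        if is_peak and accel_mag[i] > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# Example: two seconds of flat fake data with two obvious "steps"
signal = np.ones(100)
signal[[20, 60]] = 1.5
print(count_steps(signal))  # -> 2
```

A heuristic like this miscounts arm swings, bumpy car rides and fidgeting as steps, which is exactly the kind of error a trained classifier can cut down.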
by Sam Rutherford on (#6XEG1)
On paper, the idea of a PC gaming tablet doesn't really make sense. Anything with a screen larger than eight to ten inches is generally too big to hold for longer sessions, and thin chassis don't leave much room for big batteries, ports or discrete graphics. But with the second-gen ROG Flow Z13, ASUS is turning that line of thought on its head with a surprisingly powerful system that can do more than just game - as long as you don't mind paying a premium for some niche engineering.

Design and display: Not exactly stealthy

For better or worse, the Flow Z13 looks like someone tweaked a Surface Pro to accommodate the stereotypical gamer aesthetic. It has cyberpunky graphics littered across its body along with a small window in the back that's complete with RGB lighting. Unlike a lot of tablets, ASUS gave the Z13 a thicker-than-normal body (0.6 inches), which left space for a surprising number of ports. Not only do you get two USB 4 Type-C ports, there's also a regular USB-A jack, full-size HDMI 2.1, 3.5mm audio and even a microSD card reader. This instantly elevates the tablet from something strictly meant for playing games into something that can also pull double duty as a portable video editing station.

ASUS' 13.4-inch 2.5K IPS display leans into that even more thanks to a 180Hz refresh rate, strong brightness (around 500 nits) and Pantone validation. Regardless of what you're doing, colors will be both rich and accurate. Rounding out the package are some punchy speakers, so you don't have to suffer from subpar sound. But there are limitations here, as deep bass is always tough to produce on smaller systems like this.

Finally, there are some pogo pins along the bottom of its display for connecting its folding keyboard. Sadly, this is one of the system's weak points. Because the Z13 is heavier than a typical tablet PC, its keyboard has to carry a hefty load. On a table, it's fine. But if you try to use this thing on your lap (or any uneven surface), I found that the keyboard can flex so much it can result in accidental mouse clicks. It's a shame, because the bounce and travel of the keys generally feels pretty good. Nothing is more of a bummer than playing a game while relaxing on the couch and then having to fight with the tablet to avoid errant clicks.

Performance

Instead of relying on discrete graphics, ASUS opted for AMD's Ryzen AI Max 390 or Max+ 395 APUs, which feature up to 32 cores and a whopping 128GB of unified RAM. However, our review unit came with a more modest, but still ample, 32GB. Unsurprisingly, this makes mincemeat out of basic productivity tasks while having more than enough power to quickly edit videos on the go.

But without a proper graphics card, can it actually game? Yes, and rather well, I might add. In Cyberpunk 2077 at 1080p and Ultra settings, the Z13 hit an impressive 93 fps. And while numbers weren't quite as high in Control at 1080p on Epic presets, 70 fps is still very playable. The one wrinkle is that when I tested Cyberpunk 2077 a second time on Ultra with ray tracing enabled, the Flow's performance was cut in half to just 45 fps. Unless you're playing a brand new AAA title that requires RT support (of which there are a growing number), the Z13 is a shockingly good portable gaming companion for frequent travelers.
You just have to be careful about how you configure its power settings. That's because if you're out in public or a quiet room, high performance (especially turbo) can result in a fair bit of fan noise, which may draw some unwanted attention. Or, in my case, it got much harder to talk to someone sitting next to me on the couch.

Battery life

When it comes to longevity, you'll get one of two outcomes. In normal use and on PCMark's Modern Office productivity battery life test, the Z13 fared quite well, finishing with a time of six hours and 54 minutes. That's not quite a full day's worth of work untethered, but it's good enough for most folks. You'll just want to keep its chunky power adapter nearby. However, if you plan on gaming without plugging this thing into the wall, just be prepared for the Z13 to conk out after two hours at best. When I played League of Legends' Teamfight Tactics, I only made it through two games (about 30 to 40 minutes each) before its battery got dangerously low (around 10 percent). And suffice it to say, TFT isn't a very demanding title.

Wrap-up

(Pictured: the right side of the ROG Flow Z13 features a customizable button that can be programmed to launch an app of your choice.)

The Flow Z13 is a niche device that's more of an all-rounder than it might seem at first glance. This system fills an interesting gap between ASUS' gaming machines and the more creatively focused PCs from its ProArt family. In a lot of ways, slapping an ROG badge on it doesn't really do this thing justice. It's got more than enough performance to breeze through general productivity or video edits, and its built-in microSD card reader makes transferring footage to the tablet a breeze. Its screen is bright and vibrant, while also offering accurate colors and a decently high refresh rate. And even without a discrete GPU, the Z13 didn't have much trouble rendering games with lots of graphical bells and whistles turned on.

However, this tablet's issues boil down to a couple of major sticking points. Its detachable keyboard is simply too flimsy, to the point where if you use it anywhere besides a table or desk, you risk fighting with it just to ensure your mouse clicks register correctly. But the bigger hurdle is price. Starting at $2,100 (or around $2,300 as tested), the Flow Z13 costs the same or more than a comparable ROG Zephyrus G14 with a proper RTX 5070. Not only does it have worse performance, it's less stable too due to its tablet-style design. For people trying to get the most value out of their money, that proposition is a hard sell.

Deep down, I want to like the ROG Flow Z13. And I do, to a certain extent. It's got a funky build and unapologetically aggressive styling. But unless you have a very particular set of requirements, it doesn't fit into most people's lives as neatly as an equivalent laptop would. And that's before you consider how much it costs.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/asus-rog-flow-z13-2025-review-when-a-traditional-gaming-laptop-just-wont-do-133510833.html?src=rss
by Mariella Moon on (#6XEAZ)
Google used news from its I/O developer conference this year to show what NotebookLM can do. The AI-powered research and note-taking tool has been around for years, but the company has infused it with more and more features as its AI tech improved. To demonstrate those features, Google created a notebook filled with news from I/O 2025, including a YouTube video of the keynote (complete with a transcript of the whole event), press releases, blog posts and even product demonstrations. You can visit all of those one by one, since the company uploaded them as sources to the notebook, but you can also use the AI tool to digest all the information for you.

You can ask NotebookLM anything you want about the event in the chat box, so you can quickly find details on whatever it is you want to know. When I asked it what NotebookLM is, for instance, it gave me a response that aligned with what was announced during the event. "According to Google's announcements at I/O," the tool responded, "...NotebookLM becomes an 'expert' by grounding its responses in the provided material and offering creative ways to transform information."

Under the Studio section of its interface, you'll be able to generate audio overviews that can give you a quick or a more comprehensive spoken summary of the information you've uploaded. You can also create a Mind Map, which visually summarizes uploaded sources, showing one main topic branching into several smaller topics and relevant ideas. Mind Maps are meant to structure information in a way that's easier to understand and remember. Google added a reminder to its announcement, however, that "like all AI, NotebookLM can generate inaccuracies," which is something to keep in mind while using the tool.

Google has released an official app for the tool in time for I/O 2025, which you can now download on Android or iOS. To see the company's I/O 2025 notebook, you'll have to be signed in to a Google account.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-talks-up-notebooklm-upgrades-by-making-it-talk-up-google-io-2025-114240186.html?src=rss
by Steve Dent on (#6XEB0)
Before augmented reality was ever a thing, there was Google Glass: a much-hyped experiment that ultimately failed over issues like privacy (and just looking like a dork). At an I/O session yesterday with DeepMind CEO Demis Hassabis, Google co-founder Sergey Brin admitted that he made "mistakes" with Google Glass in several areas.

"I just didn't know anything about consumer electronic supply chains, really, and how hard it would be to build that and have it at a reasonable price point and managing all the manufacturing and so forth," he said during the session.

Brin said that he's still a believer in the form factor, though, adding that Xreal's latest device looks like "normal glasses" without "that thing in front." He noted that rather than going it alone as before, Google now has "great partners" in Samsung (the Project Moohan headset) and Xreal (Project Aura glasses) as part of the Android XR extended reality program.

There was also a "technology gap" when Google Glass came along in 2013 that no longer exists, according to Brin. "Now in the AI world, the things that these glasses can do to help you out without constantly distracting you, that capability is much higher," he said.

Google Glass wasn't a complete flop. It's easy to forget that the product soldiered on for many years after its debut, largely as an enterprise device, and was only fully discontinued in 2023. It also paved a path for future VR and AR wearables like the Oculus Rift, HTC Vive, Meta Quest and Apple Vision Pro. Come to think of it, though, none of those projects have exactly set the world on fire, either.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-co-founder-sergey-brin-admits-to-mistakes-over-google-glass-110659349.html?src=rss
by Amy Skorheim on (#6BC7B)
There are just two models of Apple laptop: the MacBook Air and the MacBook Pro. In a nutshell, those who need a computer for productivity, work and everyday use - in other words, most people - will be happy with a MacBook Air. People who do intense video and audio editing and other high-demand tasks may want to spring for a Pro model. Within the Air and Pro categories, there are a few other choices to make, including screen size, chip type and memory capacity. This guide recaps our reviews and explains the specs to help you pick the best MacBook for you. Table of contents
by Ian Carlos Campbell on (#6XE7A)
Following the announcement that Gemini is coming to cars, Volvo is using I/O 2025 to announce an expanded partnership with Google. The companies' new deal makes Volvo's cars reference hardware for future Android Automotive OS development and means Volvo drivers will be "among the first to benefit" when Gemini fully replaces Google Assistant in cars.

Volvo describes itself as Google's "lead development partner for new features and updates," making the company's cars the first to receive new updates to Android Automotive OS. Google offers Android Auto as its CarPlay-like solution for beaming a software interface from your phone to in-car displays, but Android Automotive OS is more complete, running locally on the vehicle and connecting to car controls like the A/C. You can already experience Android Automotive OS in Volvo's EX90, for example.

Google's current vision for Android in cars is, perhaps unsurprisingly, focused on getting drivers to talk to Gemini. In a car with the assistant, you'll be able to ask Gemini to send a message, pull up directions or answer the more open-ended, natural language questions that Gemini Live is designed to handle. If it works as advertised, it seems better than pecking at a screen, and Volvo notes it could "help reduce your cognitive load so that you can stay focused on driving."

There's no release date for when you can expect Gemini to show up as your driving copilot, but at the very least this new partnership means it'll be in Volvos first.

This article originally appeared on Engadget at https://www.engadget.com/transportation/volvo-expands-its-google-partnership-to-bring-new-features-like-gemini-to-cars-sooner-070020853.html?src=rss
by Andre Revilla on (#6XE3C)
AMD has unveiled its Radeon RX 9060 XT GPU at Computex 2025. The midrange GPU is a clear competitor to Nvidia's RTX 5060 Ti and goes toe-to-toe with it on almost every spec. Built on AMD's 4-nanometer RDNA 4 silicon, the 9060 XT will pack 32 compute units, along with 64 dedicated AI accelerators and 32 ray-tracing cores.

Notably, the RX 9060 XT will ship in 8GB and 16GB GDDR6 versions, whereas Nvidia's RTX 5060 Ti uses faster 28 Gb/s GDDR7, delivering roughly 40 percent more bandwidth (448 GB/s vs. approximately 322 GB/s) on the same 128-bit bus; the quick calculation below shows where those figures come from. We'll have to wait for some side-by-side performance comparisons before drawing any strong conclusions from those specs.

AMD has listed the 9060 XT's boost clock at speeds up to 3.13 GHz. The GPU boasts 821 TOPS for AI workloads and will draw a modest 150 to 182 watts from the board. The card will connect via PCIe 5.0 x16 and supports the now-standard DisplayPort 2.1a and HDMI 2.1b. Based on these initial specs, the 9060 XT should be a solid entry for games running at 1080p and a decent option for those at 1440p. Those wishing to play at 4K should still opt for the Radeon RX 9070 or 9070 XT.

Pricing and exact release timelines have not yet been announced.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/amd-unveils-radeon-rx-9060-xt-at-computex-2025-030021776.html?src=rss
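Peak memory bandwidth is just bus width times per-pin data rate, so those figures are easy to sanity-check. Note that the ~20.1 Gb/s GDDR6 rate here is back-calculated from the ~322 GB/s figure, an assumption rather than a confirmed AMD spec:

```python
# Peak bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gb/s)
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(128, 28.0))   # RTX 5060 Ti, GDDR7: 448.0 GB/s
print(peak_bandwidth_gbs(128, 20.1))   # RX 9060 XT, GDDR6 (assumed rate): ~321.6 GB/s
print(448 / 321.6 - 1)                 # ~0.39, i.e. roughly 40 percent more
```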
by Will Shanklin on (#6XE3D)
Not many people need a 96-core processor. But for the creative professionals, engineers and AI developers who do, AMD has a new batch of chips on display at Computex 2025. The company announced its new Ryzen Threadripper 9000 series on Tuesday, with bonkers specs to power pro-level workstations and ultra-high-end prosumer desktops.

At the top of the line is the AMD Threadripper Pro 9995WX. This chip has a staggering 96 cores and 192 threads, matching the highest-end model from 2023's Threadripper Pro 7000 line. But the new 9000 series tops out with a higher maximum boost speed of 5.4GHz, up from 5.1GHz in the flagship 7000 Pro chip.

AMD's new batch includes six processors in the Threadripper Pro WX series, designed for pro-level workstations. (In addition to the 96-core 9995WX, options include 12-, 16-, 24-, 32- and 64-core models.) Moving past the Pro series, the standard Threadripper 9000 line for high-end desktops maxes out with the 64-core, 128-thread 9980X.

AMD hasn't yet announced pricing or specific retail models carrying the chips, but the 7000 Pro series offers a hint: the top-shelf model from that line costs a cool $10,000. (Yep, that's for the processor alone.) So, unless your work involves extremely demanding AI development, 3D modeling or ultra-high-res video editing, you can slowly step away and make your way back to the consumer aisle.

This article originally appeared on Engadget at https://www.engadget.com/computing/amds-ryzen-threadripper-9000-chips-have-up-to-96-cores-just-like-the-last-bunch-030003537.html?src=rss
by Ian Carlos Campbell on (#6XE0J)
Fortnite is back in the US App Store. Epic CEO Tim Sweeney announced that he intended to relaunch the game in late April, following a court order that demanded Apple stop collecting a 27 percent fee on app transactions that happen outside of its in-app purchase system. Apple finally amending its rules to remove that additional commission is why Epic moved forward with the relaunch.

The origins of this conflict can be traced all the way back to 2020, when Epic added its own method for collecting payments for in-game items in Fortnite and encouraged players to circumvent Apple's system. Fortnite was removed from the App Store (and the Google Play Store, for that matter), Epic sued and the rest is history.

Epic didn't win its entire case against Apple, but it did secure a permanent injunction allowing developers to include in-app text that makes users aware of payment options other than the App Store. According to the latest court order, Apple allowed that text but was still demanding developers pay it a fee for those non-App Store transactions. That prompted the judge overseeing the companies' case to demand Apple stop and remove even more obstacles from the payment process.
by Karissa Bell on (#6XDYE)
One of the biggest reveals of Google I/O was that the company is officially back in the mixed reality game with its own prototype XR smart glasses. It's been years since we've seen anything substantial from the search giant on the AR/VR/XR front, but with a swath of hardware partners to go with its XR platform, it seems that's finally changing.

Following the keynote, Google gave me a very short demo of the prototype device we saw onstage. I only got a few minutes with the device, so my impressions are unfortunately very limited, but I was immediately impressed with how light the glasses were compared with Meta's Orion prototype and Snap's augmented reality Spectacles. While both of those are quite chunky, Google's prototype device was lightweight and felt much more like a normal pair of glasses. The frames were a bit thicker than what I typically wear, but not by a whole lot.

At the same time, there are some notable differences between Google's XR glasses and what we've seen from Meta and Snap. Google's device only has a display on one side - the right lens, you can see it in the image at the top of this article - so the visuals are more "glanceable" than fully immersive. I noted during Google's demo onstage at I/O that the field of view looked narrow, and I can confirm that it feels much more limited than even Snap's 46-degree field of view. (Google declined to share specifics on how wide the field of view is on its prototype.)

Instead, the display felt a bit similar to the front display of a foldable phone. You can use it to get a quick look at the time and notifications and small snippets of info from your apps, like what music you're listening to.

Gemini is meant to play a major role in the Android XR ecosystem, and Google walked me through a few demos of the AI assistant working on the smart glasses. I could look at a display of books or some art on the wall and ask Gemini questions about what I was looking at. It felt very similar to the multimodal capabilities we've seen with Project Astra and elsewhere.

There were some bugs, though, even in the carefully orchestrated demo. In one instance, Gemini started to tell me about what I was looking at before I had even finished my question, which was followed by an awkward moment where we both paused and interrupted each other.

One of the more interesting use cases Google was showing was Google Maps in the glasses. You can get a heads-up view of your next turn, much like Google Maps' augmented reality walking directions, and look down to see a little section of map on the floor. However, when I asked Gemini how long it would take to drive to San Francisco from my location, it wasn't able to provide an answer. (It actually said something like "tool output," and my demo ended very quickly after.)

I also really liked how Google took advantage of the glasses' onboard camera. When I snapped a photo, a preview of the image immediately popped up on the display so I could see how it turned out. I really appreciated this because framing photos from a camera on smart glasses is inherently unintuitive, since the final image can vary so much depending on where the lens is placed. I've often wished for a version of this when taking photos with my Ray-Ban Meta Smart Glasses, so it was cool to see it actually in action.
I honestly still have a lot of questions about Google's vision for XR and what eventual Gemini-powered smart glasses will be capable of. As with so many other mixed reality demos I've seen, it's obviously still very early days. Google was careful to emphasize that this is prototype hardware meant to show off what Android XR is capable of, not a device it's planning on selling anytime soon. So any smart glasses we get from Google or its hardware partners could look very different.

What my few minutes with Android XR did show, though, was how Google is thinking about bringing AI and mixed reality together. It's not so different from Meta, which sees smart glasses as key to long-term adoption of its AI assistant too. But now that Gemini is coming to just about every Google product that exists, the company has a very solid foundation to actually accomplish this.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-xr-glasses-hands-on-lightweight-but-with-a-limited-field-of-view-213940554.html?src=rss
by Anna Washenko on (#6XDYF)
The Solar Energy Industries Association has released an assessment of how the budget reconciliation bill currently under review in Congress would hurt the economy. The legislation cuts incentives around solar power investment and adoption, such as the Section 25D residential tax credit.

The group's analysis found that the bill, as it stands, would lead to the loss of nearly 300,000 current and future jobs in the US. It also said removal of incentives could mean a loss of $220 billion in investment in the sector by 2030. And it pointed to a future energy shortage, claiming that solar was on course to be responsible for about 73 percent of the 206.5 GW of new energy capacity needed in the country by 2030.

"Passing this bill would create a catastrophic energy shortfall, cede AI and tech leadership to China, and damage some of the most vital sectors of the U.S. economy," SEIA President and CEO Abigail Ross Hopper said.

It's the type of reaction we expect to see when an industry is under threat from federal action. It's also the type of researched data that doesn't seem to have much influence on the current administration, particularly when it comes to the environment and sustainability.

This article originally appeared on Engadget at https://www.engadget.com/science/solar-trade-association-warns-of-devastating-energy-shortages-if-incentives-are-cut-214607526.html?src=rss
by Anna Washenko on (#6XDVH)
Google has dug back into its past and introduced its latest take on smart glasses during I/O 2025. Glasses with Android XR brings Gemini AI to smart glasses thanks to an expanded partnership between Google and Samsung. These smart glasses can sync with a smartphone to access apps, and they're equipped with speakers and an optional in-lens display for privately viewing information.

And for those who remember the less-than-stylish old Google Glass frames, this iteration seems more focused on real-world wearability and style. Google is also working with Gentle Monster and Warby Parker as inaugural partners for providing the frames. In an indicator of how seriously Google is taking this project, the tech giant is committing up to $150 million as part of the Warby Parker deal: half for product development and half for potential equity investment into Warby Parker.

The highlight of the I/O presentation was an attempt at live translation with the glasses. Shahram Izadi and Nishtha Bhatia spoke Farsi and Hindi to each other as the XR frames provided real-time translation into English. The demo fell victim to the curse of AI misbehaving during a live show, but there was a brief moment where each of their glasses did successfully work as hoped.

In addition to that demo, Bhatia also showcased how the Gemini assistant could work with the XR glasses, asking it questions about images she was seeing backstage at the theater and calling up information about the cafe where she got coffee before the show.

Update, May 20, 2025, 5:14PM ET: Added financial details about the Warby Parker partnership.

This article originally appeared on Engadget at https://www.engadget.com/wearables/google-demos-android-xr-glasses-at-io-live-translation-191510280.html?src=rss
by Ian Carlos Campbell on (#6XDYG)
The French government has forbidden Telegram CEO Pavel Durov from leaving the country without official authorization, according to a report from Politico. Durov was arrested in France in August 2024 and later indicted for being complicit in illegal activity that occurs on Telegram, like money laundering and the distribution of CSAM (child sexual abuse material).

Durov was attempting to travel to the US for "negotiations with investment funds," Politico writes, something that French officials decided "did not appear imperative or justified." In March, Durov received permission to travel to the United Arab Emirates, where he maintains citizenship.

Following Durov's arrest, Telegram shared that it abided by EU laws, including the Digital Services Act, and that "its moderation is within industry standards and constantly improving." As evidence of that constant improvement, Telegram decided in September 2024 that it would provide user IP addresses and phone numbers in response to legal requests, something it had originally made a point of avoiding. The messaging platform later partnered with the Internet Watch Foundation in December 2024 to use the organization's tools to block links to CSAM on Telegram. Both moves could be seen as attempts to appease authorities who might want the messaging platform to answer for the criminal activity it's seemingly enabled.

This article originally appeared on Engadget at https://www.engadget.com/apps/telegram-ceo-pavel-durov-is-banned-from-leaving-france-without-permission-following-his-arrest-210401130.html?src=rss
by Will Shanklin on (#6XDYH)
In the latest episode of How to Dismantle Public Services in 12 Easy Steps, a Trump executive order targeting libraries is having real-world consequences. The AP reported over the weekend that libraries across the country are cutting programs that offer ebooks, audiobooks and other loans. These initiatives exploded in popularity following the pandemic, with over 660 million people globally borrowing them in 2023 - a 19 percent annual increase.

The cuts and slashed grants followed a Trump executive order issued on March 14 targeting the Institute of Museum and Library Services (IMLS). His appointee to helm the agency, Keith E. Sonderling, quickly signaled that he was there to do the president's bidding. He placed the IMLS's entire staff on administrative leave, sent termination notices to most of them, canceled grants and contracts and fired everyone on the National Museum and Library Services Board.

Federal judges have temporarily blocked the administration from further gutting the IMLS. But while lawsuits from 21 states and the American Library Association make their way through the courts, the agency's federal funding remains frozen. And libraries are scrambling to adjust.

If you've ever used your library to borrow an ebook or audiobook through an app like Libby or Hoopla, there's a good chance federal funding made that possible. Libraries purchase digital leases for ebooks and audiobooks from publishers, enabling them to lend titles to patrons. The leases typically cost much more than physical copies and must be renewed after a set period or number of checkouts.

With library digital borrowing surging, those federal funds went a long way toward keeping the programs afloat. Mississippi, for one, has indefinitely suspended its Hoopla-based lending program.

The IMLS was created in 1996 by a Republican-controlled US Congress. The agency has an annual budget of under $300 million, with nearly half of that amount allocated to state libraries, which, in turn, help fund local libraries' digital lending programs. "The small library systems are not able to pay for the ebooks themselves," Rebecca Wendt, California's state library director, told the AP.

This article originally appeared on Engadget at https://www.engadget.com/mobile/us-libraries-cut-ebook-and-audiobook-lending-programs-following-trump-executive-order-205113868.html?src=rss
|
![]() |
by Anna Washenko on (#6XDYJ)
The latest video games to get the TV show treatment are a pair of hugely popular mobile titles. Developer Supercell is partnering with Netflix on an animated series based on the world of its games Clash of Clans and Clash Royale. Fletcher Moules, who directed the original Clash of Clans animated videos on YouTube, will be the showrunner for the Netflix project, and Ron Weiner, who has worked on Silicon Valley, 30 Rock, Futurama and Arrested Development, will be the head writer. Clash of Clans debuted in 2012, and the casual strategy game got a deck-battler sequel in Clash Royale, which launched in 2016. According to the show announcement, the pair of games have more than 4 billion downloads and more than 180 billion gameplay hours logged by players. The Netflix show will center on the Barbarian character from this game universe as he tries to "rally a band of misfits to defend their village and navigate the comically absurd politics of war." The series is in pre-production, and no additional casting or release info has been shared at this stage. Netflix has hosted several animated shows based on video games, from Arcane to Devil May Cry. This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/an-animated-clash-of-clans-series-is-coming-to-netflix-204104822.html?src=rss
|
![]() |
by Anna Washenko on (#6XDVF)
Whether you're traveling for a vacation or just relaxing in the sunshine, a tablet is one of the easiest ways for you and your family to stay entertained while out and about during the summer. If you're looking for a new tablet, Amazon is selling the most recent Apple iPad (A16) for $50 off. It's an 11-inch model powered by the A16 chip. You can buy the 128GB tablet in any of the four available colors - silver, blue, pink or yellow - for $299. If you need more storage, you can opt for the 256GB model for $399 or the 512GB version for $595. All of these discounts are for the Wi-Fi-only models and do not include any time under the AppleCare protection plan. Apple has a bunch of different iPads for sale these days, and the A16 is our favorite budget option in the lineup. That's because although the A16 chip is notably less powerful than the M3 or M4 you'll find in higher-end tablets, this model still performs well on the basic tasks you'd use an iPad for. This iPad has a Liquid Retina display with a resolution of 2360 x 1640. Again, it's not flashy, but it's plenty serviceable. One additional caveat for the A16 is that it can't run Apple Intelligence, so this isn't the iPad for you if you're looking to experiment with lots of AI tools. But for about $300, it's a great starter option if you want an easy way to play games, watch shows or read on a larger screen. Follow @EngadgetDeals on X for the latest tech deals and buying advice. This article originally appeared on Engadget at https://www.engadget.com/deals/apples-latest-ipad-is-on-sale-for-50-off-ahead-of-memorial-day-195749473.html?src=rss
|
![]() |
by Ian Carlos Campbell on (#6XDVG)
Google originally launched SynthID, its digital watermark for AI-generated content, in 2023 as a way to detect whether an image was created using the company's Imagen model. Now, at Google I/O 2025, the company is introducing a public-facing tool called SynthID Detector that claims to detect those watermarks in just about anything you upload. SynthID Detector will be available as a web portal where you can upload images, video, audio and text to be scanned. Once a file is uploaded, Google claims the portal can tell you whether it contains AI-generated material and even "highlight specific portions of the content most likely to be watermarked." For audio, the tool is supposed to be able to identify the specific portion of a track that contains the watermark, too. SynthID was designed to mark content from Google's models, but Google hopes other companies will adopt the watermark for their own AI output. An open-source version of SynthID is already available for text watermarking, and as part of the rollout of SynthID Detector, Google is partnering with NVIDIA to mark media its NVIDIA Cosmos model generates. SynthID Detector won't be the only tool that can spot Google's watermark, either. The company says GetReal Security will also be able to verify if media contains SynthID. Considering the sheer number of ways Google hopes people will use AI to create images, video, text and audio, from the Audio Overviews in NotebookLM to short films made with its new Flow tool, it makes sense that it would offer a way to know if any of those things are real. Until one company's models produce the vast majority of content or a digital watermark reaches widespread adoption, though, a tool like SynthID Detector can only be so useful. Journalists, researchers and developers can join a waitlist to try SynthID Detector through Google's online form. This article originally appeared on Engadget at https://www.engadget.com/ai/synthid-detector-can-check-media-to-see-if-it-was-generated-with-googles-ai-tools-194002070.html?src=rss
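The open-source text watermarking mentioned above is exposed through the Hugging Face transformers library, which can apply a keyed SynthID watermark to any supported model's output at generation time. Below is a minimal sketch of how that looks; the checkpoint and key values are arbitrary assumptions for illustration, not Google's production setup.

```python
# Minimal sketch of open-source SynthID text watermarking via Hugging Face
# transformers. The model checkpoint and watermark keys below are arbitrary
# example values, not Google's production configuration.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # assumed checkpoint for the example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is keyed: only a detector holding the same keys can spot it.
watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],
    ngram_len=5,
)

inputs = tokenizer("Write a short note about watermarking.", return_tensors="pt")
out = model.generate(
    **inputs,
    watermarking_config=watermark_config,  # biases sampling to embed the mark
    do_sample=True,
    max_new_tokens=100,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The watermark subtly biases token sampling rather than altering visible text, which is why detection requires the matching keys instead of simple pattern matching.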
|
![]() |
by Igor Bonifacic on (#6XDR6)
Google has begun rolling out AI Mode to every Search user in the US. The company announced the expansion during its I/O 2025 conference. Google first began previewing AI Mode with testers in its Labs program at the start of March. Since then, it has been gradually rolling out the feature to more people, including, in recent weeks, regular Search users. At its keynote today, Google shared a number of updates coming to AI Mode as well, including new shopping tools, the ability to compare ticket prices for you and the ability to create custom charts and graphs for queries on finance and sports. For the uninitiated, AI Mode is a chatbot built directly into Google Search. It lives in a separate tab, and was designed by the company to tackle more complicated queries than people have historically used its search engine to answer. For instance, you can use AI Mode to generate a comparison between different fitness trackers. Before today, the chatbot was powered by Gemini 2.0. Now it's running a custom version of Gemini 2.5. What's more, Google plans to bring many of AI Mode's capabilities to other parts of the Search experience. "AI Mode is where we'll first bring Gemini's frontier capabilities, and it's also a glimpse of what's to come," the company wrote in a blog post published during the event. "As we get feedback, we'll graduate many features and capabilities from AI Mode right into the core search experience in AI Overviews." Looking to the future, Google plans to bring Deep Search, an offshoot of its Deep Research mode, to AI Mode. Google was among the first companies to debut the tool in December. Since then, most AI companies, including OpenAI, have gone on to offer their take on Deep Research, which you can use to prompt Gemini and other chatbots to take extra time to create a comprehensive report on a subject. With today's announcement, Google is making the tool available in a place where more of its users are likely to encounter it. Another new feature coming to AI Mode builds on the work Google did with Project Mariner, the web-surfing AI agent the company began previewing with "trusted testers" at the end of last year. This addition gives AI Mode the ability to complete tasks for you on the web. For example, you can ask it to find two affordable tickets for the next MLB game in your city. AI Mode will compare "hundreds of potential" tickets for you and return with a few of the best options. From there, you can complete a purchase without having done the comparison work yourself. "This will start with event tickets, restaurant reservations and local appointments," says Google. "And we'll be working with companies like Ticketmaster, StubHub, Resy and Vagaro to create a seamless and helpful experience." AI Mode will also soon include the ability to generate custom charts and graphics tailored to your specific queries. At the same time, AI Mode will become more personalized in the near future, with Google introducing an optional feature allowing the tool to draw on your past searches. The company will also give people the option to connect their other Google apps to AI Mode, starting with Gmail, for even more granular recommendations. As mentioned above, Google is adding a suite of shopping features to AI Mode.
Engadget has a separate post dedicated to the shopping features Google announced today, but the short of it is that AI Mode will be able to narrow down products for you and complete purchases on your behalf - with your permission, of course. All of the new AI Mode features Google previewed today will be available to Labs users first before they roll out more broadly. Update, May 20 2025, 2:45PM ET: This story has been updated to preview some of the updates coming to AI Mode in the intro. This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-rolling-out-ai-mode-to-everyone-in-the-us-174917628.html?src=rss
|
![]() |
by Igor Bonifacic on (#6XDR2)
Google has just announced a new $250 per month AI Ultra plan for people who want unlimited access to its most advanced machine learning features. Yes, you read that right. The new subscription is $50 more expensive than the already pricey ChatGPT Pro and Claude Max plans from OpenAI and Anthropic. For $250, you're getting early access to new models like Veo 3, and unlimited usage of features like Flow (the new AI filmmaking app the company announced today) and the compute-intensive Deep Research. In the coming weeks, Google will also roll out Deep Think, the new enhanced reasoning mode that is part of its Gemini 2.5 Pro model, to AI Ultra users. Subscribers can also look forward to access to Project Mariner, Google's web-surfing agent, and Gemini within Chrome, plus the chatbot in all the usual places you'd find it, like Gmail and Docs. Google is partly justifying the high cost of AI Ultra by touting the inclusion of YouTube Premium and 30TB of cloud storage across Google Photos, Drive and Gmail. On its own, a YouTube Premium subscription would cost you $14 per month, and Google doesn't offer 30TB of cloud storage separately. The closest comparison would be Google One, whose Premium tier comes with 2TB of storage for $10 per month. As another incentive to sign up for AI Ultra, Google is giving new subscribers 50 percent off their first three months. As of today, Google is also revamping its existing AI Premium plan. The subscription, which will be known as Google AI Pro moving forward, now includes the Flow app and early access to Gemini in Chrome. Google says the new benefits will come to US subscribers first, with availability in other countries to follow. This article originally appeared on Engadget at https://www.engadget.com/ai/google-wants-250-per-month-in-return-for-its-new-ai-ultra-plan-180248513.html?src=rss
|
![]() |
by Mat Smith on (#6XDR4)
At I/O today, Google pitched creators on a new app for "AI filmmaking": Flow. Combining Google's recent announcements and developments across AI-powered services, including Veo (video), Imagen (images) and Gemini, the company bills Flow as a storytelling aid "built with creatives." If it sounds familiar, that's because it's the advanced version of VideoFX, previously a Google Labs experiment. The company says Flow is aimed at helping storytellers explore ideas and create clips and scenes, almost like storyboards and sketches in motion. Google's generally impressive Veo 2 model seems to form the core of Flow, able to extend footage and create video that "excels at physics and realism," although I'm not sure many would agree with that. You can use Gemini's natural language skills to construct and tweak the video output, and creatives can pull in their own assets or create things with Imagen through simple text input. What's notable is the ability to integrate your creations into different clips and scenes with consistency. While the early demo footage we saw was impressive, it still had a not-so-faint AI-slop aroma. There are further filmmaking tools, too. Flow will also feature direct control over "the movement of your camera," and will even let you choose camera angles. You can also edit and extend shots, adding different transitions between AI-generated videos. Creating video with Veo is often a piecemeal process, but Flow will have its own asset management system to organize assets and even your prompts. These richer controls and editing abilities could make for more compelling creations in time. Let's not forget: It's been less than a year since that very weird Toys "R" Us ad. Google buddied up with several notable filmmakers to collaborate on these still-early steps into AI video creation, including Dave Clark, Henry Daubrez and Junie Lau. It says it offered creatives early access to the tools and folded their insights and feedback into what is now called Flow. Flow is now available to AI Pro and AI Ultra subscribers in the US, and will roll out to other countries soon. Pro users will get the Flow tools outlined so far and 100 generations each month. With the Ultra sub, you'll get unlimited generation and early access to Veo 3, with native audio generation. This article originally appeared on Engadget at https://www.engadget.com/ai/google-filmmaking-tool-flow-ai-generated-video-175212520.html?src=rss
|
![]() |
by Sam Chapman on (#6XDR5)
Google has announced a new Chrome feature for the browser's built-in password manager that it claims will let users instantly change passwords compromised in data breaches. Google Password Manager already alerts you when your credentials have appeared in a data breach, and partially automates the process of changing your password, but - until now - you still had to go through the steps manually for each of your online accounts. The Automated Password Change feature, announced at today's Google I/O keynote presentation, goes a step further. It will apparently let you generate a new password and substitute it for the old one with a single click, without ever seeing a "Create New Password" page. The feature only works on participating websites. Google is currently in talks with developers to expand the range of sites that will support one-click password changes, with plans for a full rollout later in 2025. Automated Password Change was discovered as far back as February by eagle-eyed software diggers, but was limited to the early developer-only builds made public as Chrome Canary. At that time, it was located in the "AI Innovations" settings menu, though it's not yet clear how AI figures in the process. This feature builds on password health functionality that Google has been steadily incorporating into Chrome since it released the Password Checkup extension in 2019, recognizing that compromised credentials are a common vector for cybercrime. People often reuse the same short, memorable password on multiple websites. If hackers steal a credential database from one weakly defended site and dump it on the dark web, other cybercriminals can try the leaked usernames and passwords on more secure sites - like online banks and cash apps - until one fits. The best way to prevent this is to use a password manager to generate and save a different strong password for every account you make, even ones you don't think will handle sensitive information. If you haven't done this, the second-best prevention is to monitor password data breaches and immediately change any password that gets leaked. If Automated Password Change works as advertised, it'll make that crisis response a lot more convenient. This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-chrome-previews-feature-to-instantly-change-compromised-passwords-175051933.html?src=rss
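To make the breach-monitoring piece concrete: a standard way to check a password against leaked credential dumps without revealing the password itself is a k-anonymity range query, the approach popularized by Have I Been Pwned's Pwned Passwords API. The sketch below shows that technique; it is not Google's own Password Checkup protocol, which uses its own privacy-preserving scheme. Only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
# Minimal sketch of breach-checking via k-anonymity (the Have I Been Pwned
# Pwned Passwords range API) - similar in spirit to, but not the same as,
# Google's Password Checkup protocol.
import hashlib
import requests

def password_breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The server only ever sees the 5-character hash prefix; it returns every
    # known-breached hash suffix sharing that prefix, with occurrence counts.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # how many breaches contained this password
    return 0

if __name__ == "__main__":
    # A famously breached password; expect a very large count.
    print(password_breach_count("password123"))
```

Because the match against the full hash happens locally, the service learns almost nothing about the password being checked, which is what makes automated, routine breach scanning palatable from a privacy standpoint.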
|
![]() |
by Karissa Bell on (#6XDR7)
Google's Chrome browser is the latest major product from the company to get its own built-in Gemini features. Today at Google I/O, the company detailed its plans to bring its AI assistant to Chrome.While Gemini can already distill information from websites, having the assistant baked into Chrome allows it to provide insights and answer questions about your open tabs without ever having to move to a different window or application. Instead, Gemini lives in a new menu at the top of your browser window as well as in the taskbar.The company envisions its assistant as being able to help out with tasks that may normally require switching between several open tabs or scrolling around to different parts of a web page. For example, Google showed off how Gemini can give advice about potential modifications for dietary restrictions while looking at a recipe blog. Gemini in the browser could also come in handy while shopping as it can answer specific questions about products or even summarize reviews.To start, Gemini will only be able to answer queries about a single open tab, but the company plans to add multi-tab capabilities in a future update. This would allow the assistant to synthesize info across multiple open tabs and answer even more complex questions. Gemini in Chrome will also have Gemini Live capabilities, for anyone more comfortable conversing with the assistant using their voice. The company also teased a future update that will allow Gemini to actually scroll through web pages on your behalf, like asking it to jump to a specific step in a recipe. (Notably, all this is separate from Google's other web-browsing AI, Project Mariner, which is still a research prototype.)Gemini is starting to roll out to Chrome users on Mac and Windows today, beginning with AI Pro and AI Ultra subscribers in the United States. The company hasn't indicated whether it plans to bring similar features to Chromebooks or Chrome's mobile app.This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-bringing-gemini-to-chrome-so-it-can-answer-questions-about-your-open-tabs-174903787.html?src=rss
|
![]() |
by Cherlynn Low on (#6XDR8)
As part of its announcements for I/O 2025 today, Google shared details on some new features coming to shopping in AI Mode. It's describing the three new tools as part of its new shopping experience in AI Mode, and they cover the discovery, try-on and checkout parts of the process. These will be available "in the coming months" for online shoppers in the US. The first update concerns searching for a specific thing to buy. The examples Google shared were searches for travel bags or a rug that matches the other furniture in a room. By combining Gemini's reasoning capabilities with its shopping graph database of products, Google AI will determine from your query that you'd like lots of pictures to look at and pull up a new image-laden panel. It's somewhat reminiscent of Image search results, except these photos take up the right half or so of the page and are laid out vertically in four columns, according to the screenshots the company shared. Of course, some of the best spots in this grid can be paid for by companies looking for better placement for their products. As you continue to refine your search results with Gemini, the "new righthand panel dynamically updates with relevant products and images," the company said. If you specify that the travel bag you're looking for should withstand a trip to Oregon, for example, the AI can prioritize weatherproof products and show you those images in this panel. The second, and more intriguing, part of the shopping updates in AI Mode is a change coming to the company's virtual try-on tool. Since its launch in 2023, this feature has gotten more sophisticated, letting you pick specific models that most closely match your body type and then virtually reimagine the outfit you've found on them. At Google I/O today, the company shared that it will soon allow users to upload a single picture of themselves so that its new image generation model, designed for fashion, can overlay articles of clothing on their AI-imagined selves. According to Google, the custom image generation model "understands the human body and nuances of clothing - like how different materials fold, stretch and drape on different bodies." It added that the software will "preserve these subtleties when applied to poses in your photos." The company said this is "the first of its kind working at this scale, allowing shoppers to try on billions of items of clothing from our Shopping Graph." The try-on feature with an uploaded photo is rolling out in Search Labs in the US today, and when you're testing it, you'll need to look for the "try it on" icon on compatible product listings. Finally, when you've found what you want, you might not want to purchase it immediately. Many of us know the feeling of having online shopping carts packed and ready for the next upcoming sale (Memorial Day in the US is this weekend, by the way). Google's new "agentic checkout" feature can keep an eye on price drops on your behalf. You'll soon see a "track price" option on product listings similar to those already available on Google Flights, and after selecting it you'll be able to set your desired price, size, color and other options. The tracker will alert you when those parameters are met, and if you're ready to hand over your money, the agentic checkout tool can also simplify that process if you tap "buy for me." According to Google, "behind the scenes, we'll add the item to your cart on the merchant's site and securely complete the checkout on your behalf with Google Pay."
The agentic checkout feature will be available "in the coming months" for product listings in the US.This article originally appeared on Engadget at https://www.engadget.com/ai/googles-ai-mode-lets-you-virtually-try-clothes-on-by-uploading-a-single-photo-174820693.html?src=rss
|
![]() |
by Mariella Moon on (#6XDR9)
As part of this year's announcements at its I/O developer conference, Google has revealed its latest media generation models. Most notable, perhaps, is Veo 3, the first iteration of the model that can generate videos with sound. It can, for instance, create a video of birds accompanied by their singing, or a city street with the sounds of traffic in the background. Google says Veo 3 also excels in real-world physics and in lip syncing. At the moment, the model is only available for AI Ultra subscribers in the US within the Gemini app and for enterprise users on Vertex AI. It's also available in Flow, Google's new AI filmmaking tool. Flow brings Veo, Imagen and Gemini together to create cinematic clips and scenes. Users can describe the final output they want in natural language, and Flow will go to work making it for them. The new tool will only be available to Google AI Pro and Ultra subscribers in the US for now, but Google says it will roll out to more countries soon. While the company has released a brand new video-generating model, it hasn't abandoned Veo 2 just yet. Users will be able to give Veo 2 images of people, scenes, styles and objects to use as reference for their desired output in Flow. They'll have access to camera controls that will allow them to rotate scenes and zoom into specific objects in Flow, as well. Plus, they'll be able to broaden their frames from portrait to landscape if they want to and add or remove objects from their videos. Google has also introduced its latest image-generating model, Imagen 4, at the event. The company said Imagen 4 renders fine details like intricate fabrics and animal fur with "remarkable clarity" and excels at generating both photorealistic and abstract images. It's also significantly better at rendering typography than its predecessors and can create images in various aspect ratios with resolutions of up to 2K. Imagen 4 is now available via the Gemini app, Vertex AI and in Workspace apps, including Docs and Slides. Google said it's also releasing a version of Imagen 4 that's 10 times faster than Imagen 3 "soon." Finally, to help people identify AI-generated content, something that's becoming more and more difficult these days, Google has launched SynthID Detector. It's a portal where users can upload a piece of media they think could be AI-generated, and Google will determine if it contains SynthID, its watermarking and identification tool for AI art. Google has open-sourced its watermarking tool, but not all image generators use it, so the portal still won't be able to identify all AI-generated images. This article originally appeared on Engadget at https://www.engadget.com/ai/googles-veo-3-ai-model-can-generate-videos-with-sound-174541183.html?src=rss
|
![]() |
by Anna Washenko on (#6XDN9)
Apple has sent out the invites for its in-person WWDC 2025 festivities on Monday, June 9, featuring the keynote session at 1PM ET/10AM PT. Attendees will be able to watch the keynote presentation at the company's Cupertino campus, as well as meet with developers and participate in special activities. For everyone who hasn't received an invite to Apple Park, the keynote will stream online. Developers can also participate in the rest of WWDC's programming online for free. We've already got pretty high hopes for the keynote announcements, with a lot of potential news expected about the upcoming redesign for iOS 19. We've heard that the operating system could have features including AI-powered battery management and improved public Wi-Fi sign-ins, and our own Nathan Ingraham has penned an impassioned plea for a normal letter "a" in the Notes app. The full WWDC conference runs from June 9-13. This article originally appeared on Engadget at https://www.engadget.com/big-tech/apples-wwdc-2025-keynote-will-be-june-9-at-1pm-et-150621700.html?src=rss
|
![]() |
by Ian Carlos Campbell on (#6XBDF)
Ready to see Google's next big slate of AI announcements? That's precisely what we expect to be unveiled at Google I/O 2025, the search giant's developer conference that kicks off today at 1PM ET / 10AM PT. Engadget will be covering it in real time right here, via a liveblog and on-the-ground reporting from our very own Karissa Bell. Ahead of I/O, Google already gave us some substantive details on the updated look and feel of its mobile operating system at The Android Show last week. Google included some Gemini news there as well: Its AI platform is coming to Wear OS, Android Auto and Google TV, too. But with that Android news out of the way, Google can use today's keynote to stay laser-focused on sharing its advances on the artificial intelligence front. Expect news about how Google is using AI in Search to be featured prominently, along with some other surprises, like the possible debut of an AI-powered Pinterest alternative. The company made it clear during its Android showcase that Android XR, its mixed reality platform, will also be featured during I/O. That could include the mixed reality headset Google and Samsung are collaborating on or, as teased at the end of The Android Show, smart glasses with Google's Project Astra built in. As usual, there will be a developer-centric keynote following the main presentation (4:30PM ET / 1:30PM PT), and while we'll be paying attention to make sure we don't miss any news there, our liveblog will predominantly focus on the headliner. You can watch Google's keynote in the embedded livestream above or on the company's YouTube channel, and follow our liveblog embedded below starting at 1PM ET today. Note that the company plans to hold breakout sessions through May 21 on a variety of different topics relevant to developers. Update, May 20 2025, 9:45AM ET: This story has been updated to include a liveblog of the event. Update, May 19 2025, 1:01PM ET: This story has been updated to include details on the developer keynote taking place later in the day, as well as tweak wording throughout for accuracy with the new timestamp. This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-io-2025-live-updates-on-gemini-android-xr-android-16-updates-and-more-214622870.html?src=rss
|
![]() |
by Ian Carlos Campbell on (#6XDHT)
Amazon is updating Amazon Music with a new "AI-powered search experience" that should make it easier to discover music based on the albums and artists you're already looking for. The company says the new beta feature "includes results for many of your favorite artists today," which is to say, not everyone, but it'll continue to expand to include more over time. A traditional search uses a search term - an artist's name, a song or an album title - and tries to pull up results that are as close to whatever you entered as possible. You'll still be able to make those kinds of searches in Amazon Music, but now, under a new "Explore" tab in the iOS Amazon Music app, you'll also see new AI-powered recommendations. These include "curated music collections," an easy jumping-off point for creating AI-generated playlists and more. Amazon suggests these results will vary depending on the search you do. Looking up Bad Bunny's "Debi Tirar Mas Fotos" will show the album, but also "influential artists who influenced his sound" and other musicians he's collaborated with, the company says. A search for BLACKPINK, meanwhile, would highlight the K-pop group's early hits before surfacing solo work from members like Lisa or Jennie. It all sounds like a more flexible and expansive version of the X-Ray feature Amazon includes in Prime Video, which provides things like actors' names, trivia and related movies and TV shows with a button press. This new search experience was built using Amazon Bedrock, Amazon's cloud service for hosting AI models. It's one of several ways the company is trying to incorporate more AI features into its products. Earlier this year, Amazon started rolling out Alexa+, a version of the popular voice assistant rebuilt around generative AI, to select Echo devices. AI search in Amazon Music is available today on iOS for a select number of Amazon Music Unlimited subscribers in the US. If you're not included in this beta, you could be included in future tests. This article originally appeared on Engadget at https://www.engadget.com/apps/amazon-music-gets-ai-powered-search-results-in-new-beta-140055428.html?src=rss
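For developers curious what "built using Amazon Bedrock" means in practice, Bedrock exposes hosted foundation models behind a single API via boto3's bedrock-runtime client. Below is a hypothetical sketch of the kind of recommendation query a music search backend might issue; the model ID, prompt and wiring are illustrative assumptions, not Amazon Music's actual pipeline.

```python
# Hypothetical sketch of a Bedrock-backed recommendation query, using the
# model-agnostic Converse API. The model ID and prompt are assumptions for
# illustration only - this is not Amazon Music's actual pipeline.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def related_music(query: str) -> str:
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        messages=[{
            "role": "user",
            "content": [{
                "text": f"List artists and albums related to: {query}. "
                        "Group them by musical influence and collaboration."
            }],
        }],
    )
    # The Converse API returns generated text under output -> message -> content.
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(related_music("Bad Bunny - Debi Tirar Mas Fotos"))
```

A production system would layer retrieval over the catalog and rank the model's suggestions against real metadata, but the basic call pattern - one hosted model behind one API - is what Bedrock provides.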
|
![]() |
by Billy Steele on (#6XDHV)
When a company enters a new product category, it might as well swing for the fences. That's exactly what Marshall is doing with its first soundbar. The Heston 120 is a $1,000 Dolby Atmos and DTS:X living room speaker, equipped with 11 drivers to power that spatial audio. Like the company's headphones and speakers, there's plenty of the iconic guitar amplifier aesthetic to go around. Inside, two subwoofers, two mid-range units, two tweeters and five full-range drivers produce the Heston 120's sound. There are also 11 Class D amplifiers (two 50W and nine 30W) inside, and the soundbar has a total power output of 150 watts. Bluetooth (5.3) and Wi-Fi are also onboard, which means AirPlay 2, Google Cast, Spotify Connect and Tidal Connect are all available. For wired connectivity, there are two HDMI 2.1 ports (one eARC) for your TV and other home theater gear, plus an RCA input that allows you to hook up a turntable or other audio devices. The Heston 120 takes design cues from Marshall's line of guitar amps. This has been the case for the company's headphones, earbuds and speakers, and it will continue with soundbars. To that end, there's a mix of leather and metal, complete with the trademark gold script logo. There are also tactile controls you typically don't see on a soundbar, like the gold knobs and preset buttons akin to those that adorn an amplifier. This soundbar doesn't come with a subwoofer, but Marshall says a standalone option is on the way. What's more, that Heston Sub 200 and a smaller Heston 60 are both due to arrive "at a later date." Lots of companies are bundling at least a sub with their high-end soundbars, so it's disappointing that Marshall didn't do the same. I look forward to getting a review unit to see if the company's promise of "bass rumbling from below like never before" from the soundbar itself holds true. The Heston 120 will be available for purchase from Marshall's website on June 3. This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/the-first-marshall-soundbar-is-the-1000-heston-120-with-dolby-atmos-140041873.html?src=rss
|
![]() |
by Jeff Dunn on (#6XDHW)
We think the iPad Air is the best blend of price, features and performance in Apple's tablet lineup, and the 13-inch version in particular is a fine buy if you want a roomier display for multitasking or streaming video without paying the iPad Pro's extravagant prices. If you've been waiting for a sale on the jumbo-sized slate, good news: The device is $100 off Apple's list price and back down to $699 at Amazon and B&H. That's a deal we've seen for much of the last few weeks, but it still matches the lowest price we've tracked for the most recent model, which was released in March and runs on Apple's M3 chip. This offer applies to the base model with 128GB of storage. If you need more space, the 256GB and 512GB variants are also $100 off at $799 and $999, respectively. The former is another all-time low, while the latter only fell about $25 lower during a brief dip at the start of the month. The one catch is that these discounts only apply to the Space Gray colorway. We gave the newest 13-inch iPad Air a score of 89 in our review. This year's model is a straightforward spec bump, with the only major upgrade being the faster chip. So if you're coming from a prior M2 or M1 model and are still happy with its performance, there's no real need to upgrade. The M2 version in particular is still worth buying if you see it on sale - right now, Target has the 256GB version of that slate down to $699, so feel free to grab that instead if you don't mind buying something slightly less futureproof. Either way, the Air remains a fairly definitive upgrade over the entry-level iPad (A16). It's certainly more expensive, but its laminated display doesn't look as cheap, holds up better against glare and can pump out slightly bolder colors. Its speakers sound less compressed, and it works with superior Apple accessories like the Pencil Pro stylus and latest Magic Keyboard. The M3 chip is noticeably faster for more involved tasks like exporting high-res videos or playing new games as well. More importantly, it sets the Air up better going forward, as features like Apple Intelligence and Stage Manager aren't available on the lower-cost model at all. Plus, the base model is only available with an 11-inch display; if you want that bigger screen, this is the most affordable way to get it. Check out our coverage of the best Apple deals for more discounts, and follow @EngadgetDeals on X for the latest tech deals and buying advice. This article originally appeared on Engadget at https://www.engadget.com/deals/apples-13-inch-ipad-air-m3-is-100-off-for-memorial-day-133034317.html?src=rss
|
![]() |
by Tim Stevens on (#6XDF4)
The pool of electric vehicles currently available on the North American market keeps getting wider and deeper. But, since the beginning, there's been something of a hole right in the middle. A big hole, as it turns out. The three-row SUV, one of the most popular segments in American motoring, has been woefully underserved. The only real options come at the high end, with things like the Rivian R1S or the Mercedes-Benz EQS SUV. Kia added a new and more attainable option last year with the EV9, and now it's time for the other side of the corporate family to enter the fray with its own option, the Hyundai Ioniq 9. The latest American-made electric SUV from the Korean giant bears sharp styling and impressive performance. After a day piloting one through the countryside around the Savannah, Georgia factory where it'll be built, it's hard to argue against its $58,955 starting price. Economy-Sized: There's no denying that Hyundai's new Ioniq is huge. At 199 inches long, it's three inches bigger than the Hyundai Palisade, the company's now second-biggest three-row SUV. However, Hyundai's designers have done a stellar job of giving its new biggest baby a very compelling shape. Many SUVs with that much space resort to acres of flat sheet metal just to cover the distance between the bumpers, but the Ioniq 9 has a subtle, sophisticated and, equally importantly, aerodynamic shape. I confess I'm not a massive fan of the nose and its bland curves, but I absolutely love the subtle taper at the rear. That not only helps with the coefficient of drag (which measures 0.269), but also helps make this thing look much smaller than it is. The Ioniq 9 has a stance more like a Volvo station wagon than a gigantic family hauler, but make no mistake, it's the latter. That's immediately evident as soon as you climb into the third row. It's a bit of a slow process thanks to the power second-row seats, but once your path is clear, access to the rear is easy, and I was shocked to find generous headroom back there. There's even a tolerable amount of legroom for an adult. Even better are the 100-watt USB-C outlets that are present even in the way-back. All three rows have access to high power outputs that'll keep just about anything short of a portable gaming rig juiced on the go. Second-row seating is far more comfortable, especially if you opt for the Ioniq 9 Limited or Calligraphy trims with a six-seat configuration. These give you a set of heated and ventilated captain's chairs. (A seven-seat, bench configuration is also available.) The seats up front are quite similar, also heated and ventilated, with the driver's seat adding massage. Extending leg rests also make the Ioniq 9 an ideal space for a nap during a charging stop. It'll need to be a quick one, though. Power and Charging: The Ioniq 9 is built on Hyundai's E-GMP platform, which also underpins the Ioniq 5 and Ioniq 6, among others. That includes an 800-volt architecture and a maximum charging speed of 350 kW. Find a charger with enough juice and it'll go from 10 to 80 percent in 24 minutes. Yes, it has a Tesla-style NACS plug, which means you can use Superchargers without an adapter. Sadly, though, Tesla's current suite of chargers isn't fast enough to support that charging rate. That means you'll have to use a CCS adapter, which is included. All those electrons get shoved into a 110.3-kWh battery pack, with roughly 104 kWh usable.
Maximum range depends on which trim you choose, from 335 miles for a base, rear-drive model, dropping to 311 miles for a top-shelf Performance model with dual-motor AWD. Naturally, that upgrade gets you more power, either 303 or 422 horsepower, depending on which dual-motor variant you choose. Still, even the single motor has 215 hp. Sadly, I was not able to sample the single-motor flavor, but the Performance Calligraphy Design I drove was plenty snappy. Even in Eco, the most relaxed of the available on-road drive modes, the Ioniq 9 had plenty of response to make impromptu passes or simply to satisfy my occasional need for G-forces. There's also a selection of off-road drive modes for various types of terrain, but that's clearly not a focus for this machine. While it'll do just fine on unpaved surfaces and some light off-roading, given the sheer dimensions of this thing, I wouldn't point it down any particularly tricky trails. Behind the Wheel: Much of my time driving the Ioniq 9 was spent sitting in traffic, cruising on metropolitan streets or casually motoring between rest stops over broken rural roads. I'd say that's close to the average duty cycle for a vehicle like this, and the Ioniq 9 was a treat over most of it. At slower speeds, the suspension proved a bit rough, possibly due to the 21-inch wheels on the Calligraphy trim. But, over 30 mph or so, everything smoothed out nicely. This three-row SUV is calm and quiet at speed, helped by sound-isolating laminated glass in the first and second rows, plus active sound canceling akin to that of your headphones, but on a significantly larger scale. The only place where you hear any road noise is back in the third row. There's noticeably more wind noise and a bit more whine from the rear motor, too, but I'd gladly take that over the drone of an average SUV's exhaust out the back. Behind those rear seats, there's 21.9 cubic feet of cargo space, or a whopping 86.9 if you fold both rows down. Yes, there is a frunk, but it's tiny and fully occupied by the charging cable, CCS adapter and flat tire kit. All the Tech: Those 100-watt USB-C ports are definitely the tech highlight on the inside of the machine. Still, you'll also find Hyundai's standard infotainment experience here, including both wireless Android Auto and Apple CarPlay. They're experienced through a pair of 12.3-inch displays joined at the bezel to form one sweeping panel, stretching from behind the wheel out to the middle of the dashboard. On the Ioniq 5 and Ioniq 6, this looks impressive. On the Ioniq 9, it honestly looks a bit Lilliputian given the giant scale of everything else here. The Ioniq 9 features some lovely styling touches, subtle RGB LED mood lighting and generally nice-feeling surfaces - so long as your fingers don't wander too far down. Harsh plastics covering the lower portions of the interior feel less than premium for a machine that otherwise looks this posh. But it at least carries a fair price. You can get in an Ioniq 9 for as little as $58,955, if you don't mind the single-motor version. You can also subtract the $7,500 federal incentive for as long as that lasts. There are six trims to choose from, with the top-shelf Performance Calligraphy Design AWD model you see pictured here costing $79,540 after a $1,600 destination charge. Yes, that's a lot, entering into Rivian R1S territory.
But, where the Rivian is quicker and certainly more capable off-road, the Ioniq 9 is roomier, more practical and honestly more comfortable for the daily grind.You can also save a few thousand by going with a Kia EV9, but I feel like the extra presence and features of the Hyundai will woo many. Either way, you're getting a winner, which is yet more proof that our current slate of EV options is the best yet, and only getting better.This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/hyundais-ioniq-9-is-a-big-electric-suv-with-big-style-130050754.html?src=rss
|
by Lawrence Bonk on (#6XDF5)
The iconic instrument and amp maker Fender is diving deep into the digital domain. The company just announced Fender Studio, an all-in-one music-creation software platform. It's basically a digital audio workstation (DAW), but one that's intended for newbies. Think GarageBand and not Pro Tools. Just like GarageBand, Fender Studio is free. The software looks perfect for going straight into an audio interface without any complications. Players can select from a wide variety of digital amp recreations. These include some real icons, like the '65 Twin Reverb guitar amp, the Rumble 800 bass amp, the '59 Bassman, the Super-Sonic, the SWR Redhead and several more. More amp models are likely on the way. Along with the amp models, the software comes with a bunch of effects inspired by iconic Fender pedals. There's a vintage tremolo, a stereo tape delay, a small hall reverb, a triangle flanger, a compressor and, of course, overdrive and distortion. There's an integrated tuner and plenty of effects presets for those who don't want to fiddle with virtual knobs. The software includes several dedicated effects for vocalists. There's a de-tuner, a vocal transformer and a vocoder, in addition to standard stuff like compression, EQ, reverb and delay. There's also a cool feature for those who just want to practice. Fender Studio offers "remixable jam tracks" that let folks play along with songs in a wide variety of genres. These let players mute or remove an instrument and play that part themselves, and users can slow everything down or speed things up. Fender promises that new songs will be added to the platform at regular intervals. As for the nuts and bolts of recording, the arranger can currently handle up to 16 tracks. Despite the track limitation, the software offers some real pro-grade features. There are various ruler formats, a global transpose, input monitoring, looping abilities, time stretching and even a simple pitch-shifting tool. Tracks allow for fades, FX sends and more. The mobile version of the app includes a pinch-to-zoom feature, which is always handy with recording software. All of those squiggly lines can get tough on the old eyeballs. Fender Studio is available on just about everything. There's a version for Mac, Windows, iOS, Android and Linux. It should even run well on Chromebooks. Again, this software is free, though some features do require signing up for a Fender account. This is certainly Fender's biggest push into digital audio, but not its first. The company has long maintained the Mustang Micro line of personal guitar amplifiers. These plug straight into a guitar or bass and offer models of various amps and effects. The company also released its own audio interface, the budget-friendly Fender Link I/O, and a digital workstation that emulates over 100 amps. This article originally appeared on Engadget at https://www.engadget.com/audio/fender-just-launched-its-own-free-daw-software-for-recording-music-130007067.html?src=rss
![]() |
by Steve Dent on (#6XDF6)
Nintendo hired Samsung to build the main chips for the Switch 2, including an 8-nanometer processor custom designed by NVIDIA, Bloomberg reported. That would mark a move by Nintendo away from TSMC, which manufactured the chipset for the original 2017 Switch. Nintendo had no comment, saying it doesn't disclose its suppliers. Samsung and NVIDIA also declined to discuss the matter. Samsung has previously supplied Nintendo with flash memory and displays, but building the Switch 2's processor would be a rare win for the company's contract chip division. Samsung can reportedly build enough chips to allow Nintendo to ship 20 million or more Switch 2s by March of 2026. NVIDIA's new chipset was reportedly optimized for Samsung's, rather than TSMC's, manufacturing process. Using Samsung also means that Nintendo won't be competing with Apple and others for TSMC's resources. During Nintendo's latest earnings call, President Shuntaro Furukawa said that the company didn't expect any component shortages with its new console - an issue that plagued the original Switch. Nintendo said in the same earnings report that it was caught by surprise by 2.2 million applications for Switch 2 pre-orders in Japan alone. Despite that, the company projected sales of 15 million Switch 2 units in its first year on sale to March 2026, fewer than analyst predictions of 16.8 million - likely due to the impact of Trump's tariffs. This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/nintendo-is-reportedly-using-samsung-to-build-the-main-switch-2-chips-120006403.html?src=rss
|
![]() |
by Mat Smith on (#6XDCY)
If you've been holding out for the latest 2025 PC models and graphics card loadouts, Computex is usually when you have to check your bank balance. The PC-centric tech show in Taiwan has kicked off with a barrage of new laptops from the likes of Razer, ASUS and Acer. ASUS has revealed the new ROG Zephyrus G14, with a 14-inch (of course) screen at 3K resolution, a refresh rate of 120Hz, 500 nits of peak brightness and Dolby Vision support. The G14 can be outfitted with up to an AMD Ryzen AI 9 HX 370 processor with 12 cores and 24 threads and an AMD XDNA NPU with up to 50 TOPS. The graphics card maxes out with the NVIDIA GeForce RTX 5080, while RAM options go up to 64GB and onboard storage up to 2TB. Meanwhile, Razer's new Blade 14 laptops will arrive with RTX 5000 series cards, while still remaining thin, thin, thin. Those NVIDIA cards can tap into the company's DLSS 4 tech to provide "the highest quality gaming experience possible in a 14-inch laptop," according to Razer. The laptops have AMD Ryzen AI 9 365 processors that can achieve up to 50 TOPS. And if you're feeling even more lavish, there's also the bigger Blade 18, which you can load out with the RTX 5090. And then there's Acer, which is doing something special with thermal interface materials. - Mat Smith Get Engadget's newsletter delivered direct to your inbox. Subscribe right here!
|
![]() |
by Valentina Palladino on (#659Y8)
Chores are just a fact of life, but there may be some chores you detest more than others. If vacuuming comes to mind for you, consider a robot vacuum cleaner. These smart home gadgets have come a long way in recent years. Previously, you'd shell out hundreds for basic dirt-sucking capabilities. Now, the best robot vacuums have gotten so advanced that even affordable machines have good suction power, and maybe even a handful of extra features like obstacle avoidance and home mapping. Prices for models with self-emptying bases and mopping capabilities are also falling. Engadget has tested dozens of robot vacuums over the years and we continue to try out the latest models as they become available. Below, we've collected our top picks for the best robot vacuums you can get right now.
|
![]() |
by Ian Carlos Campbell on (#6XD6C)
ASUS is updating both its ProArt laptop and its Chromebooks with the latest internals for Computex 2025, and giving both families of laptops a more premium look, with new colors and tasteful finishes. The ASUS ProArt A16 stands out as the most premium pick, with a black aluminum body, a "stealth" hinge that brings the top half of the laptop nearly flush with the bottom and a smudge-resistant finish that should hopefully avoid fingerprints. Inside, ASUS is offering an AMD Ryzen AI 9 HX processor and an NVIDIA GeForce RTX 5070 Laptop GPU, both of which qualify the new ProArt as a Copilot+ PC. That means you'll get access to Windows' growing list of AI features, and ASUS is also including two apps - StoryCube and MuseTree - that can run generative AI models entirely locally. All of this is packed into a laptop that's around half an inch thick and has a 16-inch 4K OLED display. In terms of Chromebooks, ASUS is offering both normal models and Chromebook Plus versions that support Google's AI tools. The ASUS Chromebook Plus CX34 has a 14-inch display that can fold flat and a 1080p webcam, alongside up to an Intel Core i5 and 8GB of LPDDR5 RAM. That's enough to offer Gemini features locally, and you'll get priority access to Gemini Advanced. The only real disadvantages are the giant ASUS logo that still looks awkward next to the similarly prominent Chromebook logo, and the limited color options: You can only pick between white or grey. The ASUS Chromebook CX14 and CX15 come with up to an Intel Core N355 processor, up to 8GB of LPDDR5 RAM and up to 256GB of storage. If you're curious about Google's AI features, you can also purchase a Plus version of the CX14. Whether you get the 14-inch or 15-inch model, both come with a respectable selection of ports, including HDMI for connecting to external displays. Either size also gets a variety of color options: blue, a silvery grey or a greenish grey, in either a matte or textured finish. The ASUS Chromebook CX34 is available now starting at $400 from both Walmart and Best Buy. Meanwhile, the rest of the above laptops won't be available until Q2 2025. The ProArt A16 starts at $2,500 from ASUS' online store and Best Buy. The Chromebook CX14 starts at $279 from Best Buy or Costco. The Chromebook Plus CX14 will be available for $429 from Best Buy. And finally, the Chromebook CX15 starts at $220 and will be able to be purchased from Best Buy and Amazon. This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/the-asus-proart-a16-laptop-gets-you-the-latest-from-amd-and-a-giant-screen-013037587.html?src=rss
|
![]() |
by Anna Washenko on (#6XD42)
Spotify is continuing to add more ways for listeners to directly make purchases within its iOS app. Following the streaming service's changes earlier this month to make purchasing subscriptions easier, there's now an option for users to buy audiobooks in Spotify. "Spotify submitted a new app update that Apple has approved: Spotify users in the United States can now see pricing, buy individual audiobooks and purchase additional 'Top Up' hours for audiobook listening beyond the 15 hours included in Premium each month," the company said in its updated blog post. The wave of changes stems from the ongoing court case between Apple and Epic Games surrounding fees for purchases made outside the App Store. While things appear to be swinging in favor of app and service providers, Apple is likely to continue challenging the rulings even as it makes changes to allow for external payment options. This article originally appeared on Engadget at https://www.engadget.com/entertainment/spotify-ios-users-can-now-buy-audiobooks-directly-from-the-app-230304105.html?src=rss
|
![]() |
by Will Shanklin on (#6XD43)
The Elgato Stream Deck is expanding into a hardware-agnostic platform. On Monday, the company unveiled a software version of the programmable shortcut device. Also on tap are a module for integration in third-party products and DIY projects, an Ethernet dock and an updated Stream Deck MK.2 with scissor-switch keys. Stream Deck MK.2 Scissor Keys: There's a new version of the popular Stream Deck MK.2. The only difference is that this version ditches membrane keys in favor of scissor-switch ones. Scissor keys (found on many laptops, like modern MacBooks) have a shorter travel distance and sharper actuation than the mushy-feeling ones on the (still available) legacy MK.2. The Stream Deck MK.2 Scissor Keys costs $150. Shipments begin around the beginning of June. Virtual Stream Deck: Virtual Stream Deck (VSD) is a software-only counterpart of the classic devices. Like the hardware versions, the VSD includes a familiar grid of programmable shortcut buttons. Anything you'd configure for a device like the Stream Deck MK.2 or XL, you can also do for the VSD. Place the interface anywhere on your desktop, pin it for quick access or trigger it with a mouse click or hotkey. Presumably to avoid cannibalizing its hardware business, Elgato is limiting the VSD to owners of its devices. Initially, it will only be available to people who have Stream Deck hardware or select Corsair peripherals (the Xeneon Edge and Scimitar Elite WE SE Mouse). The company says the VSD will soon be rolled out to owners of additional devices. The VSD has one frustrating requirement: It only works when one of those compatible accessories is connected to your computer. Unfortunately, that means you can't use it as a virtual Stream Deck replacement, mirroring your shortcuts while you and your laptop are on the go. That seems like a missed opportunity. Instead, it's more like a complement to Stream Deck hardware while it's connected - a way to get more shortcuts than the accessory supports. It's also a method for Corsair accessory owners to get Stream Deck functionality without buying one. Regardless, Virtual Stream Deck launches with the Stream Deck 7.0 beta software. Stream Deck Modules: Stream Deck Modules can be built into hardware not made by Elgato. So, hobbyists, startups and manufacturers can incorporate the OLED shortcut buttons into their DIY projects or products. The only difference is their more flexible nature. Otherwise, they function the same as legacy Stream Deck products. Stream Deck Modules have an aluminum chassis that's "ready to drop straight into a custom mount, machine or product." They're available in six-, 15- and 32-key variants. The modules begin shipping today. You'll pay $50 for the six-key version, $130 for the 15-key one and $200 for the 32-key variant. (If you're providing them for an organization, Elgato offers volume discounts.) Elgato Network Dock: The Elgato Network Dock gives Stream Deck devices their own Ethernet connections. This untethers the shortcuts from the desktop, allowing for "custom installations, remote stations and more." The Network Dock supports both Power over Ethernet (PoE) and non-PoE networks. You can set up its IP configuration on-device. The dock costs $80 and ships in August. This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/elgatos-stream-deck-breaks-free-from-the-companys-hardware-230052921.html?src=rss
|
![]() |
by Anna Washenko on (#6XD44)
New Orleans' police force secretly used constant facial recognition to seek out suspects for two years. An investigation by The Washington Post discovered that the city's police department was using facial recognition technology on a privately owned camera network to continually look for suspects. This appears to violate a city ordinance passed in 2022 that requires the NOLA police to use facial recognition only to search for specific suspects of violent crimes, and then to provide details about the scans' use to the city council. However, WaPo found that officers did not reveal their reliance on the technology in the paperwork for several arrests where facial recognition was used, and none of those cases were included in mandatory city council reports.

"This is the facial recognition technology nightmare scenario that we have been worried about," said Nathan Freed Wessler, an ACLU deputy director. "This is the government giving itself the power to track anyone - for that matter, everyone - as we go about our lives walking around in public." Wessler added that this is the first known case in a major US city where police used AI-powered automated facial recognition to identify people in live camera feeds for the purpose of making immediate arrests.

Police use and misuse of surveillance technology has been thoroughly documented over the years. Although several US cities and states have placed restrictions on how law enforcement can use facial recognition, those limits won't do anything to protect privacy if they're routinely ignored by officers.

Read the full story on the New Orleans PD's surveillance program at The Washington Post.

This article originally appeared on Engadget at https://www.engadget.com/ai/new-orleans-police-secretly-used-facial-recognition-on-over-200-live-camera-feeds-223723331.html?src=rss
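For context on the underlying technique: automated matching like this boils down to computing an embedding for each face detected in a frame and comparing it against a watchlist. A minimal sketch with the open-source face_recognition library - purely illustrative, with hypothetical file names, and no relation to the vendor system New Orleans police reportedly used:

```python
import face_recognition

# Encode one known face to act as a tiny "watchlist" (hypothetical photo).
known_image = face_recognition.load_image_file("watchlist_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Check every face detected in a single camera frame against it.
frame = face_recognition.load_image_file("camera_frame.jpg")
for candidate in face_recognition.face_encodings(frame):
    is_match = face_recognition.compare_faces(
        [known_encoding], candidate, tolerance=0.6  # the library's default threshold
    )[0]
    if is_match:
        print("Possible match in frame - would trigger an alert")
```

The civil-liberties concern is precisely that a loop like this can run continuously across hundreds of live feeds, effectively turning every camera into a checkpoint.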
|
![]() |
by Anna Washenko on (#6XD1F)
The latest generation of Motorola Razr smartphones was slated to go on sale last week beginning May 15, but availability has been delayed for purchases through select carriers. 9to5Google reported that the launch was delayed to May 22 for Verizon, Straight Talk, Total Wireless and Visible. We've reached out to Motorola for additional comment on the situation.

When a potential customer asked on X about availability after the phones were not seen on the expected May 15 date, a Verizon rep replied that the launch was "placed on hold." The Verizon blog post announcing the plans and pricing for the Razr models has been updated to show a May 22 release date.

Razr phones are still listed as available to buy at other mobile carriers. However, some customers have taken to Reddit, sharing that their orders have been delayed and speculating as to why. Most of them did not specify which channels or carriers they used for their purchases, so it's possible that all of the issues are centered on the four carriers mentioned above, although some posts claim their phones' new ship date will be May 28.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/motorola-has-mysteriously-delayed-its-new-razr-phones-but-only-for-some-carriers-211654192.html?src=rss
|
![]() |
by Igor Bonifacic, Cherlynn Low on (#6X1C4)
Google I/O, the search giant's annual developer conference, kicks off on Tuesday, May 20. The event is arguably the most important on the company's annual calendar, offering it the opportunity to share a glimpse at everything it has been working on over the past year - and contextualize its biggest priorities for the next twelve months.

The dance card for Google I/O was apparently so packed that the company spun off a dedicated Android showcase a whole week earlier. (See everything that was announced at the Android Show or go to our liveblog to get a feel for how things played out.) With that event now behind us, Google can stay focused on its most important core competency: AI.

Google's presentation will come on the heels of announcements from three big rivals in recent days. Further up the Pacific coast, Microsoft is hosting its Build developer conference, where it's already unveiled an updated Copilot AI app. Meanwhile, at the Computex show in Taiwan, NVIDIA CEO Jensen Huang highlighted a partnership with Foxconn to develop an "AI factory supercomputer" powered by 10,000 Blackwell AI chips. And Meta held its debut LlamaCon AI conference last month, but CEO Mark Zuckerberg's plans for AI dominance have reportedly since hit some snags. (Apple will share its updated AI roadmap on June 9 when its WWDC developers conference kicks off.)

If you'd like to tune in from home and follow along as Google makes its announcements, check out our article on how to watch the Google I/O 2025 keynote. We'll also be liveblogging the event, so you can just come to Engadget for the breaking news.

Android 16

For years, Android hasn't had much of a spotlight at Google's annual developer conference. Thankfully, last week's Android Show breakout let Google's mobile operating system take the spotlight for at least a day.

The presentation featured Android Ecosystem President Sameer Samat, who took over for Dave Burke in 2024. We saw Samat and his colleagues show off the new Material 3 Expressive design, and what we learned confirmed some of the features that were previously leaked, like the "Ongoing notifications" bar. Material 3 Expressive is also coming to Wear OS 6, and the company is expanding the reach of Gemini by bringing it to its smartwatch platform, Android Auto and Google TV. Android is also amping up its scam-detection features and adding a refined Find Hub that will gain support for satellite connectivity later in the year.

Speaking of timing, Google has already confirmed the new operating system will arrive sometime before the second half of the year. Though it did not release a stable build of Android 16 today, Samat shared during the show that Android 16 (or at least part of it) is coming next month to Pixel devices. And though the company did cover some new features coming to Android XR, senior director for Android Product and UX Guemmy Kim said during the presentation that "we'll share more on Android XR at I/O next week."

It clearly seems like more is still to come, and not just for Android XR. We didn't get confirmation on the Android Authority report that Google could add a more robust photo picker, with support for cloud storage solutions. That doesn't mean it won't be in Android 16; it might just be something the company didn't get to mention in its 30-minute showcase. 
Plus, Google has lately been releasing new Android features on a quarterly cadence, rather than waiting for an annual update window. It's possible we'll see more added to Android 16 as the year progresses.

One of the best places to get an idea of what's to come in Android 16 is its beta version, which has already been available to developers and is currently in its fourth iteration. For example, we learned in March that Android 16 will bring Auracast support, which could make it easier to listen to and switch between multiple Bluetooth devices. This could also enable people to receive Bluetooth audio on hearing aids they have paired with their phones or tablets.

Android XR

Remember Google Glass? No? How about Daydream? Maybe Cardboard? After sending (at least) three XR projects to the graveyard, you would think even Google would say enough is enough. Instead, the company is preparing to release Android XR after previewing the platform at the end of last year. This time around, the company says the power of its Gemini AI models will make things different. We know Google is working with Samsung on a headset codenamed Project Moohan. Last fall, Samsung hinted that the device could arrive sometime this year.

Whether or not Google and Samsung demo Project Moohan at I/O, I imagine the search giant will have more to say about Android XR and the ecosystem partners it has worked to bring to its side for the initiative. This falls in line with what Kim said about more on Android XR being shared at I/O.

AI, AI and more AI

If Google felt the need to split off Android into its own showcase, we're likely to get more AI-related announcements at I/O than ever before. The company hasn't provided many hints about what we can expect on that front, but if I had to guess, features like AI Overviews and AI Mode are likely to get substantive updates. I suspect Google will also have something to say about Project Mariner, the web-surfing agent it demoed at I/O 2024. Either way, Google is an AI company now, and every I/O moving forward will reflect that.

Project Astra

Speaking of AI, Project Astra was one of the more impressive demos Google showed off at I/O 2024. The technology made the most of the latest multi-modal capabilities of Google's Gemini models to offer something we hadn't seen before from the company. It's a voice assistant with advanced image recognition features that allows it to converse about the things it sees. Google envisions Project Astra one day providing a truly useful artificial assistant.

However, after seeing an in-person demo of Astra, the Engadget crew felt the tech needed a lot more work. Given the splash Project Astra made last year, there's a good chance we could get an update on it at I/O 2025.

A Pinterest competitor

According to a report from The Information, Google might be planning to unveil its own take on Pinterest at I/O. That characterization is courtesy of The Information, but based on the features described in the article, Engadget team members found it more reminiscent of Cosmos. Cosmos is a pared-down version of Pinterest, letting people save and curate anything they see on the internet. It also allows you to share your saved pages with others.

Google's version, meanwhile, will reportedly show image results based on your queries, and you can save the pictures in different folders based on your own preferences. So say you're putting together a lookbook based on Jennie from Blackpink. You can search for her outfits and save your favorites in a folder you might title "Lewks."

Whether this is simply built into Search or exists as a standalone product is unclear, and we'll have to wait until I/O to see whether the report was accurate and what the feature is really like.

Wear OS

Last year, Wear OS didn't get a mention during the company's main keynote, but Google did preview Wear OS 5 during the developer sessions that followed. The company only began rolling out Wear OS 5.1 to Pixel devices in March. This year, we've already learned at the Android Show that Wear OS 6 is coming, with Material 3 Expressive gracing its interface. Will we learn more at I/O? It's unclear, but it wouldn't be a shock if that were all the airtime Wear OS gets this year.

NotebookLM

Google has jumped the gun and already launched a standalone NotebookLM app ahead of I/O. The machine-learning note-taking app, available in desktop browsers since 2023, can summarize documents and even synthesize full-on NPR-style podcast summaries to boot.

Everything else

Google has a terrible track record when it comes to preventing leaks within its internal ranks, so the likelihood the company could surprise us is low. Still, Google could announce something we don't expect. As always, your best bet is to visit Engadget on May 20 and 21. We'll have all the latest from Google then, along with our liveblog and analysis.

Update, May 5 2025, 7:08PM ET: This story has been updated to include details on a leaked blog post discussing "Material 3 Expressive."

Update, May 6 2025, 5:29PM ET: This story has been updated to include details on the Android 16 beta, as well as Auracast support.

Update, May 8 2025, 3:20PM ET: This story has been updated to include details on how to watch the Android Show and the Google I/O keynote, as well as tweak the intro for freshness.

Update, May 13 2025, 3:22PM ET: This story has been updated to include all the announcements from the Android Show and a new report from The Information about a possible image search feature debuting at I/O. The intro was also edited to accurately reflect what has happened since the last time this article was updated.

Update, May 14 2025, 4:32PM ET: This story has been updated to include details about other events happening at the same time as Google I/O, including Microsoft Build 2025 and Computex 2025.

Update, May 19 2025, 5:13PM ET: Updated competing AI news from Microsoft, Meta and NVIDIA, and contextualized final rumors and reports ahead of I/O.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-io-2025-new-android-16-gemini-ai-and-everything-else-to-expect-at-tuesdays-keynote-203044742.html?src=rss
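NotebookLM itself has no public API, but the document summarization it performs is the kind of task Google exposes through its Gemini models. A minimal sketch using the google-generativeai Python SDK - the model name, file name and prompt are illustrative assumptions, not a look at how NotebookLM works internally:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

# Feed a local document to the model and ask for a summary.
with open("notes.txt", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    "Summarize the key points of the following document in five bullets:\n\n"
    + document
)
print(response.text)
```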
|
![]() |
by Ian Carlos Campbell on (#6XD1G)
SAG-AFTRA, the labor union representing performers in film, television and interactive media, has submitted an Unfair Labor Practice (ULP) filing against Epic Games for using an AI-generated version of Darth Vader's voice in the current season of Fortnite. Disney and Epic first announced on May 16 that Fortnite would feature a take on the character using an AI-generated version of James Earl Jones' voice.

The issue in SAG-AFTRA's eyes is that the union is currently on strike while it negotiates a new contract with video game companies, and using an AI-generated voice represents Epic refusing to "bargain in good faith." The AI-powered version of Darth Vader is interactive, but that doesn't change the fact that the character has frequently been voiced in video games by actors other than Jones.

Disney got permission from Jones and his family to use AI to replicate his voice for film and TV in 2022, so there is precedent for an AI performance of this kind. After Jones' death in September 2024, the AI route technically became the only way to use Darth Vader's "original voice," other than reusing clips of past performances. Unless, of course, Epic or Disney wanted to pay another actor to play Darth Vader, which would require coming to an agreement on a new contract for video game performers.

ULP filings are reviewed by the National Labor Relations Board and can lead to hearings and injunctive relief (a court ordering Epic to remove Darth Vader from the game until a settlement is reached, for example). They are also often used as a way for unions to prod companies into coming back to the bargaining table or responding with a more realistic offer. SAG-AFTRA's Interactive Media Strike has been ongoing since July 26, 2024. SAG-AFTRA members originally voted in favor of a strike in September 2023, seeking better wages and AI protections.

Engadget has reached out to both Disney and Epic for comment on SAG-AFTRA's ULP filing. We'll update this article if we hear back.

This article originally appeared on Engadget at https://www.engadget.com/gaming/sag-aftra-says-fortnites-ai-darth-vader-voice-violates-fair-labor-practices-202009163.html?src=rss
|
![]() |
by Anna Washenko on (#6XD1H)
Thrasher is coming to flat screens, with a launch on Steam and Steam Deck scheduled for later in 2025. The new platform releases follow the VR game's debut last summer on the Meta Quest and Apple Vision Pro. Devs Brian Gibson and Mike Mandel, collaborating under the moniker Puddle, announced the new hardware additions in a fittingly surreal trailer today.

Both Gibson and Mandel have a history of making music- and audio-driven interactive experiences. Mandel worked on Fuser, Rock Band VR and Fantasia: Music Evolved. Gibson's previous project was the VR title Thumper, which bills itself with the tagline "a rhythm violence game." (Imagine Tetris Effect if it were filled with aggression rather than transcendent joy. But in a really, really good way.)

Thrasher follows their legacy of immersive and unsettling games with its strange concept of a cosmic eel doing battle against a space baby, all set to a throbbing soundtrack. The addition of a non-virtual reality option is an exciting development for fans of the title, and it should be interesting to see how well the pair adapts their VR control scheme to gamepads and mouse/keyboard setups.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/vr-bop-thrasher-is-heading-to-pc-and-steam-deck-200753057.html?src=rss
|