Feed engadget Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics


Link https://www.engadget.com/
Feed https://www.engadget.com/rss.xml
Copyright © Yahoo 2025
Updated 2025-12-22 01:32
Cruise's robotaxis are heading to Houston and Dallas
Cruise's robotaxis are continuing their push across the Lone Star State. The self-driving car company has announced plans to begin supervised testing in two more Texas cities, Houston and Dallas, joining its earlier move into Austin (yes, the home of the still robotaxi-less Tesla). For now, the expansion is focused on familiarizing the cars with the areas rather than picking up passengers. Residents of the two cities can expect to start seeing Cruise's robotaxis cruising down the streets with a safety driver inside.

In a tweet sharing the news, Cruise said supervised testing in Houston should start in a matter of days, while Dallas will follow "shortly thereafter." Cruise's robotaxis are already available on a limited basis overnight in Austin and Phoenix, and all day in certain areas of San Francisco.

The speed at which the General Motors-owned Cruise is advancing has raised some concerns. In January, San Francisco's Transportation Authority asked regulators to limit or temporarily pause Cruise and competitor Waymo's expansion, citing repeated cases of their cars inexplicably stopping in traffic and blocking emergency vehicles. So far, things have done anything but slow down. Since the request, Cruise has celebrated one million fully driverless miles on top of making its robotaxis available at all times in San Francisco — though full access is only for employees.

Right now, there's no set date for when the public will have access to rides in Houston or Dallas. Going off the timeline of other Cruise expansions, it will likely take at least a few months until anyone can hail a self-driving car in either city. Even then, it will probably start with a small group of people and only at night. Anyone interested in riding in one of Cruise's robotaxis has to sign up for a waiting list and be accepted to create an account. The company says its limited supply of cars will keep its services invite-only for the time being.
Amazon includes a $50 gift card when you order the Google Pixel 7a
Google's excellent Pixel 7a just hit the market at an already solid price of $499, but you can now save more thanks to a deal at Amazon. If you order now, you get a free $50 Amazon gift card that can be used for other purchases, effectively bringing the price down to $449 if you plan to order other things from Amazon.

The Pixel 7a not only received praise in our Engadget review, but instantly became the best midrange Android smartphone in our latest roundup. Google has nailed the balance between price and performance, offering the same Tensor G2 chip as the Pixel 7, along with a 90Hz display, wireless charging and a higher-resolution rear camera.

The two big changes over the Pixel 6a are a new high-res 64-megapixel main camera in back, along with a front 13-megapixel selfie camera that can record video in 4K. The Pixel 7a beats other smartphone cameras in its price range so handily for photography and video that it really needs to be compared to flagship devices like the Pixel 7 Pro and Samsung's S23 Ultra.

The extra resolution goes a long way toward eliminating any concerns about the lack of a telephoto, as you can zoom in four times and still get a 16-megapixel image. And Google's Night Sight mode remains the best in the business, even though it does add a little more noise than we expected.

In sum, the Pixel 7a delivers 95 percent of what you get from the regular Pixel 7, but for $100 less. The deal gives you a $50 Amazon card on top of that, which could be spent on accessories like a protective case. More importantly, you get a rare thing — a deal on a Google Pixel phone that just entered the market.

This article originally appeared on Engadget at https://www.engadget.com/amazon-includes-a-50-gift-card-when-you-order-the-google-pixel-7a-091023063.html?src=rss
Disney+ and Hulu will merge into a single app later this year
A "one-app experience" that combines Disney+ and Hulu content will launch in late 2023, Disney CEO Bob Iger announced during the company's latest earnings call. He said the company will continue offering Disney+, Hulu and ESPN+ as standalone options, but that combining services "is a logical progression" of its direct-to-consumer offerings "that will provide greater opportunities for advertisers, while giving bundle subscribers access to more robust and streamlined content..."

Since Comcast still owns 33 percent of Hulu, this announcement suggests that Disney could be thinking of buying the cable TV and media company's stake. Iger didn't elaborate on the company's plans, though, and only said that Disney has had "constructive" talks with Comcast about the future of Hulu.

In addition to announcing the combined streaming app, Iger also revealed that Disney+ is getting another price increase, after the company added $3 to its ad-free streaming tier's monthly fee in December. He didn't say when the company is raising the service's prices, but when it does, the ad-free and ad-supported tiers will cost more than $11 and $8, respectively.

While Disney reported (PDF) a 26 percent decrease in operating losses for its streaming business, a $659 million loss is still massive. The price hike's announcement didn't come out of nowhere, seeing as the company promised investors that the business will be profitable by the end of the 2024 fiscal year. The question is whether the combined Disney+ and Hulu app can convince new users to pay for a subscription — or old subscribers to come back. Disney+ lost 4 million subscribers in the first quarter of 2023 after shedding 2.4 million users the previous quarter.

This article originally appeared on Engadget at https://www.engadget.com/disney-and-hulu-will-merge-into-a-single-app-later-this-year-083536664.html?src=rss
Sony's Xperia 1 V phone is a photo and video powerhouse
Yes, Sony is still making smartphones, and its latest is the flagship Xperia 1 V, designed for both photographers and vloggers. It features a new stacked, backside-illuminated (BSI) sensor along with content-creator features found in Sony's Alpha-series cameras.

The Xperia 1 V has a new image sensor called "Exmor T for Mobile" designed to be faster and work better with computational (AI) photography, while offering "approximately double" the low-light performance of the Xperia 1 IV, Sony said. As you'd expect in a flagship, it offers other high-end features like the Snapdragon 8 Gen 2 Mobile Platform, a 6.5-inch 4K 120Hz OLED HDR display, a 5,000mAh battery that allows for up to 20 hours of continuous 4K playback, up to 12GB of RAM and more. With that, Sony is promising "best in class" gaming performance, thanks to a Game Enhancer function that provides visual and auditory support. It also lets players livestream their gaming directly to YouTube.

The key feature is clearly the camera system, though. The main 24mm f/1.9-equivalent 52-megapixel camera features a Type 1/1.35-inch (about 12mm diagonal) Exmor T sensor that's 1.7 times larger than the Xperia 1 IV's sensor, Sony said. It also comes with an ultrawide 12-megapixel camera and an 85-125mm 12-megapixel optical telephoto zoom, like the one on the Xperia 1 IV. The front 12-megapixel camera has a Type 1/2.9-inch sensor.

Purists will be able to shoot video and photos using the professional modes that allow for full manual control. Chief among those is the Photography Pro mode designed for creative control. It also allows live streaming while letting creators see viewer comments in real time.

If you set it to Basic mode, though, you'll get a good dose of the computational imaging seen in other Android phones, including a Night mode and color settings for subjects like flowers and a blue sky. It also delivers real-time eye autofocus and tracking, along with high-speed continuous shooting of up to 30 fps with auto-exposure and AF enabled.

For vloggers and content creators, it now features the same Product Showcase setting found on Sony's vlogging cameras like the ZV-E1. The new sensor also promises improved skin tones, thanks to extra saturation available on the sensor. And it has a new voice-priority mic placed near the rear camera that can pick up voices even in busy outdoor locations.

One cool feature that might justify the price alone for many video shooters is the ability to use the phone as a monitor for select Sony Alpha cameras. The Xperia 1 IV could do that as well, but the new model offers multiple display options with waveforms, gridlines and zebra lines normally only found on professional field monitors. You can also control settings and record content to the phone, features that weren't available before. Meanwhile, the phone's microphones can capture sound while you monitor audio via the Xperia 1 V's headphone jack.

As with past Xperia models, the catch here is the price. The Xperia 1 V starts at $1,400 (in khaki green or black) with 12GB of RAM and 256GB of storage (expandable via a microSD slot). That's a lot of money for most smartphone users (even flagship buyers), but it might make sense for content creators, avid photographers and others.

Along with the Xperia 1 V, Sony also unveiled a far more mainstream smartphone, the Xperia 10 V. It's powered by a Snapdragon 695 chipset and offers a 6.1-inch 1080p OLED display that's 50 percent brighter than before, but only refreshes at 60Hz. The camera system features a main 48-megapixel Type 1/2.0-inch sensor with a wide lens, along with a 2x telephoto and an ultrawide. Other features include a 5,000mAh battery and up to 6GB of RAM. It's priced at €449 in Europe, with sales set to start in June. US pricing and availability have not yet been announced.

This article originally appeared on Engadget at https://www.engadget.com/sonys-xperia-i-v-phone-is-a-photo-and-video-powerhouse-074625053.html?src=rss
Roland S-1 Tweak Synth is the most compelling member of the Aira Compact family
Last year during Superbooth, Roland unveiled the Aira Compact series — its first true competitors to Korg's wildly successful Volca line. Now, the company is back for Superbooth 2023 with a new addition to the family, the S-1 Tweak Synth. Like the T-8 and J-6, the S-1 uses Roland's Analog Circuit Behavior (ACB) technology to recreate the sound of an iconic instrument from its past, the SH-101. While the core of the S-1 is ultimately quite familiar, in true Roland fashion there are a lot of modern features packed in as well. And in even truer Roland fashion, many of them are buried in a bewildering array of indecipherable menus and button combinations.

I'm going to get this out of the way right now, because it's a recurring theme in almost every review I write of a Roland product. The interface here is truly mind-boggling once you get beyond the immediate hands-on controls. Almost every knob and button has at least one shift function. Many of them aren't labeled. And the only visual feedback you're given for anything is via a four-character, seven-segment LED display. A seven-segment LED display in 2023! I dare anyone to tell me what the hell "Nod.d" means without looking it up in the manual. And what about the D-Motion button suggests that this is where the probability and substep options are located? I'm not trying to suggest that I should be able to figure out every feature on an instrument right away without reading a manual. But I also shouldn't feel like a lost ball in the tall weeds. Especially not when we're talking about an entry-level $200 synth.

The clunky interface here is particularly frustrating because the S-1 is otherwise kind of great. It actually has a decent amount of hands-on controls. It offers far more depth than any of the previous Aira Compact entries, even if you never touch the shift button. There's an LFO with six different waveform options, including random. The oscillator section allows you to blend together a saw and square wave with pulse width modulation, as well as a sub oscillator and noise source. And, unlike the original SH-101, which was monophonic, the S-1 is polyphonic, so you can play actual chords (up to four notes).

And the oscillators sound great. I'm not always the hugest fan of Roland's ACB sound engine, but it shines here. Thick bass, acid leads and 16-bit JRPG arps are all easily attainable and satisfying. This is easily the best-sounding member of the Aira Compact family.

The filter is excellent too. It stops just shy of self-oscillation, but still gets pretty sharp and can certainly endanger your eardrums if you have your headphones up too loud. But at the lower end of the cutoff spectrum you get a surprising warmth and silkiness from this dirt-cheap emulation of a classic analog circuit.

Roland even threw in a delay, seven reverbs and four chorus options. The reverbs are merely OK, but the delay is a perfectly solid digital effect. You can even set the delay time to 1/128 and crank the … Anb(?) reverb model to get howling metallic textures that are out of this world. The choruses, pulled from the Juno and JX-3P, are truly excellent. It's just a shame they're buried in the arcane menu system, because I want to turn them on for almost every patch I make.

There's a solid arpeggiator, and you can even record directly from the arpeggiator into the 64-step sequencer. That's pretty handy for laying down glassy high notes, then going back in and overdubbing some bass to accentuate the chord changes. I will say, though, that I haven't quite figured out how to get to 64 steps. By default, sequences are 16 steps; there is no obvious way to go beyond that, and I was not provided a manual with my review unit.

Roland also added motion sequencing to the S-1, so you can tweak settings as you're recording to, say, slowly increase the amount of delay over the course of a pattern. Or even go into the menu and turn the chorus on and off, or change the sub-oscillator tuning, on a per-step basis. It really opens up a lot of possibilities on an instrument this small and affordable. You can ratchet notes and set per-step probability, and there's even Step Loop for quickly mangling your sequence into new riffs on the fly, though that is far more useful on a drum machine.

If the feature list for the S-1 ended here, that would be perfectly fine. But Roland added more. So. Much. More. Maybe too much more.

There's a draw and chop function, which allows you to create custom waveshapes for even wilder tones. Then use the multiplier on your freshly drawn waveshape, or the comb on your chopped wave, for hard-synced and dissonant metallic noises. You can also turn the noise source into a sort of pulsing riser effect. Though, I was unaware of this when I accidentally activated it while messing around one afternoon, and couldn't for the life of me figure out what was going on. This is one of those many unlabeled features hidden behind a seemingly arbitrary button combination. (For the record, you hold down shift, then press 1 and 2 simultaneously to cycle through a few different riser modes.)

The one last feature worth mentioning (I think) is a bit of a head-scratcher. D-Motion allows you to change parameters by picking up the synth and tilting it about. It's a fun novelty for a few minutes, but it doesn't feel practical. Though, at least it makes more sense on the small, battery-powered and portable S-1 than it does on the SH-4d.

Beyond that, the S-1 resembles the rest of the Aira Compact line. It's plastic, has a rechargeable battery built in and weighs next to nothing. There are 3.5mm MIDI, sync and audio jacks for connecting other gear. And USB-C for charging, but also for sending audio and MIDI to computers, phones and tablets. My one other minor gripe, physically at least, is that the mushy keys are painfully small. Playing chords on this thing is a bit of a headache. But not much more so than on any other instrument of this ilk, like the Modal Electronics Skulpt or a Volca Keys.

The S-1 Tweak Synth is both the most compelling and most frustrating member of the Aira Compact series. It has plenty of hands-on controls, sounds great and is deceptively powerful for the price. But it is also, perhaps, too complex. It tries to do too many things and ends up feeling cluttered and confusing, which is the exact opposite of what you want from what is essentially a $200 music toy.

What made Korg's Volcas so successful wasn't their laundry list of features, it was their simplicity. They sounded good enough, were affordable and were unintimidating. Roland seems to have gotten the first two parts of the equation down. Now it just needs to work on the last ingredient.

This article originally appeared on Engadget at https://www.engadget.com/roland-s-1-tweak-synth-is-the-most-compelling-member-of-the-aira-compact-family-070014423.html?src=rss
Watch the Google I/O 2023 keynote in under 18 minutes
Google's I/O event this year was jam-packed with new product launches and an in-depth introduction to its new generative AI offerings. The star of its new set of devices was, perhaps, the new Pixel Fold, a veritable rival to Samsung's foldables powered by a Tensor G2 chip. Like the Samsung Galaxy Fold, it opens like a book so you can fully use its 7.6-inch display, though it also comes with a 5.8-inch external display. It's now available for pre-order and will set you back $1,799 when it starts shipping in June.

The company also unveiled its new midrange phone, the Pixel 7a, which will cost you $499. In addition, the Pixel Tablet is now available for pre-order for the same price. You can use the 11-inch tablet as a smart home display with Google Assistant and Chromecast when it's attached to its speaker dock. On its own, it can last for 12 hours, and while it doesn't come with a stylus, it does support third-party pens.

But the most important and relevant unveiling of the event was the company's PaLM 2 AI language model, which is the technology behind its Bard AI chatbot and which will power new features across its products. Bard will soon have the ability to decipher images in your queries and respond with images in turn — it's now available without a waitlist in 180 countries. Gmail will have the ability to craft responses to emails for you, while Photos is getting a Magic Editor that can move objects in your pictures. You can get a glimpse of all of Google's announcements in a condensed version of its I/O keynote above.

This article originally appeared on Engadget at https://www.engadget.com/watch-the-google-io-2023-keynote-in-under-18-minutes-052059113.html?src=rss
Twitter's encrypted DMs are here — but only for verified users
Twitter is beginning to roll out its long-promised encrypted direct messaging feature. However, the initial rollout comes with some major limitations that could make it less than ideal for privacy-conscious Twitter users.

Of note, the feature is currently only available to verified Twitter users, which includes Twitter Blue subscribers and members of a "Verified Organization." It's not clear if this is just for the early rollout or if encryption will be added to the growing list of exclusive features for users with a checkmark. For now, an encrypted chat requires both users to be verified, according to the company.

There are also some significant limitations to the feature itself. It doesn't support group messages, or any kind of media other than links. The company also doesn't allow users to report an encrypted message directly, advising on a help page that users should report accounts separately if they "encounter an issue with an encrypted conversation participant."

Finally, the level of encryption appears to be less secure than what other apps offer. For one, message metadata is not encrypted. Furthermore, Twitter notes that "currently, we do not offer protections against man-in-the-middle attacks" and suggests that the company itself is still able to access encrypted DMs without the participants knowing. "If someone — for example, a malicious insider, or Twitter itself as a result of a compulsory legal process — were to compromise an encrypted conversation, neither the sender or receiver would know," the company explains on a help page. It added that it's working on improvements that would make such exploits more "difficult."

That's particularly notable because it falls far short of the standard Twitter owner Elon Musk has described when expressing his desire to add encryption for Twitter DMs. He has said he wants it to be impossible for the company to access users' encrypted messages even if "someone puts a gun to our heads." In a tweet, Twitter security engineer Christopher Stanley acknowledged the shortcoming: "We're not quite there yet, but we're working on it."

For those who are verified and want to try out the feature anyway, encrypted messaging can be accessed via the info menu (that's the same menu you use to block or report a conversation) within a particular DM. Once encryption is enabled, encrypted messages will appear as a separate message thread, with labels at the top of the chat to indicate that the conversation is encrypted.

This article originally appeared on Engadget at https://www.engadget.com/twitters-encrypted-dms-are-here--but-only-for-verified-users-234934842.html?src=rss
Scammers used AI-generated Frank Ocean songs to steal thousands of dollars
More AI-generated music mimicking a famous artist has made the rounds — while making lots of money for the scammer passing it off as genuine. A collection of fake Frank Ocean songs sold for a reported CA$13,000 (about US$9,722) last month on a music-leaking forum devoted to the Grammy-winning singer, according to Vice. If the story sounds familiar, it's essentially a recycling of last month's AI Drake / The Weeknd fiasco.

As generative AI takes the world by storm — Google just devoted most of its I/O 2023 keynote to it — people eager to make a quick buck through unscrupulous means are seizing the moment before copyright laws catch up. It's also caused headaches for Spotify, which recently pulled not just Fake Drake but tens of thousands of other AI-generated tracks after receiving complaints from Universal Music.

The scammer, who used the handle mourningassasin, told Vice they hired someone to make "around nine" Ocean songs using "very high-quality vocal snippets" of the "Thinkin Bout You" singer's voice. The user posted a clip from one of the fake tracks to a leaked-music forum and claims to have quickly convinced its users of its authenticity. "Instantly, I noticed everyone started to believe it," mourningassasin said. The fact that Ocean hasn't released a new album since 2016 and recently teased an upcoming follow-up to Blond may have added to the eagerness to believe the songs were real.

The scammer claims multiple people expressed interest in private messages, offering to "pay big money for it." The tracks reportedly fetched $3,000 to $4,000 each in mid-to-late April. The user has since been banned from the leaked-music forum, which may be having an existential crisis as AI-generated music makes it easier than ever to produce convincing knockoffs. "This situation has put a major dent in our server's credibility, and will result in distrust from any new and unverified seller throughout these communities," said the owner of a Discord server where the fake tracks gained traction.

This article originally appeared on Engadget at https://www.engadget.com/scammers-used-ai-generated-frank-ocean-songs-to-steal-thousands-of-dollars-222042845.html?src=rss
May's PS Plus Extra and Premium lineup includes 'Ratchet & Clank: Rift Apart'
Last Friday was the final day for PS5 owners to claim Sony's PlayStation Plus Collection, a bundle that came with nearly 20 free games, including Bloodborne and God of War (2018). When Sony announced at the start of February that the collection was going away, the company said it would instead focus on growing the PlayStation Plus library of monthly games. Unsurprisingly, then, May's PS Plus lineup is chock-full of titles you can download to your console, provided you subscribe to PS Plus Extra or Premium. In all, Sony will add 19 titles to the service this month.
Google makes it easier to build sleek Android TV apps
Expect to see better-looking Android TV apps, as well as more offerings from developers, in the future. At Google I/O today, the company announced the alpha version of Compose for TV, a framework that will make it easier to build attractive Android TV apps with less code and more intuitive tools. Google says developers will be able to bring over their existing code, and by moving to Compose it should be easier to update apps going forward. The framework has direct access to the Android APIs — which most devs are already used to — and will support code from existing Android mobile and tablet apps. Google is also unveiling a set of TV design guidelines to help developers optimize their apps for big screens.

Google has certainly come a long way when it comes to home entertainment. Its first Google TV platform, released in 2010 before the rise of streaming services, fizzled and died. It bounced back with the cheaper and far more popular Chromecast, which eventually led to Android TV, a platform that now houses a revived "Google TV" interface.

When it comes to streaming platforms, Apple still has more tools for developers to build attractive TV apps, but it's nice to see Google making an effort. It's not like there's much competition from Roku or Amazon's Fire TV devices. Android TV's true power is its ubiquity, much like Android itself. According to Strategy Analytics, Android TV shipped on more devices than any other streaming platform last year. (Even my Formovie projector has Android TV built in.)

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/android-tv-compose-for-tv-ui-framework-210056293.html?src=rss
Google’s Project Starline booth gave me a holographic meeting experience
It's been two years since Google introduced its Project Starline holographic video conferencing experiment, and though we didn't hear more about it during the keynote at I/O 2023 today, there's actually been an update. The company quietly announced that it's made new prototypes of the Starline booth that are smaller and easier to deploy. I was able to check out a demo of the experience here at Shoreline Park, and I'm surprised how much I enjoyed it.

But first, let's get one thing out of the way. Google did not allow us to take pictures or video of the setup. It's hard to capture holographs on camera anyway, so I'm not sure how effective it would have been. Due to that limitation, though, we're not going to have a lot of photos for this post, and I'll do my best to describe the experience in words.

After some brief introductions, I entered a booth with a chair and desk in front of the Starline system. The prototype itself was made up of a light-field display that looked like a mesh window, which I'd guess is about 40 inches wide. Along the top, left and right edges of the screen were cameras that Google uses to get the visual data required to generate a 3D model of me. At this point, everything looked fairly unassuming.

Things changed slightly when Andrew Nartker, who heads up the Project Starline team at Google, stepped into frame. He sat in his chair in a booth next to mine, and when I looked at him dead-on, it felt like a pretty typical 2D experience, except in what felt like very high resolution. He was life-sized, and it seemed as if we were making eye contact and holding each other's gaze, despite not looking into a camera. When I leaned forward or leaned closer, he did too, and nonverbal cues like that made the call feel a little richer.

What blew me away, though, was when he picked up an apple (haha, I guess Apple can say it was at I/O) and held it out towards me. It was so realistic that I felt as if I could grab the fruit from his fist. We tried a few other things later — fist bumping and high fiving — and though we never actually made physical contact, the positioning of limbs on the call was accurate enough that we could grab the projections of each other's fists.

The experience wasn't perfect, of course. There were parts where, when Nartker and I were talking at the same time, I could tell he could not hear what I was saying. Every now and then, too, the graphics would blink or appear to glitch. But those were very minor issues, and overall the demo felt very refined. Some of the issues could even be chalked up to spotty event WiFi, and I can personally attest to the fact that the signal was indeed very shitty.

It's also worth noting that Starline was essentially capturing visual and audio data of me and Nartker, sending it to the cloud over WiFi, creating a 3D model of both of us, and then sending it back down to the light-field display and speakers on the prototype. Some hiccups are more than understandable.

While the earliest Starline prototypes took up entire rooms, the current version is smaller and easier to deploy. To that end, Google announced today that it had shared some units with early access partners including T-Mobile, WeWork and Salesforce. The company hopes to get real-world feedback to "see how Project Starline can help distributed workforces stay connected."

We're clearly a long way off from seeing these in our homes, but it was nice to get a taste of what Project Starline feels like so far. This was the first time media demos were available, too, so I'm glad I was able to check it out for myself and tell you about it instead of relying on Google's own messaging. I am impressed by the realism of the projections, but I remain uncertain about how effectively this might substitute for or complement in-person conversations. For now, though, we'll keep an eye on Google's work on Project Starline and keep you posted.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/googles-project-starline-booths-gave-me-a-holographic-meeting-experience-205804960.html?src=rss
Chipolo's new item trackers are basically AirTags for Android
Google doesn't have a direct equivalent to Apple's AirTags, but this might come close. Chipolo has teamed up with Google to introduce the One Point (shown above) and Card Point (below) item trackers, which work exclusively with Android's Find My Device network. They take advantage of the phone platform's ubiquity not only to increase the chances of locating your gear, but to find unknown trackers that might be used to spy on your whereabouts.

Both trackers support Android's Fast Pair to speed through setup, and both are water-resistant. The differences extend beyond their shapes. The One Point is the loudest, with a 120dB ring, and lasts a year on its replaceable battery. The Card Point is quieter at 105dB and relies on a renewal program when the battery wears down, but it also lasts for two years.

Chipolo is taking pre-orders for both devices now. The One Point sells for $28, and the Card Point is available for $35. Four-packs cost $79 and $112, respectively, and you can get a One/Card bundle for $77. Orders should ship by the second half of July. You'll need a phone running at least Android 9 with Google Play Services, which covers most phones released in North America and Europe over the past five years.

The Point trackers are really counterparts to Chipolo's iPhone-oriented One Spot and Card Spot. However, they also reflect Google's broader effort to flesh out the Android ecosystem. You don't have to rely on a third-party tracking network like Tile's or Samsung's to find missing items. Of course, this also locks you into Android — you'll have to replace your trackers if you ever switch platforms.

This article originally appeared on Engadget at https://www.engadget.com/chipolos-new-item-trackers-are-basically-airtags-for-android-204801185.html?src=rss
You can now stream Android phone apps to your Chromebook
You won't have to install Android apps on your Chromebook when you need them in a pinch. After a preview at CES last year, Google has enabled app streaming through Phone Hub in Chrome OS Beta. You can quickly check your messages or track the status of a food order without having to sign in again.

Once Phone Hub is enabled, you can stream apps either by clicking a messaging app notification or by browsing the Hub's Recent Apps section after you've opened a given app on your phone. Google doesn't describe certain app types as off-limits, although it's safe to say you won't want to play action games this way.

The feature works with "select" phones running Android 13 or newer. The Chromebook and handset need to be on the same WiFi network and physically close by, although you can use the phone as a hotspot through Instant Tethering if necessary.

Google is ultimately mirroring the remote Android app access you've had in Windows for years. However, the functionality might be more useful on Chromebooks. While app streaming won't replace native apps, it can save precious storage space and spare you from having to jump between devices just to complete certain tasks. This approach is also more manufacturer-independent, whereas Microsoft's is restricted to Samsung and Honor phones.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-stream-android-phone-apps-to-your-chromebook-202830500.html?src=rss
Google opens up access to its text-to-music AI
AI-generated music has been in the spotlight lately, from a track that seemingly featured vocals from Drake and The Weeknd gaining traction to Spotify reportedly removing thousands of songs over concerns that people were using them to game the system. Now, Google is wading further into that space: the company is opening up access to its text-to-music AI, called MusicLM.

Google detailed the system back in January when it published research on MusicLM. At the time, the company said it didn't have any plans to offer the public access to MusicLM due to ethical concerns related to copyrighted material, some of which the AI copied directly into the songs it generated.

The generative AI landscape has shifted dramatically this year, however, and now Google feels comfortable enough to let the public try MusicLM. "We’ve been working with musicians like Dan Deacon and hosting workshops to see how this technology can empower the creative process," Google Research product manager Hema Manickavasagam and Google Labs product manager Kristin Yim wrote in a blog post.

As TechCrunch points out, the current public version of MusicLM doesn't allow users to generate music with specific artists or vocals. That could help Google avoid copyright issues and stop users from generating fake "unreleased songs" from popular artists and selling them for thousands of dollars.

You can now sign up to try MusicLM through AI Test Kitchen on the web, Android and iOS. Google suggests trying prompts based on mood, genre and instruments, such as “soulful jazz for a dinner party” or "two nylon string guitars playing in flamenco style." The experimental AI will generate two tracks, and you can identify your favorite by selecting a trophy icon. Google says doing so will help it improve the model.

This article originally appeared on Engadget at https://www.engadget.com/google-opens-up-access-to-its-text-to-music-ai-202251175.html?src=rss
Google is bringing Zoom, Teams and Webex meetings to Android Auto
At I/O 2023 today, Google shared a few updates for both Android Auto and Android Automotive OS. Perhaps the biggest news is that Google is working with Zoom, Microsoft Teams and Cisco Webex to bring those virtual meeting apps to Android-equipped vehicles. If the thought of joining a video call in your car sounds like a driving hazard, don't worry: the meetings will be audio only, with simplified controls on the infotainment display.

Google is also rolling out Waze in the Google Play Store for all vehicles with Google built-in. This means the popular navigation app will be available outside of just Android Auto and beyond Volvo and Polestar models. What's more, the company is allowing developers to integrate the instrument cluster with their navigation apps. As you might expect, this will put turn-by-turn directions in the driver's line of sight. Plus, developers can access vehicle data like range, fuel level and speed to give drivers even more insight into their trips.

Waze in the Chevrolet Blazer EV (Image: Google)

Google has added new app categories to the Android for Cars App Library. That repository now allows developers to add IoT and weather apps for use in vehicles. For example, The Weather Channel app will be available alongside existing software like Weather and Radar later this year. The company is also making it easier for media apps (music, audiobooks, podcasts, etc.) to port their software to Android Auto and Android Automotive OS.

Additionally, the company has new categories for video and gaming apps in its library, with the goal of expanding to browsing apps soon. These are specifically designed for use when the car is parked or by passengers. YouTube is now available for all automakers to add to cars with Google built-in. Google says Polestar, Volvo and other "select partners" have committed to adding the video-streaming app via over-the-air updates. In terms of games, the initial slate includes Beach Buggy Racing 2, Solitaire FRVR and My Talking Tom Friends. What's more, Google plans to add multi-screen support to Android Automotive OS 14, which will allow "shared entertainment experiences" between drivers and passengers.

YouTube inside a Polestar vehicle (Image: Google)

Google says Android Auto will be available in almost 200 million cars by the end of 2023. The company also says that the number of cars with infotainment systems powered entirely by Android Automotive OS with Google built-in should nearly double by the end of the year. That latter figure is spurred by adoption from automakers like Chevrolet, Volvo, Polestar, Honda, Renault and more. In March, GM announced it would phase out Android Auto and CarPlay in its EVs in favor of Android Automotive.

This article originally appeared on Engadget at https://www.engadget.com/google-is-bringing-zoom-teams-and-webex-meetings-to-android-auto-200029169.html?src=rss
Google I/O 2023: Everything announced at the event
To say the Google I/O 2023 keynote was packed would be an understatement. Google unveiled a flurry of new Pixel devices as well as the latest versions of Android and other platforms. It also won't surprise you to hear that AI was everywhere — this was Google's big chance to compete with OpenAI's ChatGPT. Don't worry if you missed something during the event, though, as we've rounded up all the biggest announcements below.

Pixel Fold (Photo by Sam Rutherford/Engadget)

There's no doubt that the (previously confirmed) Pixel Fold was the star of the show. Google's first foldable phone features the same Tensor G2 chip as the Pixel 7 Pro, but opens like a book to reveal a 7.6-inch display. There's a 5.8-inch external display, and the cameras are only a slight step back, with a 48-megapixel main camera, 10.8MP telephoto and ultra-wide lenses and an 8MP internal shooter. This is also one of the thinner foldables at 0.48in when closed. It's available for pre-order today and will sell for $1,799 when it arrives in June.

Pixel 7a (Photo by Sam Rutherford/Engadget)

Google's budget (really, mid-range) phone just got a significant upgrade. The Pixel 7a sports the same Tensor G2 as its pricier Pixel 7 counterparts while adding features that were sorely missed on earlier A-series models, such as a smooth 90Hz display, a 64MP main camera and wireless charging. You can order it today for $499, or $50 more than its predecessor.

Pixel Tablet (Photo by Sam Rutherford/Engadget)

Google first teased the Pixel Tablet a year ago, and its return to Android-powered slates is finally ready to ship. As mentioned last fall, this is really a hybrid 11-inch smart display. It can sit in a speaker dock to serve as a Google Assistant hub and Chromecast device, but detaches when you're ready to watch videos or check your social feeds. It's powered by the same Tensor G2 as the Pixel 7 and offers a healthy 12 hours of battery life. You can expect pen support, too. It's available to pre-order today for $499.

Android 14 (Image: Google)

Google fully unveiled Android 14 at I/O. The major revision includes upgrades you saw in the previews, such as custom sharing features as well as stricter security, but also adds iOS 16-style lock screen customization complete with "cinematic" wallpaper that makes subjects stand out. You'll likewise see AI-generated wallpapers that are cued to images and art styles. Google will release the new OS late this summer, and is expected to deliver the upgrade to Pixel users first.

Wear OS (Image: Google)

Google is still committed to improving its Wear OS smartwatch platform following last year's overhaul. At I/O, the company introduced long-sought native watch apps for Gmail and Calendar that let you manage your messages and schedules from your wrist. WhatsApp is coming to Wear OS, too. Google also released its first developer preview for Wear OS 4, a major update that promises improved battery life and performance, simple watch backups and more accessibility. All the new software arrives later this year.

PaLM 2 AI model (Photo: REUTERS/Dado Ruvic)

In some ways, the most important announcement at I/O is for tech that sits behind the scenes. Google has unveiled a PaLM 2 AI language model that will underpin more than two dozen of the company's products, including Bard. It's faster and more efficient, and can run locally on mobile devices. It's more adept at handling multiple languages and can generate JavaScript and Python code.

Search Labs (Image: Google)

Among Google's many, many AI-related introductions are three test features available through Search Labs. A Search Generative Experience provides automatically generated overviews, exploration pointers and follow-ups. Code Tips will even offer programming snippets and advice. Add to Sheets, meanwhile, lets you plug search results into spreadsheets.

Bard (Photo: Mojahid Mottakin on Unsplash)

Google is rapidly expanding Bard's capabilities. On top of using PaLM 2, the generative AI chatbot will soon let you include images in your queries and bring pictures into its responses. It will also integrate Google apps (such as exporting to Docs and Gmail) as well as partner products like Adobe Firefly (for turning ideas into images). More importantly, it'll be much easier to use Bard in the first place — Google is dropping the waiting list and making Bard available in English in more than 180 countries, along with support for Japanese and Korean.

AI in Photos and Workspace (Image: Google)

Like it or not, Google is putting generative AI in many of the apps you use. To start, Photos is getting a Magic Editor that can move subjects, add content and even replace the sky. The experimental feature will be available on some Pixel phones this fall.

Generative AI will also be available across core Workspace apps through Duet AI. Gmail on mobile will help you write messages. Slides will help you create background images using text descriptions. AI in Sheets will analyze your data, while Docs will offer "assisted writing." Even Meet will use the technology to create unique video call backgrounds.

Everything else

There were numerous important updates across Google's other products. Project Tailwind is an AI-driven personal notebook. The redesigned Home app is now available to everyone, with Matter support coming to iOS users in the weeks ahead. Find My Device will soon support a wider range of hardware, and detect unknown trackers to help catch stalkers. Google Maps, meanwhile, is bringing Immersive View to routes.

This article originally appeared on Engadget at https://www.engadget.com/google-io-2023-everything-announced-at-the-event-193758196.html?src=rss
Google Play developers can now use generative AI to create store listings
Generative AI really is everywhere. It's used to make social media avatars. It can help debug code. It can even ask nosy neighbors to be a little more polite to each other. Now, Google is hoping to use it to encourage app developers to expand their use of custom store listings on the Google Play Store. New features announced at Google I/O will give developers access to AI-powered tools that help them create new listings and convert their existing app listings into multiple languages.

App developers could already create up to 50 custom store listings, but Google hopes these new tools will make managing them easier. To start, it's introducing a store listing groups feature that allows developers to craft a base listing for their app, and then modify specific elements to tailor it to a specific audience demographic or event. Potential users visiting your app's store listing from YouTube might see one set of screenshots, while visitors from another country might see a different series of images, as well as an app description in their native language.

The new AI-powered features seem designed to make that easier. The AI helper, for example, can take developer prompts highlighting a key feature or marketing theme, and spit out ready-made text to help craft a targeted Google Play Store listing. There's also a new machine translation tool that can help developers quickly list their app in 10 different languages.

Although most of these new features were built to help developers find and expand their audience, there's at least one new tool being rolled out to average users: AI-powered review summaries. Google says the feature should "help users learn from each other" about what makes an app "special at a glance." Even this is designed to help apps gain more reach, however: at launch it will only summarize positive reviews in English.

This article originally appeared on Engadget at https://www.engadget.com/google-play-developers-can-now-use-generative-ai-to-create-store-listings-193011363.html?src=rss
Google Pixel Fold vs. Samsung Galaxy Z Fold 4: Battle of the foldables
After confirming its existence last week, Google has formally introduced the Pixel Fold, its first stab at a foldable phone. Like past foldables, the new Pixel has a vertical hinge that lets it unfurl like a book. When it's folded, you get a more traditional form factor with a 5.8-inch display. Open it up, and you get a wider 7.6-inch screen for multitasking or watching videos. Both OLED panels have 120Hz refresh rates, and the device runs on the same Tensor G2 chip found in last year's Pixel 7 line. Google is pushing the phone's thinness (12.1mm folded, 5.8mm when not), battery ("over 24 hours") and weight (10 ounces) in particular as selling points. It also claims that the near-gapless hinge is built to last over time.

We'll have to review the Pixel Fold before we can speak to that. For now, though, we've laid out how the Fold compares on paper to the most prominent book-style foldable on the market today: Samsung's Galaxy Z Fold 4. No, specs can't tell the whole story with a form factor like this, and both Samsung and OnePlus are expected to launch new foldables in the coming months. But if you want a sense of what the Pixel Fold's $1,799 starting price will get you, here's a quick rundown. The phone is available to pre-order now and will ship in June. For more impressions, check out our initial hands-on.

Google Pixel Fold vs. Samsung Galaxy Z Fold 4

Pricing (MSRP)
- Pixel Fold: $1,799 (256GB), $1,919 (512GB)
- Galaxy Z Fold 4: $1,800 (256GB), $1,920 (512GB), $2,160 (1TB)

Dimensions
- Pixel Fold: folded, 139.7 x 79.5 x 12.1mm (5.5 x 3.1 x 0.5 inches); unfolded, 139.7 x 158.7 x 5.8mm (5.5 x 6.2 x 0.2 inches)
- Galaxy Z Fold 4: folded, 155.1 x 67.1 x 14.2-15.8mm (6.11 x 2.64 x 0.56-0.62 inches); unfolded, 155.1 x 130.1 x 6.3mm (6.11 x 5.12 x 0.25 inches)

Weight
- Pixel Fold: 283g (10 oz)
- Galaxy Z Fold 4: 263g (9.28 oz)

Screen size
- Pixel Fold: 5.8-inch (146.7mm) external cover; 7.6-inch (192.3mm) unfolded
- Galaxy Z Fold 4: 6.2-inch (157mm) external cover; 7.6-inch (195mm) unfolded

Screen resolution
- Pixel Fold: 2,092 x 1,080 (408 ppi) external cover; 2,208 x 1,840 (380 ppi) unfolded
- Galaxy Z Fold 4: 2,316 x 904 (402 ppi) external cover; 2,176 x 1,812 (374 ppi) unfolded

Screen type
- Pixel Fold: OLED (up to 120Hz); external cover 17.4:9 aspect ratio, up to 1,550 nits peak brightness; unfolded 6:5 aspect ratio, up to 1,450 nits peak brightness
- Galaxy Z Fold 4: AMOLED (up to 120Hz); external cover 23.1:9 aspect ratio; unfolded 21.6:18 aspect ratio, up to 1,200 nits peak brightness

Battery
- Pixel Fold: 4,821 mAh
- Galaxy Z Fold 4: 4,400 mAh

Internal storage
- Pixel Fold: 256GB / 512GB
- Galaxy Z Fold 4: 256GB / 512GB / 1TB

External storage
- Pixel Fold: None
- Galaxy Z Fold 4: None

Rear cameras
- Pixel Fold: 48MP main (f/1.7); 10.8MP ultrawide (f/2.2); 10.8MP telephoto (f/3.05, 5x optical zoom, 20x Super Res Zoom)
- Galaxy Z Fold 4: 50MP main (f/1.8); 12MP ultrawide (f/2.2); 10MP telephoto (f/2.4, 3x optical zoom, 30x digital zoom)

Front camera
- Pixel Fold: 9.5MP (f/2.2)
- Galaxy Z Fold 4: 10MP (f/2.2)

Inner camera
- Pixel Fold: 8MP (f/2.0)
- Galaxy Z Fold 4: 4MP (f/1.8)

Video capture
- Pixel Fold: rear 4K at 30/60 fps; front 4K at 30/60 fps; inner 1080p at 30 fps
- Galaxy Z Fold 4: rear 8K at 24 fps, 4K at 60 fps; front 4K at 30/60 fps

SoC
- Pixel Fold: Google Tensor G2
- Galaxy Z Fold 4: Qualcomm Snapdragon 8+ Gen 1

CPU
- Pixel Fold: octa-core (2x 2.85GHz Cortex-X1, 2x 2.35GHz Cortex-A78, 4x 1.80GHz Cortex-A55)
- Galaxy Z Fold 4: octa-core (1x 3.19GHz Cortex-X2, 3x 2.75GHz Cortex-A710, 4x 1.80GHz Cortex-A510)

GPU
- Pixel Fold: ARM Mali-G710 MP7
- Galaxy Z Fold 4: Adreno 730

RAM
- Pixel Fold: 12GB LPDDR5
- Galaxy Z Fold 4: 12GB LPDDR5

WiFi
- Pixel Fold: WiFi 6E
- Galaxy Z Fold 4: WiFi 6E

Bluetooth
- Pixel Fold: v5.2
- Galaxy Z Fold 4: v5.2

NFC
- Pixel Fold: Yes
- Galaxy Z Fold 4: Yes

OS
- Pixel Fold: Android 13; 5 years of security updates
- Galaxy Z Fold 4: Android 12L, upgradeable to Android 13 (One UI 5.1); 4 years of OS updates, 5 years of security updates

Colors
- Pixel Fold: Obsidian, Porcelain
- Galaxy Z Fold 4: Graygreen, Phantom Black, Beige, Burgundy

Other features
- Pixel Fold: USB-C 3.2 Gen 2, Qi wireless charging, 30W charging, Titan M2 security chip, IPX8 water resistance, 1-year warranty
- Galaxy Z Fold 4: S Pen support, USB-C 3.2 Gen 1, Qi wireless charging, reverse wireless charging, 25W charging, IPX8 water resistance, Samsung DeX, 1-year warranty

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-fold-vs-samsung-galaxy-z-fold-4-battle-of-the-foldables-191551908.html?src=rss
Google I/O 2023 live updates: Pixel Fold, Bard AI, Android 14 and more
Google is hosting its first full-on in-person I/O developer conference since the pandemic, and we expect the company to announce a biblical amount of news at breakneck pace. Engadget is here at the show and will bring you a liveblog of the keynote as it happens. The show kicks off at 1pm ET today, and we'll be starting our commentary as early as noon. Keep your browser open here for our coverage of everything from Mountain View, CA today!
Pixel Tablet vs. the competition: Google's latest stab at making Android tablets a thing
Google is ready to give Android tablets another go. Nearly five years after launching the ill-fated Pixel Slate, the company has fully taken the wraps off its latest large-screen device, the Pixel Tablet. Google had teased the device a couple of times over the past year, but now it's official: this is a 10.95-inch tablet that doubles as a Nest Hub-style smart display with an included speaker dock. That dock also charges the tablet, and the slate itself runs on the same Tensor G2 SoC you'd find in a Pixel 7 phone.

The Pixel Tablet starts at $499 and is available to pre-order starting today, with shipping starting in June. We'll have a full review in the future, but for now, we've laid out how the device compares on the spec sheet to a couple of popular alternatives: Apple's 10th-gen iPad and Samsung's Galaxy Tab A8. The $599 iPad Air and $630 Galaxy Tab S8 are notable options here, too, but since the Pixel Tablet is really two devices in one, we've stuck to sub-$500 options below. You can read our initial hands-on for more impressions.

Google Pixel Tablet vs. Apple iPad (10th gen) vs. Samsung Galaxy Tab A8

Pricing (MSRP)
- Pixel Tablet: $499 (128GB), $599 (256GB)
- iPad: $449 (64GB), $599 (256GB)
- Galaxy Tab A8: $230 (32GB), $280 (64GB), $330 (128GB)

Dimensions
- Pixel Tablet: 258 x 169 x 8.1mm (10.2 x 6.7 x 0.3 inches)
- iPad: 248.6 x 179.5 x 7mm (9.79 x 7.07 x 0.28 inches)
- Galaxy Tab A8: 246.8 x 161.9 x 6.9mm (9.72 x 6.37 x 0.27 inches)

Weight
- Pixel Tablet: 493g (17.4 oz)
- iPad: 477g (16.8 oz)
- Galaxy Tab A8: 508g (17.9 oz)

Screen size
- Pixel Tablet: 10.95 inches (278mm)
- iPad: 10.9 inches (277mm)
- Galaxy Tab A8: 10.5 inches (267mm)

Screen resolution
- Pixel Tablet: 2,560 x 1,600 (276 ppi)
- iPad: 2,360 x 1,640 (264 ppi)
- Galaxy Tab A8: 1,920 x 1,200 (216 ppi)

Screen type
- Pixel Tablet: LCD, 16:10 aspect ratio, 500 nits brightness (typical)
- iPad: IPS LCD, 23:16 aspect ratio, 500 nits brightness (typical)
- Galaxy Tab A8: TFT LCD, 16:10 aspect ratio

SoC
- Pixel Tablet: Google Tensor G2
- iPad: Apple A14 Bionic
- Galaxy Tab A8: Unisoc Tiger T618

RAM
- Pixel Tablet: 8GB LPDDR5
- iPad: 4GB LPDDR4X
- Galaxy Tab A8: 3GB / 4GB

Battery
- Pixel Tablet: 27 Wh
- iPad: 28.6 Wh (7,606 mAh)
- Galaxy Tab A8: 7,040 mAh

Internal storage
- Pixel Tablet: 128GB / 256GB
- iPad: 64GB / 256GB
- Galaxy Tab A8: 32GB / 64GB / 128GB

External storage
- Pixel Tablet: None
- iPad: None
- Galaxy Tab A8: microSDXC up to 1TB

Rear camera
- Pixel Tablet: 8MP (f/2.0)
- iPad: 12MP (f/1.8, 5x digital zoom)
- Galaxy Tab A8: 8MP

Front camera
- Pixel Tablet: 8MP (f/2.0)
- iPad: 12MP (f/2.4)
- Galaxy Tab A8: 5MP

Video capture
- Pixel Tablet: front 1080p at 30 fps; rear 1080p at 30 fps
- iPad: front 1080p at 25/30/60 fps; rear 4K at 24/25/30/60 fps, 1080p at 25/30/60/120/240 fps
- Galaxy Tab A8: front 1080p at 30 fps; rear 1080p at 30 fps

WiFi
- Pixel Tablet: WiFi 6
- iPad: WiFi 6
- Galaxy Tab A8: 802.11ac

Bluetooth
- Pixel Tablet: v5.2
- iPad: v5.2
- Galaxy Tab A8: v5.0

OS
- Pixel Tablet: Android 13; 5 years of security updates
- iPad: iPadOS 16.1, upgradeable to iPadOS 16.4.1
- Galaxy Tab A8: Android 11, upgradeable to Android 13 (One UI 5.1)

Colors
- Pixel Tablet: Porcelain, Hazel, Rose
- iPad: Silver, Blue, Pink, Yellow
- Galaxy Tab A8: Gray, Silver, Pink Gold

Other features
- Pixel Tablet: comes with Charging Speaker Dock for 15W wireless charging, external speakers and smart home control; Google Cast support (in Hub Mode), stylus support, USB-C 3.2 Gen 1, Titan M2 security chip, 1-year warranty
- iPad: Apple Pencil (1st gen) support, cellular models available, FaceTime, Center Stage, iMessage, landscape-oriented front camera, USB-C 2.0, 1-year warranty
- Galaxy Tab A8: 3.5mm headphone jack, Dolby Atmos tuning, 15W charging, USB-C 2.0

This article originally appeared on Engadget at https://www.engadget.com/pixel-tablet-vs-the-competition-googles-latest-stab-at-making-android-tablets-a-thing-191008603.html?src=rss
How to pre-order the Google Pixel Fold
Prior to today's I/O keynote, Google confirmed the leaks and rumors about the existence of its first foldable smartphone with a teaser video on YouTube. Now we know the full specs and pre-order details for the $1,799 handset. Starting today, you can pre-order the Google Pixel Fold through Google's storefront, and units should begin shipping sometime in June. When you pre-order, Google will throw in a free Pixel Watch, too.

Like the Pixel 7 series, the Pixel Fold features Google's Tensor G2 SoC and comes with 12GB of RAM and either 256GB or 512GB of storage. The claimed battery life extends beyond 24 hours, and the phone supports both wireless charging and 30W fast charging. Google says it's the thinnest foldable phone on the market, measuring a half-inch thick when folded.

The exterior features an always-on, 5.8-inch OLED display with up to 1,550 nits of brightness and a 120Hz refresh rate. It's covered in the same Gorilla Glass Victus as the Pixel 7 and 7 Pro — but it's the interior screen that's getting most of the attention. The 7.6-inch, 120Hz folding display is facilitated by a custom, dual-axis steel hinge and foldable Ultra Thin Glass with a layer of protective plastic. There's just enough friction within the hinges to enable different views when propped up in tabletop mode.

The Pixel Fold has a total of five cameras: an 8-megapixel inner camera, a 9.5MP selfie cam on the front screen, and three cameras across the rear bar, including a telephoto lens, an ultrawide lens and a 48MP camera with a half-inch sensor. The multiple screens and cameras will enable features like split-screen productivity, tripod-free astrophotography and real-time translation during face-to-face conversations.

We'll have a full review of the foldable soon. In the meantime, our senior reviewer, Sam Rutherford, was able to do a quick hands-on with the Pixel Fold and thinks it's a fitting rival for Samsung's foldables. Our comparison post stacks the Pixel Fold's specs up against the competition. You can get it in either black or white, and pre-orders placed now should ship in June.

This article originally appeared on Engadget at https://www.engadget.com/how-to-pre-order-the-google-pixel-fold-190517124.html?src=rss
Google Pixel 7a vs the competition: Pushing the boundaries of a budget phone
Google announced the Pixel 7a — and made it available for immediate purchase — during its annual I/O conference. Like other A-series Pixel phones, this is a budget version of what came before, namely the Pixel 7 and 7 Pro. At $499, it's $100 cheaper than either of its siblings but manages to meet or exceed many of their specs. It has a similar design, uses the same Tensor G2 processing chip, and offers 8GB of RAM and 128GB of storage like the base-model Pixel 7. The 7a also matches the water resistance and display refresh rate of that phone, but has a larger battery and higher-res cameras. One key difference is the smaller screen, measuring 6.1 inches versus the Pixel 7's 6.3-inch display.

We know it stacks up nicely against other current-model Pixels, but how does it compare to other budget-model phones? It's a little pricier than either the iPhone SE or the Galaxy A54 and falls between the two on battery capacity, screen size and number of cameras. The Pixel 7a beats both of its competitors on base-level memory and is also the only budget model to use the same processor as its top-end, flagship counterpart. Here are the specs for each phone side by side so you can see which one makes the most sense for you.

Google Pixel 7a vs. Apple iPhone SE (3rd gen) vs. Samsung Galaxy A54

Pricing
- Pixel 7a: starts at $499
- iPhone SE: starts at $429
- Galaxy A54: starts at $450

Release date
- Pixel 7a: May 10, 2023
- iPhone SE: March 18, 2022
- Galaxy A54: March 24, 2023

Dimensions
- Pixel 7a: 6.0 x 2.87 x 0.35 inches (152.4 x 72.9 x 9.0mm)
- iPhone SE: 5.45 x 2.65 x 0.29 inches (138.4 x 67.3 x 7.3mm)
- Galaxy A54: 6.23 x 3.02 x 0.32 inches (158.2 x 76.7 x 8.2mm)

Weight
- Pixel 7a: 6.81 oz (193g)
- iPhone SE: 5.09 oz (144g)
- Galaxy A54: 7.13 oz (202g)

Operating system
- Pixel 7a: Android
- iPhone SE: iOS
- Galaxy A54: Android

Screen size
- Pixel 7a: 6.1 inches
- iPhone SE: 4.7 inches
- Galaxy A54: 6.4 inches

Screen resolution
- Pixel 7a: 1080 x 2400 at 429 ppi
- iPhone SE: 1334 x 750 at 326 ppi
- Galaxy A54: 2340 x 1080 at 403 ppi

Screen type
- Pixel 7a: OLED (90Hz)
- iPhone SE: Retina HD LCD (60Hz)
- Galaxy A54: Super AMOLED (120Hz)

Processor
- Pixel 7a: Tensor G2
- iPhone SE: A15 Bionic
- Galaxy A54: Exynos 1380

Water and dust resistance
- Pixel 7a: IP67
- iPhone SE: IP67
- Galaxy A54: IP67

Battery
- Pixel 7a: 4,385 mAh
- iPhone SE: 2,018 mAh
- Galaxy A54: 5,000 mAh

RAM
- Pixel 7a: 8GB
- iPhone SE: 4GB
- Galaxy A54: 6GB / 8GB

Internal storage
- Pixel 7a: 128GB
- iPhone SE: 64GB / 128GB / 256GB
- Galaxy A54: 128GB / 256GB

Rear cameras
- Pixel 7a: 64MP main (f/1.89), 13MP wide (f/2.2)
- iPhone SE: 12MP main (f/1.8)
- Galaxy A54: 50MP main (f/1.8), 12MP wide (f/2.2), 5MP macro (f/2.4)

Video capture
- Pixel 7a: 4K at 60 fps
- iPhone SE: 4K at 60 fps
- Galaxy A54: 4K at 30 fps

Front camera
- Pixel 7a: 13MP (f/2.2)
- iPhone SE: 7MP (f/2.2)
- Galaxy A54: 32MP (f/2.2)

WiFi
- Pixel 7a: WiFi 6E
- iPhone SE: WiFi 6
- Galaxy A54: WiFi 6

Charging
- Pixel 7a: 18W fast charging, 7.5W wireless
- iPhone SE: 20W fast charging, 7.5W wireless
- Galaxy A54: 25W fast charging

Connector
- Pixel 7a: USB-C
- iPhone SE: Lightning
- Galaxy A54: USB-C

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-7a-vs-the-competition-pushing-the-boundaries-of-a-budget-phone-190029045.html?src=rss
Google’s Find My Device will soon detect unknown Bluetooth trackers
Updates to Android's "Find My Device" network will alert users to unknown trackers, even ones that aren't Google-branded, the company announced during its I/O 2023 keynote on Wednesday. The updates will arrive in summer 2023, though the company did not give a specific date.

Unknown tracker alerts happen when the network detects a Bluetooth tracker, such as an Apple AirTag or Tile device, registered to another user following you around. With this Android update, any tracker compatible with the Find My Device network will show up in your app.

Other updates to the Find My Device app include a feature that pings compatible devices if you can't find them, ways to view the location of those devices even if they're offline, and new support for Tile, Chipolo, Pebblebee, Sony and JBL devices.

The Android announcement comes after Google and Apple partnered up earlier this month to address unwanted tracking across devices. The companies submitted best practices and instructions to enable unauthorized-tracking notifications across iOS and Android devices. Other brands like Tile and Samsung have shown support for the effort.

This article originally appeared on Engadget at https://www.engadget.com/android-findmy-bluetooth-tracker-google-airtag-tile-182832477.html?src=rss
Google's Project Tailwind is an AI-infused personal notebook
Project Tailwind is Google's latest foray into AI, and it's aimed at helping students organize their notes. Google describes it as "your AI-first notebook," and the toolset is able to distill information from a personal set of notes, making it all searchable, suggesting questions and main themes, and otherwise organizing the subject matter in an interactive way. Project Tailwind is an experiment at the moment and it's available only in the US. The waitlist to try it out is accessible via Google Labs.

Google revealed Project Tailwind during today's I/O developer conference, showing off a few minutes of the program in action. After selecting a subject — computer science history — and pulling up a few paragraphs of notes from Google Drive, the developer had Project Tailwind summarize the content, generate a specific glossary, and offer a quiz or study guide on the information, among other actions. Tailwind shows its work as it goes, with footnotes and citations pulled directly from the document.

Project Tailwind has a wider potential audience than students alone, but Google is focusing on classroom-level note enhancement with its initial pitch.

This article originally appeared on Engadget at https://www.engadget.com/googles-project-tailwind-is-an-ai-infused-personal-notebook-182728079.html?src=rss
WhatsApp arrives on Wear OS this summer
For the first time, WhatsApp is coming to smartwatches. At its I/O 2023 keynote on Wednesday, Google announced that the chat app will be available this summer on Wear OS 3 devices, including Samsung's Galaxy Watch 5 and the Pixel Watch. Among other features, the smartwatch version of WhatsApp allows you to record and send voice messages. You can also use the app to send text messages and see a list of your favorite contacts.

A beta version of the software was spotted earlier in the week by 9to5Google. From that preview, we know adding a Wear OS device to your account will involve typing an eight-digit alphanumeric code provided to you through your phone. Additionally, the beta release features a circular complication that shows unread messages on your watch’s home screen. The complication also has two tiles for contacts and voice messages, allowing you to quickly send messages to your friends and family.

The news that WhatsApp is heading to Wear OS devices comes after Meta announced at the end of last month that it had redesigned WhatsApp’s multi-device functionality to make it possible to use one account on more than one phone.

This article originally appeared on Engadget at https://www.engadget.com/whatsapp-arrives-on-wear-os-this-summer-182644527.html?src=rss
Watch Google’s I/O keynote here at 1PM ET
It’s Google I/O time, which means the company is about to host a keynote that will likely be packed with announcements and updates. We’ll be covering all the news as it happens on our liveblog and Google I/O 2023 hub, but you can watch the event in full below. The livestream starts at 1PM ET.

As for what to expect from the keynote, one thing that's certain is we'll get more official details on Google's first foldable phone: the company finally announced the Pixel Fold last week after months of leaks and rumors. More information on Android 14 is also a dead cert. Get ready to hear the term "AI" a lot, too, as Google is widely expected to make a ton of announcements on that front, perhaps including updates on its Bard AI chatbot.

This article originally appeared on Engadget at https://www.engadget.com/watch-googles-io-keynote-here-at-1pm-et-160030971.html?src=rss
Google unveils its multilingual, code-generating PaLM 2 language model
Google has stood at the forefront of many of the tech industry's AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company's work in foundation models is "the bedrock for the industry and the AI-powered products that billions of people use daily." On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off the company's latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms.

PaLM 2, obviously, is the successor to Google's existing PaLM model that, until recently, powered its experimental Bard AI. "Think of PaLM as a general model that then can be fine-tuned to achieve particular tasks," Ghahramani explained during a call with reporters earlier in the week. "For example: health research teams have fine-tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts." Ghahramani also noted that PaLM was "the first large language model to perform at an expert level on the US medical licensing exam."

Bard now runs on PaLM 2, which offers improved multilingual, reasoning and coding capabilities, according to the company. The language model has been trained far more heavily on multilingual texts than its predecessor, covering more than 100 languages with improved understanding of cultural idioms and turns of phrase. It is equally adept at generating programming code in Python and JavaScript.
The model has also reportedly demonstrated "improved capabilities in logic, common sense reasoning, and mathematics," thanks to extensive training data from "scientific papers and web pages that contain mathematical expressions." Even more impressive is that Google was able to spin off application-specific versions of the base PaLM system, dubbed Gecko, Otter, Bison and Unicorn.

"We built PaLM to be smaller, faster and more efficient, while increasing its capability," Ghahramani said. "We then distilled this into a family of models in a wide range of sizes so the lightest model can run as an interactive application on mobile devices on the latest Samsung Galaxy." In all, Google is announcing more than two dozen products that will feature PaLM capabilities at Wednesday's I/O event.

This is a developing story. Please check back for updates.

This article originally appeared on Engadget at https://www.engadget.com/google-unveils-its-multilingual-code-generating-palm-2-language-model-180805304.html?src=rss
Google adds more context and AI-generated photos to image search
Google is adding some new features to its image search function to make it easier to spot altered content, the company announced at its I/O 2023 keynote Wednesday. Photos shown in search results will soon include an "about this image" option that tells users when the image and ones like it were first indexed by Google. You can also learn where it may have appeared first and see other places where the image has been posted online. That information could help users figure out whether something they're seeing was generated by AI, according to Google.

For example, you'll be able to see if the image has been on fact-checking websites that point out whether an image is real or altered. Vice President of Search Cathy Edwards told Engadget that the tool doesn't currently tell you directly if an image has been edited or manipulated, though the company is researching effective ways of detecting such tweaks.

The new feature will show up when you click the three dots on an image in Google Images results. Google did not say exactly when it will be available, other than that it will launch first in the United States sometime in the "coming months." Those images will include a markup in the original file to add context about their creation wherever it's used. Image publishers like Midjourney and Shutterstock will also include the markup.

Google Search seems to be going all-in on AI. The company announced its Search Generative Experience, which gives an AI summary of results and other information on search; upcoming products like the "Perspectives" tab in search that highlights forum and social media posts; and other developments that you can test out in Google's Search Labs.
Google's efforts to clarify to users where its search results come from started earlier this year with features like "about this result."

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/generative-ai-google-image-search-context-175311217.html?src=rss
Google's Search Labs lets you test its AI-powered 'products and ideas'
It's fair to say that Google was caught flat-footed by Microsoft's launch of Bing search powered by ChatGPT, as it didn't have anything similar when it unveiled its own conversational AI, Bard. Now, Google has announced Search Labs, a new way for consumers to test "bold new products and ideas we're exploring" in search, the company said at its I/O 2023 keynote.

There are three key features available for a limited time. The first is called Search Generative Experience (SGE), bringing generative AI directly into Google Search. "The new Search experience helps you quickly find and make sense of information," Google's Director of Search wrote. "As you search, you can get the gist of a topic with AI-powered overviews, pointers to explore more, and ways to naturally follow up."

Also available from the Search prompt are Code Tips, which use large language models to provide snippets and "pointers for writing code faster and smarter," according to Google. You can get responses about languages including Java, Go, Python, JavaScript, C++, Kotlin, shell, Docker and Git.

Finally, "Add to Sheets" lets you insert search results directly into a spreadsheet. For example, if you're planning a vacation in a Sheets document, you can easily add a link straight from Google Search.

Google's Bard could potentially improve all of Google's products, ranging from Maps to Drive. Search, however, is the company's core function and principal moneymaker, and was one of the first things it mentioned when announcing Bard. To that end, it will be interesting to see how it compares with what Microsoft's ChatGPT-powered Bing can do.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/googles-search-labs-lets-you-test-its-ai-powered-products-and-ideas-175254478.html?src=rss
Google’s Duet AI brings more generative features to Workspace apps
After OpenAI’s ChatGPT caught the tech world off guard late last year, Google reportedly declared a “code red,” scrambling to plan a response to the new threat. The first fruit of that reorientation trickled out earlier this year with its Bard chatbot and some generative AI features baked into Google Workspace apps. Today at Google I/O 2023, we finally see a more fleshed-out picture of how the company views AI’s role in its cloud-based productivity suite. Google Duet AI is the company’s branding for its collection of AI tools across Workspace apps.

Like Microsoft Copilot for Office apps, Duet AI is an umbrella term for a growing list of generative AI features across Google Workspace apps. (The industry seems to have settled on marketing language depicting generative AI as your workplace ally.) First, the Gmail mobile app will now draft full replies to your emails based on a prompt. In addition, the mobile Gmail app will soon add contextual assistance, “allowing you to create professional replies that automatically fill in names and other relevant information.”

Duet AI also makes an appearance in Google Slides. Here, it takes the form of image generation for your presentations. Like Midjourney or DALL-E 2, Duet AI can now turn simple text prompts into AI-generated images to enhance Slides presentations. It could save you the trouble of scouring the internet for the right slide image while spicing up your presentations with something original.

In Google Sheets, Duet AI can understand the context of a cell’s data and label it accordingly. The spreadsheet app also adds a new “help me organize” feature to create custom plans: describe what you want to do in plain language, and Duet AI will outline strategies and steps to accomplish it.
“Whether you’re an event team planning an annual sales conference or a manager coordinating a team offsite, Duet AI helps you create organized plans with tools that give you a running start,” the company said.

Meanwhile, Duet AI in Google Meet can generate custom background images for video calls with a text prompt. Google says the feature can help users “express themselves and deepen connections during video calls while protecting the privacy of their surroundings.” Like the Slides image generation, Duet’s Google Meet integration could be a shortcut to save you from searching for an image that conveys the right ambiance for your meeting (while hiding any unwanted objects or bystanders behind you).

Duet also adds an “assisted writing experience” in Google Docs’ smart canvas. Entering a prompt describing what you want to write about will generate a Docs draft. The feature also works in Docs’ smart chips (automatic suggestions and info about things like documents and people mentioned in a project). Additionally, Google is upgrading Docs’ built-in Grammarly-style tools. A new proofread suggestion pane will offer tips about concise writing, avoiding repetition and using a more formal or active voice. The company adds that you can easily toggle the feature off when you don’t want it to nag you about grammar.

Initially, you’ll have to sign up for a waitlist to try the new Duet AI Workspace features. Google says you can enter your info here to be notified as it opens the generative AI features to more users and regions “in the weeks ahead.”

This is a developing story. Please check back for updates.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/googles-duet-ai-brings-more-generative-features-to-workspace-apps-173944737.html?src=rss
Google Bard transitions to PaLM 2 and expands to 180 countries
For the past two months, anybody wanting to try out Google's new chatbot AI, Bard, had to first register their interest and join a waitlist before being granted access. On Wednesday, the company announced that those days are over. Bard is immediately dropping the waitlist requirement as it expands to 180 additional countries and territories. What's more, this expanded Bard will be built atop Google's newest large language model, PaLM 2, making it more capable than ever before.

Google hurriedly released the first-generation Bard back in February after OpenAI's ChatGPT came out of nowhere and began eating the industry's collective lunch like Gulliver in a Lilliputian cafeteria. Matters were made worse when Bard's initial performances proved less than impressive — especially given Google's generally accepted status at the forefront of AI development — which hurt both Google's public image and its bottom line. In the intervening months, the company has worked to further develop PaLM, the language model that essentially powers Bard, allowing it to produce better-quality and higher-fidelity responses, as well as perform new tasks like generating programming code.

As Google executives announced at the company's I/O 2023 keynote on Wednesday, Bard has been switched over to the new PaLM 2 platform. As such, users can expect a bevy of new features and functions to roll out in the coming days and weeks. These include more visual responses to your queries, so when you ask for "must see sights" in New Orleans, you'll be presented with images of the sights themselves, not just a bulleted list or text-based description.
Conversely, users will be able to more easily input images to Bard alongside their written queries, bringing Google Lens capabilities to Bard.

Even as Google mixes and matches AI capabilities among its products — 25 new offerings running on PaLM 2 are being announced today alone — the company is looking to ally with other industry leaders to further augment Bard's abilities. Google announced on Wednesday that it is partnering with Adobe to bring its Firefly generative AI to Bard as a means to counter Microsoft's DALL-E 2-powered Bing Chat offering.

Finally, Google shared that it will be implementing a number of changes and updates in response to feedback received from the community since launch. Click on a line of generated code or a chatbot answer, and Bard will provide a link to that specific bit's source. There will be a new Dark theme. And the company is working to add an export feature so that users can easily run generated programming code on Replit or toss their generated works into Docs or Gmail.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/google-bard-transitions-to-palm-2-and-expands-to-180-countries-172908926.html?src=rss
Google is incorporating Adobe's Firefly AI image generator into Bard
Back in March, Adobe announced that it, too, would be jumping into the generative AI pool alongside the likes of Google, Meta, Microsoft and other tech industry heavyweights with the release of Adobe Firefly, a suite of AI features. Available across Adobe's product lineup, including Photoshop, After Effects and Premiere Pro, Firefly is designed to eliminate much of the drudge work associated with modern photo and video editing. On Wednesday, Adobe and Google jointly announced during the 2023 I/O event that both Firefly and the Express graphics suite will soon be incorporated into Bard, allowing users to generate, edit and share AI images directly from the chatbot's prompt.

According to a release from the company, users will be able to generate an image with Firefly, then edit and modify it using Adobe Express assets, fonts and templates within the Bard platform directly — and even post it to social media once it's ready. Those generated images will reportedly be of the same high quality that Firefly beta users are already accustomed to, as they are all created from the same database of Adobe Stock images, openly licensed and public domain content.

Additionally, Google and Adobe will leverage the latter's existing Content Authenticity Initiative to mitigate some of the threats that generative AI poses to creators. This includes a "do not train" list, which will preclude a piece of art's inclusion in Firefly's training data, as well as persistent tags that will tell future viewers whether or not a work was generated and what model was used to make it. Bard users can expect to see the new features begin rolling out in the coming weeks, ahead of a wider release.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/google-is-incorporating-adobes-firefly-ai-image-generator-into-bard-174525371.html?src=rss
Google Photos will use generative AI to straight-up change your images
Google is stuffing generative AI into seemingly all its products, and that now includes the photo app on your phone. The company has previewed an "experimental" Magic Editor tool in Google Photos that can not only fix photos, but outright change them to create the shot you wanted all along. You can move and resize subjects, stretch objects (such as the bench above), remove an unwanted bag strap or even replace an overcast sky with a sunnier version.

Magic Editor will be available in early form to "select" Pixel phones later this year, Google says. The tech giant warns that output might be flawed, and that it will use feedback to improve the technology.

Google is no stranger to AI-based image editing. Magic Eraser already lets you remove unwanted subjects, while Photo Unblur resharpens jittery pictures. Magic Editor, however, takes things a step further. The technology adds content that was never there, and effectively lets you retake snapshots that were less-than-perfectly composed. You can manipulate shots with editors like Adobe's Photoshop, of course, but this is both easier and included in your phone's photo management app.

The addition may be helpful for salvaging photos that would otherwise be unusable. However, it also adds to the list of ethical questions surrounding generative AI. Google Photos' experiment will make it relatively simple to present a version of events that never existed. It may be that much harder to trust someone's social media snaps, even though they're not entirely fake.

Follow all of the news from Google I/O 2023 right here.

This article originally appeared on Engadget at https://www.engadget.com/google-photos-will-use-generative-ai-to-straight-up-change-your-images-171014939.html?src=rss
‘Hollow Knight: Silksong’ delayed and there's no updated release window
Hollow Knight: Silksong, the long-awaited sequel to 2017’s indie blockbuster Hollow Knight, has been delayed, as announced by Team Cherry developer Matthew Griffin on Twitter. The sidescrolling metroidvania platformer was scheduled for release in the first half of 2023, but, well, it’s May, so that isn’t happening. Additionally, the company hasn’t provided an updated release window.

“Development is still continuing. We're excited by how the game is shaping up, and it's gotten quite big, so we want to take the time to make the game as good as we can,” wrote Griffin.

Silksong was originally announced in early 2019 and hands-on gameplay demos followed in June, so folks naturally assumed that the game was close to launch. Time marched on with no real updates until last year, when it was announced that Silksong would launch on Xbox Game Pass, in addition to just about every other platform. At that time, Xbox corporate vice president Sarah Bond said that the sequel would release by June of this year.

Like another long-delayed sequel, Silksong was originally conceived as a simple expansion to Hollow Knight, with the game’s occasional antagonist Hornet acting as the main character. Since then, the game has apparently gotten much more ambitious. Team Cherry says it’ll release more details as we get closer to release.

Whenever it finally graces us with its presence, Hollow Knight: Silksong will launch on Nintendo Switch, PlayStation 4, PlayStation 5, Windows PC, Xbox One and Xbox Series X, with Game Pass availability from day one. In the meantime, there is no shortage of metroidvania games out there to hold you over.

This article originally appeared on Engadget at https://www.engadget.com/hollow-knight-silksong-delayed-and-theres-no-updated-release-window-164549946.html?src=rss
Anker charging accessories are up to 42 percent off on Amazon
If your charging gear is in need of a refresh, now might be a decent time to upgrade, as Anker has once again discounted a range of wall chargers, cables and power banks on Amazon. For more heavy-duty needs, a number of the company's portable power stations are also on sale.

Among the noteworthy deals here, the Anker 735 Charger is down to $38.41, which is within a dollar of its all-time low. We've seen this discount a few times before, but normally, the wall charger retails closer to $50. This is a slightly older version of the "best 65-watt charger" pick in our guide to the best fast chargers. The newer device is also called the 735 Charger, confusingly, and features smarter temperature monitoring and power distribution, but the old model delivers the same 65W of power in a similarly travel-friendly frame. Generally speaking, that's enough power to charge many smartphones and tablets at around full speed and refill some smaller laptops.

Both of the charger's USB-C ports can reach that max charging rate, plus there's a USB-A port for topping up lower-power devices. Just note that each port will output less power if you use multiple ports at once. The updated model is also on sale for $48 with an on-page coupon.

A couple of hybrid chargers are discounted as well, with the 45W, 5,000 mAh Anker 521 Power Bank down to $42 and the 65W, 10,000 mAh Anker 733 Power Bank down to $70. (Clip the on-page coupon in both cases to see the discount.) These devices are on the larger side, but they can serve as both a portable power bank and a wall charger with fold-up plugs. The 733's discount matches the lowest price we've seen, while the 521 is about $18 below its usual street price.

Beyond that, the company's six-foot PowerLine II USB-C to Lightning cable is down to a low of $9, while the ultracompact 20W Anker 511 Charger is within a dollar of its best price of $12.
Anker runs these kinds of discounts fairly often, but we've found its charging gear to provide good value in several buying guides, so this is a good chance to save.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/anker-charging-accessories-are-up-to-42-percent-off-on-amazon-153043864.html?src=rss
Arturia's MicroFreak gets sample playback, granular synthesis and gorgeous Stellar edition
I routinely state that the Arturia MicroFreak is one of my favorite budget synths. But honestly, I’m doing it a disservice. I think the MicroFreak is one of the best synths out there at any price. That has as much to do with the versatility of its sound engine as it does with Arturia’s relentless updates. Since being introduced in 2019, this little synth has seen countless updates, most of them adding fairly significant new features.

The forthcoming update to firmware version 5.0 is arguably one of the biggest. It adds a sample playback engine and three different granular engines. That brings the total number of synth engines on the MicroFreak to a frankly absurd 22. Sure, not all of the engines are as usable as the others, and some are relatively similar to each other. But still, it’s a lot of versatility in a small $350 package.

Engadget · MicroFreak V5 Demo

When the new firmware drops around May 30th (unfortunately, there are still some kinks being ironed out), there will also be an update to MIDI Control Center that will enable users to upload their own samples (up to 24 seconds in length) for playback through the four new engines. The total number of preset slots on the MicroFreak will expand to 512, and there will be two new modulation options added to the utility menu: random per-key and snap.

Since the MicroFreak had a wavetable engine to start, and in 2021 Arturia added support for custom user wavetables, sample playback should be simple enough, since the wavetables are just .wav files. And if there’s sample playback and wavetable support, then granular isn’t too much of a stretch, since that’s just chopping up a .wav file and playing it back in little rearranged bits. Still, granular synthesis is pretty hard to come by in a hardware instrument, so it’s worth celebrating here.
(Side note: I would expect to see more granular hardware in the near future, as the technique seems to be gaining popularity and processors are now powerful enough to make building such engines trivial.)

Photo by Terrence O'Brien / Engadget

The sample engine is relatively rudimentary. You can change the start and end point and set a loop point, but that’s it. There’s no time stretching or anything (that I’m aware of); pitch changes are achieved simply by speeding up and slowing down playback. But there’s something about the way that is handled on the MicroFreak that sounds incredible. It’s got an almost late-’70s, early-’80s digital vibe that speaks to my love of grit. And many of the included samples at least remain usable over a wide range of octaves. In fact, I’d say that at the lower extremes they’re not just usable, but fantastic sounding — especially that PGTS Keys sample. And then, obviously, you’ve got the rest of the synth at your disposal to add filtering and modulation.

One trick I quickly fell in love with was using the new per-key random setting to slightly alter the pitch. Combining that with the lo-fi piano sample gets you something that’s never quite in tune, but never so far out that it sounds bad. It does require some minor menu diving, but it’s worth it. And then the key/arp row in the mod matrix becomes another source of randomization. And honestly, I didn’t really think I needed more randomization on a synth that has both sample-and-hold and random options for the LFO, plus the Spice and Dice parameters for injecting chaos into your arps and sequences, but here we are.

I assume that instead of one granular engine, it’s broken down into three separate engines in part because of the limited controls on the front panel. Scan plays a sample from start to finish (mostly), but it allows you to control the speed at which the grain moves through the file, so you can get really lo-fi time-stretching effects.
But if you modulate the speed at which it scans and set the density (number of grains) reasonably low and the chaos reasonably high, you get this sort of warped-vinyl tremolo sound. I especially love the way it sounds on the Braam brass sample. The biggest con here is that samples won’t loop in the Scan engine, so you can't get good drones out of it.

Photo by Terrence O'Brien / Engadget

Cloud, on the other hand, is built for drones. It plays back multiple overlapping grains, looping around the file to create alien textures perfect for scoring a retro sci-fi point-and-click adventure. This is the sort of thing that people immediately associate with granular synthesis. It has a strange character that can’t be mistaken for anything else.

Lastly, the Hit engine is all about percussion. This is where you go to create clicky sound effects and stuttering, glitchy rhythm tracks, even if the source material isn’t a drum sample. In fact, I’d say the results are often more interesting when working with what was originally a melodic sample. Though I won’t pretend it’s not loads of fun to pop an 808 kick in there and just let it create relentless, skull-crushing avalanches of bass.

Photo by Terrence O'Brien / Engadget

It’s rare to see a company continue to add this many features to a product years after its release, whether we’re talking about a synth, a phone or a camera. But Arturia deserves credit (Novation too, just for the record) for continuing to devote time and resources to even its entry-level products. Perhaps the most shocking thing about the MicroFreak 5.0 update is that we haven’t seen its big brother, the MiniFreak, receive a single significant update. Despite costing nearly twice as much, it now has six fewer synth engine options, at least by my count.

While you wait for the free update to drop on May 30th, take a few moments to ogle the new MicroFreak Stellar limited edition pictured at the top of this post.
It swaps the original's birds and floral flourishes for a space-themed design and trades the white keys for a monochromatic, all-black deck. The MicroFreak Stellar is available now for $399.

This article originally appeared on Engadget at https://www.engadget.com/arturias-microfreak-gets-sample-playback-granular-synthesis-and-gorgeous-stellar-edition-150008075.html?src=rss
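The two mechanisms described above — pitch shifting by simply varying playback speed, and granular playback that chops a .wav into short grains and replays them — can be sketched in a few lines of Python. This is a toy illustration of the general techniques, not Arturia's implementation; the function names and parameters are invented for this example:

```python
import math
import random


def resample(samples, speed):
    """Playback-rate pitch shift: speed > 1 plays faster and raises pitch,
    speed < 1 plays slower and lowers it (the 'MicroFreak-style' approach)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighboring samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out


def granular(samples, grain_len=512, density=8, chaos=0.0, rng=None):
    """Chop the source into grains and replay them in order, with optional
    randomized start positions ('chaos'), loosely like a scan-type engine."""
    rng = rng or random.Random(0)
    out = []
    pos = 0
    for _ in range(density):
        start = pos
        if chaos > 0:
            # jitter the grain start point by up to chaos * file length
            jitter = int(chaos * len(samples))
            start = max(0, min(len(samples) - grain_len,
                               start + rng.randint(-jitter, jitter)))
        out.extend(samples[start:start + grain_len])
        pos = (pos + grain_len) % max(1, len(samples) - grain_len)
    return out
```

With `chaos` at zero, `granular` simply marches through the file grain by grain (lo-fi time stretching falls out of changing how far `pos` advances per grain); raising `chaos` scatters the grain starts, which is roughly where the warped, cloud-like textures come from.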
Nikon's Z8 mirrorless camera offers 8K60p RAW video and 20fps burst speeds
Nikon has announced the 45.7-megapixel Z8, a powerful full-frame mirrorless camera with up to 8K60p RAW video, 20fps RAW burst speeds and more. It's effectively a slimmed-down version of Nikon's Z9, and shares the latter's stacked, backside-illuminated (BSI) sensor and complete lack of a mechanical shutter. The Z8 is missing the Z9's unlimited video recording, but it's also $1,500 cheaper.

Nikon is best known for photography, but the Z8's headline feature is the 8K60p N-RAW video. There's an interesting story there, as the cinema camera manufacturer RED has used its patents to stop other camera companies from using RAW video in the past. However, RED's lawsuit against Nikon was dismissed late last month, allowing Nikon to use N-RAW (a compressed 12-bit RAW codec developed in conjunction with a company called intoPIX) in any of its cameras. It can also capture 12-bit ProRes RAW video.

Along with 8K60p, the Z8 supports 4K capture at up to 120fps and 10-bit ProRes, H.264 and H.265 formats. It also offers exposure tools like waveforms, customizable autofocus and more. As mentioned, the smaller body means it can't record all video formats for an unlimited time like the Z9 can. Rather, you're limited to 90 minutes for 8K30p and two hours for 4K60p without overheating. With the stacked sensor, rolling shutter should be very well controlled, just like on the Z9.

In terms of photography, the Z8's burst speeds aren't restrained by a mechanical shutter, because there isn't one. As such, you can capture 14-bit RAW+JPEG images at up to 20fps, mighty impressive for such a high-resolution camera. It comes with settings designed for portrait photographers, like skin softening and human-friendly white balance.

It offers face, eye, vehicle and animal detection autofocus, promising AF speeds at the same level as the (excellent) Z9.
It can recognize nine types of subjects automatically, including eyes, faces, heads and upper bodies for both animals and people, along with vehicles and more.

The Z8's magnesium-alloy body may be smaller than the Z9's, but it's just as dust- and weather-resistant. It's also much the same in terms of controls, with a generous array of dials and buttons to change settings. Battery life is good at 700 shots max (CIPA) and two-plus hours of 4K video shooting, but if you need more, you can get the optional MB-N12 battery grip ($350).

Other features include 6.0 stops of in-body stabilization with compatible lenses, which is good but not as good as recent Sony, Canon and Panasonic models. The electronic viewfinder (EVF) has a relatively low 3.69 million dots of resolution, but also very low lag and a high 120Hz refresh rate. Unfortunately, the 3.2-inch, 2,100K-dot rear display only tilts up and doesn't flip out, so the camera won't be suitable for many vloggers — a poor decision on Nikon's part, in my opinion.

It has one SD UHS-II slot and one CFexpress card slot that supports the speeds of up to 1,500 MB/s required for internal 8K RAW recording. That differs from the Z9, which has two CFexpress card slots. On top of the usual USB-C charging port, it has a super-speed USB communication terminal for rapid data transfers. It also comes with a full-sized HDMI connector for external video recording and monitoring, along with 3.5mm headphone and microphone ports.

The Nikon Z8 goes on sale on May 25th, 2023 for $4,000. That's $1,500 less than the $5,500 Z9, $2,500 less than the Sony A1 and $700 more than Canon's R5 — with far less serious overheating issues.

This article originally appeared on Engadget at https://www.engadget.com/nikons-z8-mirrorless-camera-offers-8k60p-raw-video-and-20fps-burst-speeds-141556946.html?src=rss
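To see why internal 8K RAW demands such fast cards, here's a back-of-envelope bandwidth estimate. The frame dimensions and the one-12-bit-value-per-photosite assumption are mine, not Nikon's published figures, and N-RAW's actual compression ratio isn't public:

```python
# Rough uncompressed bandwidth for 8K60p 12-bit Bayer RAW.
# Assumed figures: 8K UHD frame (7680 x 4320), one 12-bit value per photosite.
width, height = 7680, 4320
fps, bits_per_sample = 60, 12

bytes_per_frame = width * height * bits_per_sample / 8
mb_per_sec = bytes_per_frame * fps / 1e6  # decimal megabytes per second

print(f"{mb_per_sec:.0f} MB/s uncompressed")  # prints "2986 MB/s uncompressed"
```

Under those assumptions, the uncompressed stream is roughly 3 GB/s, so a compressed codec like N-RAW only needs to roughly halve it to fit within the card slot's 1,500 MB/s.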
Vast and SpaceX plan to launch the first commercial space station in 2025
Another company is racing to launch the first commercial space station. Vast is partnering with SpaceX to launch its Haven-1 station as soon as August 2025. A Falcon 9 rocket will carry the platform to low Earth orbit, with a follow-up Vast-1 mission using Crew Dragon to bring four people to Haven-1 for up to 30 days. Vast is taking bookings for crew aiming to participate in scientific or philanthropic work. The company has the option of a second crewed SpaceX mission.

Haven-1 is relatively small. It isn't much larger than SpaceX's capsule, and is mainly intended for science and small-scale orbital manufacturing for the four people who dock. Vast hopes to make Haven-1 just one module in a larger station, though, and it can simulate the Moon's gravity by spinning.

As TechCrunch notes, the 2025 target is ambitious and might see Vast beat well-known rivals to deploying a private space station. Jeff Bezos' Blue Origin doesn't expect to launch its Orbital Reef until the second half of the decade. Voyager, Lockheed Martin and Nanoracks don't expect to operate their Starlab facility before 2027. Axiom stands the best chance of upstaging Vast with a planned late 2025 liftoff.

There's no guarantee any of these timelines will hold given the challenges and costs of building an orbital habitat — this has to be a safe vehicle that comfortably supports humans for extended periods, not just the duration of a rocket launch. However, this suggests that stations represent the next major phase of private spaceflight after tourism and lunar missions.

This article originally appeared on Engadget at https://www.engadget.com/vast-and-spacex-plan-to-launch-the-first-commercial-space-station-in-2025-134256156.html?src=rss
Roku unveils a $99 smart home monitoring system
Roku is diving further into smart home equipment, and the price is once again a major draw. The company has unveiled a Home Monitoring System SE kit that includes two entry sensors, a motion sensor, a hub (with siren) and a keypad for $99. You might pay less for the whole setup than you would for individual parts from rival bundles. As with earlier products, Wyze co-developed the hardware. You can monitor activity on your phone, but Roku unsurprisingly touts integration with its media players and compatible TVs: you can get an entry alert while you're watching a show. You can expand the system as needed, and an optional $100-per-year professional monitoring plan uses Noonlight to provide immediate help from agents. The Home Monitoring System SE is available today through Roku and Walmart, and will reach Walmart stores in the US on May 19th alongside a currently unpriced light strip and an outdoor-oriented solar panel. A Roku OS update enabling TV monitoring notifications (plus camera history and voice control) is due in the "coming weeks." The expansion may seem odd for a company that's closely associated with streaming devices, but it comes at a critical moment. Roku is still laying off workers as it grapples with a rough economy, and is concentrating on projects it believes are more likely to pay off. Smart home products could help supplement its core business while competing against Amazon and others that already have a wide range of home-oriented gear. This article originally appeared on Engadget at https://www.engadget.com/roku-unveils-a-99-smart-home-monitoring-system-130002352.html?src=rss
The best self-care gifts for graduates
Graduates have spent the past couple of years hustling. Between coursework, jobs, family responsibilities and, of course, the pandemic, they likely haven't had a ton of time to devote to themselves. Now that they've donned their cap and gown, it's time for them to enjoy some well-deserved rest and relaxation. If you want to get the grad in your life a treat that can help them do that, we at Engadget have some ideas based on how we like to treat ourselves when we need a break.

Theragun Mini

With many gyms and exercise facilities still closed, you might be dabbling in some workouts from home alongside working from home. You might also have overextended yourself, leading to tender shoulders, thighs and calves. I may have done just that (several times over) but have been able to ease some of the soreness — or at least make myself feel better — with my trusty Theragun Mini. We’ve already covered Theragun’s flagship percussive therapy “gun,” the Elite, but you might find the Theragun Mini does almost as much for far less. It’s also less bulky. The $200 gun is a solid triangle but is small enough to grasp with a single hand, directing the vibrations to that one part of your hammies that needs some TLC. There are three speed settings, and the Mini benefits from Theragun’s quieter motors, so it doesn’t sound to anyone nearby like you’re drilling a shelf. It’s definitely not quiet, but you can still hear the TV or hold a conversation over the massaging. If you’re looking to upgrade, Theragun also offers a peripheral that doubles the number of heads on any of its massage guns. The Duo Adapter offers a wider spread, meaning it feels like it takes less time to hammer out the aches. It does reduce the sheer force of a single massage head, but if you have any particularly knotty areas, you can easily take the adapter off to really work them out. All that said, these devices aren’t a panacea for everything that aches after physical exertion.
Don’t forget hydration, sleep and nutrition, which are all, sadly, sold separately. — Mat Smith, U.K. Bureau Chief

Bearaby weighted blanket

Weighted blankets have been shown to reduce anxiety-linked cortisol while increasing sleep-friendly serotonin, but most of them are filled with tiny glass beads. Not only did that scare me off, but I’ve also heard that the beads can shift over time, which could lead to uneven weight distribution down the road. Plus, most weighted blankets have a bland design. The Bearaby weighted blankets are different. Instead of filling a duvet with micropellets or beads, Bearaby blankets are handmade from a gorgeous chunky-knit material that’s more reminiscent of a cozy sweater than a comforter. They also come in beautifully luscious colors ranging from Cloud White to Evening Rose. After weeks of using one, I’ve found that I’m nodding off faster and staying asleep longer, which is a big deal for someone plagued by chronic insomnia. — Nicole Lee, Commerce Writer

Cosori Electric Kettle

I believe one of the best things anyone can do for themselves on a regular basis is pause. We're all busier than ever with kids, partners, jobs and more, and it's worth taking a break during the day to do something for yourself — in my case, that's often making a cup of tea after lunch. Now, I'm no tea connoisseur, but I've definitely upped my brewing game as I've tried more loose-leaf teas (my current favorites are from Harney & Sons and Adagio). I bought a Cosori electric kettle to help with my tea- and coffee-making routines and it's become one of the most used items in my kitchen. It heats water to the precise temperature I need for a strong black tea or a subtle green, and it does so in a relatively speedy fashion. The "keep warm" function also holds water at the right temperature when I, inevitably, get distracted by Slack messages.
— Valentina Palladino, Senior Commerce Editor

Headspace subscription

You might think that things are going to get easier and less stressful now that you've graduated. But, oh boy, are you wrong. Now that you're out in the so-called "real world," taking care of your mental health is going to be more important than ever. Headspace can be a great resource for a little self-care. It has a ton of guided meditations and mindful exercises that claim to help you relax, build self-control and boost your creativity. But there are also playlists of ambient music and soundscapes to help you focus, including some curated by big-name artists like Arcade Fire, St. Vincent and Sudan Archives. You’ll also quickly learn that there are plenty of other things to lose sleep over beyond cramming for a final. And for that, Headspace has Sleepcasts. These combine guided relaxation exercises with soundscapes and soothing narration to help lull you to sleep. Honestly, the Sleepcasts alone are worth the price of an annual subscription. — Terrence O'Brien, Managing Editor

Dyson Hot & Cool Bladeless Fan and Heater

Sometimes upgrading your living environment can be the best form of self-care. I've learned how true that is since I began working from home most of the time (pre-pandemic). One of the best home improvements I've made as of late was investing in a Dyson Hot & Cool fan. The decision ultimately came down to necessity — the heating in our apartment isn't the greatest, and New York winters can be tough. I picked up the AM09 Hot & Cool Bladeless Fan and Heater when I found it on sale last fall, and it made all the difference. It not only cut the chill in our bedroom, a space that often feels 10 degrees colder than other areas of the apartment, but also made the room enjoyable to be in even on the coldest days. And on weekends when my partner and I parked ourselves in our living room, it was easy to pick up the Dyson and tote it to where we were.
The remote control is super handy, too, letting us adjust the temperature, oscillation and timer functions without getting up from the couch. While you can't find the AM09 anymore, Dyson has upgraded most of its Hot & Cool fans to also be air purifiers, so you'll essentially get a 3-in-1 device. — V.P.

MoonPod

As a self-care gift for your grad, consider getting them a MoonPod. It’s especially useful for those who might find themselves working from home, as it provides a break from sitting on a stiff office chair and is a more comfortable alternative to a couch. According to the company, sitting on the MoonPod can help reduce stress and anxiety, as it mimics the sensations of flotation therapy. I’m no flotation expert, but I can definitely attest to the fact that it’s extremely comfortable. I also love that the MoonPod is so malleable; you can stand it upright to use as a slouchy armchair or lay it flat so you can lie down on it. Your grad will appreciate that they can use it while working and when they need a quick nap between meetings. — N.L.

This article originally appeared on Engadget at https://www.engadget.com/best-self-care-wellness-gifts-for-graduates-130039470.html?src=rss
The Morning After: Nintendo wants to put several Switches ‘in every home’
After selling 23 million Switches two years ago and 18 million in the last year, Nintendo expects demand for the aging console to continue to fall. It's forecasting sales of 15 million for next year and isn't even confident of that figure, according to its latest earnings report. "Sustaining the Switch’s sales momentum will be difficult in its seventh year," said president Shuntaro Furukawa in a call. "Our goal of selling 15 million units this fiscal year is a bit of a stretch." To achieve that, he added: "We try to not only put one system in every home but several in every home.” Well, at least the new Zelda game is just around the corner…

– Mat Smith

The Morning After isn’t just a newsletter – it’s also a daily podcast. Get our daily audio briefings, Monday through Friday, by subscribing right here.

The biggest stories you might have missed

What to expect at Google I/O 2023
Pokémon developer Game Freak is partnering with Private Division on a new action franchise
Volvo’s compact electric SUV will be the EX30
Apple is bringing Final Cut Pro and Logic Pro to iPad on May 23rd
The best travel gear for graduates

Spotify has reportedly removed tens of thousands of AI-generated songs

Universal Music claimed bots inflated the number of streams.

Spotify has reportedly pulled tens of thousands of tracks from generative AI company Boomy. It's said to have removed seven percent of the songs created by the startup's systems, which underscores the swift proliferation of AI-generated content on music streaming platforms. Universal Music reportedly told Spotify and other major services that it detected suspicious streaming activity on Boomy's songs, suggesting bots were used to inflate play counts and glean more money from Spotify, which pays out on a per-listen basis.

Continue reading.

VanMoof simplifies things for its new, cheaper S4 and X4 e-bikes

Pick from a standard or a step-through frame.

VanMoof is trying to deliver premium e-bike features and build quality for substantially less money.
At $2,498, that’s $1,000 less than the company’s top-of-the-range S5 and X5 bikes, but that doesn’t make them exactly cheap. VanMoof co-founder Ties Carlier said in a press release that this was an attempt at a “more simple, more accessible and more reliable” e-bike. One major simplification is the transition to adaptive motor support and a two-speed gear hub. The SX5 series had a three-speed gear system, and while it had a torque sensor to assist, adaptive motor support is new for these cheaper e-bikes. The company expects the range to be equivalent to both the SA5 and older SX3 e-bikes: 37-62 miles (60-150 km), depending on conditions and rider. Both the VanMoof S4 and X4 are available to pre-order now.

Continue reading.

Apple Watch Series 9 may finally get a new processor

The watches have used the same one since 2020.

The Apple Watch has effectively used the same processor since 2020's Series 6, but it's poised for a long-due upgrade. Bloomberg's Mark Gurman claims the Apple Watch Series 9 will use a truly "new processor." He believes the CPU in the S9 system-on-chip will be based on the A15 chip, which first appeared in the iPhone 13 family. Apple has historically introduced new Apple Watches in September, so it shouldn’t be too long a wait.

Continue reading.

Twitter is going to purge and archive inactive accounts

Elon Musk says it's important to 'free up abandoned handles.'

Twitter owner Elon Musk has warned the social network’s users they may see a drop in followers because the company is purging accounts that have "had no activity at all" for several years. Musk's announcement was quite vague, so we'll have to wait for Twitter to announce more specific rules, such as how long "several years" actually is.
At the moment, though, the website has yet to update its inactive account policy page, which only states users need to log in every 30 days to keep their account active.

Continue reading.

WhatsApp begins testing Wear OS support

The beta lets you record voice messages or chat on Google-powered wearables.

WhatsApp is now testing an app for Wear OS 3 on devices like the Galaxy Watch 5, Pixel Watch and others. It has much of the functionality of the mobile versions, showing recent chats and contacts while allowing you to send voice and text messages. WhatsApp offers a circular complication that shows unread messages on your watch's home page. There are also two tiles for contacts and voice messages to let you quickly access people or start a voice message recording. It's a significant release for Wear OS 3, bringing an ultra-popular app that most people have on their phones, in turn furthering Google's aim of getting more developers on the platform.

Continue reading.

A robot puppet rolled through San Francisco singing Vanessa Carlton hits

Only 951 miles to go!

Twenty-one years after Vanessa Carlton released her debut single, ‘A Thousand Miles,’ a team of hobbyist roboticists has brought Carlton’s music back to the public ear — this time, to the streets of San Francisco, with an animatronic performer and, thankfully, a disco ball.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-nintendo-wants-to-put-several-switches-in-every-home-111515506.html?src=rss
UK citizen pleads guilty to 2020 Twitter hack and other cybercrimes
Joseph James O'Connor has pleaded guilty to playing a role in various cybercrimes, including the July 2020 hack that took over hundreds of high-profile Twitter accounts. O'Connor, known online as PlugwalkJoe, is originally from Liverpool and was extradited from Spain to the US in April. If you'll recall, the perpetrators of the 2020 Twitter hack hijacked accounts owned by popular personalities, including Bill Gates, Barack Obama and Elon Musk, and promoted crypto scams under their names. In 2021, Graham Ivan Clark, the supposed teenage mastermind behind the breach, pleaded guilty in return for a three-year prison sentence. According to the Justice Department, O'Connor communicated with his co-conspirators in that Twitter breach about purchasing unauthorized access to Twitter accounts. He allegedly purchased access to at least one Twitter account himself for $10,000. He was also apparently involved in the hack of a TikTok account with millions of followers, as well as a Snapchat account, via SIM swapping. In both cases, O'Connor and his co-conspirators stole sensitive personal information from the victims and then threatened to release it to the public. While the DOJ didn't identify the victims, The Guardian says press reports named them as TikTok star Addison Rae and actor Bella Thorne. From March to May 2019, O'Connor was also allegedly involved in the infiltration of a Manhattan-based crypto company to steal $794,000 worth of cryptocurrency. The group used SIM swapping to target three of the company's executives and successfully pulled it off with one of them. Using the compromised executive's credentials, they gained unauthorized access to the company's accounts and computer systems.
They then laundered the stolen cryptocurrency by transferring it multiple times and running it through crypto exchanges. O'Connor has pleaded guilty to a lengthy list of charges, including conspiracy to commit wire fraud and conspiracy to commit money laundering, both of which carry a maximum penalty of 20 years in prison. He is now scheduled for sentencing on June 23rd. This article originally appeared on Engadget at https://www.engadget.com/uk-citizen-pleads-guilty-to-2020-twitter-hack-and-other-cybercrimes-102634567.html?src=rss
Uber starts offering flight bookings in the UK
Uber has started offering domestic and international flight bookings in the UK and will continue rolling them out across the whole region over the coming weeks, according to the Financial Times. The company's general manager for the UK, Andrew Brem, told the publication that this is "the latest and most ambitious step" it has taken toward its goal of becoming a wider travel booking platform. Uber first revealed its plans to add train, bus and flight bookings to its UK app in April last year and launched the first two options a few months later. Brem said train bookings have been "incredibly popular" so far and have grown 40 percent every month since they became available, though he didn't give the Times concrete ticket sales numbers. For its flights, the company has teamed up with travel booking agency Hopper. The Times says Uber will take a small commission from each sale and could add a booking fee on top of its offerings in the future. It's unclear how much the company's cut actually is, but it charges its partner drivers 25 percent on all fares. As the Times notes, offering flight bookings could also help grow Uber's main ride-hailing business even further, since users are likely to book rides to and from the airport through the service as well. Although flight bookings are only available in the UK at the moment, the region — one of Uber's biggest markets outside North America — only serves as a testing ground for the company's plans. Brem told the publication that Uber hopes to expand flight offerings to more countries in the future, but it has no solid plans yet. Uber did offer $200 chopper rides in the US back in 2019, but that service was discontinued amid pandemic-related lockdowns. This article originally appeared on Engadget at https://www.engadget.com/uber-starts-offering-flight-bookings-in-the-uk-074558236.html?src=rss
MediaTek's newest Dimensity chip is built for gaming phones
MediaTek has a simple answer to the Qualcomm Snapdragon 8 Gen 2 you see in many gaming phones: deliver an uprated version of last year's high-end hardware. The brand has unveiled a Dimensity 9200+ system-on-chip with improvements that will be particularly noticeable in games. You'll find higher clock speeds for the main Cortex-X3 core (up from 3.05GHz to 3.35GHz), the three Cortex-A715 cores (from 2.85GHz to 3GHz) and the four Cortex-A510 efficiency cores (1.8GHz to 2GHz). More importantly, the company says it has "boosted" the Immortalis-G715 graphics by 17 percent, so games that were borderline playable before should be smoother. The Dimensity 9200+ is built using TSMC's newest 4-nanometer process, potentially extending battery life and allowing for cooler, slimmer phones. The WiFi 7 support, AI processing unit and image signal processor are unchanged, although there's not much room to complain: WiFi 7 still isn't a finished standard, for example, and routers that support it are still extremely rare. You won't have to wait long to see the first phones based on this chip. MediaTek expects the first Dimensity 9200+ phones to launch later this month, although it hasn't named customers as of this writing. The question is whether this refresh is enough. The Snapdragon 8 Gen 2 has only a slight edge over the regular 9200, so a higher-clocked 9200+ might emerge victorious. However, Qualcomm doesn't usually sit still — it likes to ship mid-cycle upgrades of its own. Nonetheless, this may be an important release if you're a mobile gamer: it gives Qualcomm fresh competition in the Android gaming world, which could lead to more variety in phones as well as more aggressive pricing. This article originally appeared on Engadget at https://www.engadget.com/mediateks-newest-dimensity-chip-is-built-for-gaming-phones-070007357.html?src=rss
WhatsApp bug is making some Android phones falsely report microphone access
Google and WhatsApp have confirmed they are aware of a bug that makes it appear as if WhatsApp is accessing phones’ microphones unnecessarily on some Android devices. The issue first cropped up a month ago, but gained new attention after a Twitter engineer tweeted about it in a post that was boosted by Elon Musk. An image shared by Twitter engineer Foad Dabiri appeared to show that the microphone had been repeatedly running in the background while he wasn’t using the app. He tweeted a screenshot from Android’s Privacy Dashboard, which tracks how often apps access a device’s microphone and camera.
Apple's new Beats Studio headphones could support personalized spatial audio
It has been more than five years since Beats last refreshed its top-end Studio headphones, but a new model could be on the way. According to 9to5Mac, Apple is “about” to launch a set of Beats Studio Pro headphones. The new model reportedly features a custom Beats chip that promises improved active noise cancellation and transparency mode performance. For the first time, the Studio line may also feature personalized spatial audio. Additionally, 9to5Mac speculates the new model will come with a USB-C port for fast charging. Visually, the headphones look similar to the current Studio3 model, though it appears Apple has done away with the “Studio” branding found on the side of those headphones. Based on codenames found by 9to5Mac in the internal files for iOS 16.5’s release candidate, Apple collaborated with fashion designer Samuel Ross, best known for starting the clothing label A-Cold-Wall, on the design of the Beats Studio Pro. Images the outlet found in those same files suggest Apple will offer the headphones in four colorways: blue, black, brown and white. It’s unclear if Apple intends for the Beats Studio Pro to replace the $349 Studio3 headphones, or if the company plans to market them as a more premium offering. According to 9to5Mac, Apple is also working on a set of Studio Buds+, which will reportedly support audio sharing, automatic device switching and Hey Siri integration. The outlet suggests both products will arrive in stores soon. This article originally appeared on Engadget at https://www.engadget.com/apples-new-beats-studio-headphones-could-support-personalized-spatial-audio-200614057.html?src=rss
Pokémon developer Game Freak is partnering with Private Division on a new action franchise
Japanese developer Game Freak is best known for a little franchise called Pokémon, but throughout the years it has dabbled in other genres, like the strategy title Little Town Hero and the rhythm-based platformer HarmoKnight. Now, the company is betting big on a brand-new action-adventure IP codenamed Project Bloom. Game Freak is teaming up with Private Division, a Take-Two Interactive publishing label. You may not recognize the company by name, but it’s been behind a slew of well-regarded titles throughout the years, like The Outer Worlds, Kerbal Space Program, Rollerdrome and OlliOlli World, among others. As for Game Freak, last year’s Pokémon Legends: Arceus proved it could handle open worlds and action-heavy gameplay. The company says Project Bloom is a “bold and tonally different” IP from its prior work. Not much is known about the game, other than some concept art and a short video announcing the partnership between the two developers. We also won’t be playing it anytime soon, as the “sweeping new action-adventure game” isn’t slated for release until Take-Two's 2026 financial year, which runs from April 1, 2025, to March 31, 2026. It might be surprising that Take-Two Interactive is publishing the game, and not Nintendo, which has been Game Freak's partner on Pokémon for over 25 years. While Pokémon is co-owned by Nintendo, Game Freak is an independent developer. To that end, no console has been announced as a home for the forthcoming title. As the game won't be released for three years, the home console landscape is likely to look different than it does now; at the very least, the long-rumored followup to the Switch should be out by then. This article originally appeared on Engadget at https://www.engadget.com/pokemon-developer-game-freak-is-partnering-with-private-divison-on-a-new-action-franchise-200045467.html?src=rss
IBM's Watson returns as an AI development studio
Years before everyone was impressed by the human-like text output of ChatGPT and other generative AI systems, IBM's Watson was blowing our minds on Jeopardy. IBM's cognitive computing project famously dominated its human opponents, but the company had much larger long-term goals, such as using Watson's ability to simulate a human thought process to help doctors diagnose patients and recommend treatments. That didn't work out. Now, IBM is pivoting its supercomputer platform into Watsonx, an AI development studio packed with foundation and open-source models that companies can use to train their own AI platforms. If that sounds familiar, it may be because NVIDIA recently announced a similar service with its AI Foundations program. Both platforms are designed to give enterprises a way to build, train, scale and deploy an AI platform. IBM says Watsonx will provide AI builders with a robust series of training models with an auditable data lineage, ranging from datasets for automatically generating developer code or handling industry-specific databases to climate datasets designed to help organizations plan for natural disasters. IBM has already built an example of what the platform can do with that latter dataset in collaboration with NASA, using the geospatial foundation model to convert satellite images into maps that track changes from natural disasters and climate change. Reimagining Watson as an AI development studio might lack the pizazz of a headline-grabbing supercomputer that can beat humans on a TV quiz show — but the original vision of Watson was out of reach for the average person.
Depending on how companies use IBM's new AI training program, you may find yourself interacting with a piece of Watson sometime in the near future. Watsonx is expected to be available in stages, starting with the Watsonx.ai studio in July and expanding with new features later this year. This article originally appeared on Engadget at https://www.engadget.com/ibms-watson-returns-as-an-ai-development-studio-195717082.html?src=rss
Pokémon-inspired art collection comes to LA this summer
A Pokémon-inspired art collection is visiting Los Angeles this summer. The Pokémon × Kogei craftwork exhibition is a collaboration between The Pokémon Company and LA’s Japan House, a space dedicated to spotlighting Japanese culture. The collection includes more than 70 pieces of art crafted by 20 artists, each piece filled to the brim with pocket monsters. The exhibition focuses on crafts over drawings and paintings, as the art spans multiple mediums like lacquer, ceramics, textiles, metalwork and more. The artists involved include metalworks legend Morihito Katsura and sculptor Taiichiro Yoshida, among others. The collection is arranged into three sections. The “Stories” section emphasizes franchise mascot Pikachu. A much-anticipated piece here is called “Pikachu Forest,” which is made from more than 900 strands of lace suspended from the ceiling to create, well, a forest populated by electrified yellow rodents. The “Appearance” section includes many pieces featuring Eevee and its various evolutions, like Jolteon, Vaporeon and Flareon. The final section, called “Life,” seems to be a hodge-podge affair with many different Pokémon. The exhibition starts on July 25th and runs for five full months, all the way until January. Admission is free and the gallery is open seven days a week. Japan House also says it will host events related to the exhibition but hasn't announced any details. In any case, this seems like a great place to meet friends for some Pokémon Go action. This article originally appeared on Engadget at https://www.engadget.com/pokemon-inspired-art-collection-comes-to-la-this-summer-184301946.html?src=rss
Meta's open-source ImageBind AI aims to mimic human perception
Meta is open-sourcing an AI tool called ImageBind that predicts connections between data similar to how humans perceive or imagine an environment. While image generators like Midjourney, Stable Diffusion and DALL-E 2 pair words with images, allowing you to generate visual scenes based only on a text description, ImageBind casts a broader net. It can link text, images / videos, audio, 3D measurements (depth), temperature data (thermal) and motion data (from inertial measurement units) — and it does this without having to first train on every possibility. It’s an early stage of a framework that could eventually generate complex environments from an input as simple as a text prompt, image or audio recording (or some combination of the three). You could view ImageBind as moving machine learning closer to human learning. For example, if you’re standing in a stimulating environment like a busy city street, your brain (largely unconsciously) absorbs the sights, sounds and other sensory experiences to infer information about passing cars and pedestrians, tall buildings, weather and much more. Humans and other animals evolved to process this data for our genetic advantage: survival and passing on our DNA. (The more aware you are of your surroundings, the more you can avoid danger and adapt to your environment for better survival and prosperity.) As computers get closer to mimicking animals’ multi-sensory connections, they can use those links to generate fully realized scenes based only on limited chunks of data. So, while you can use Midjourney to prompt “a basset hound wearing a Gandalf outfit while balancing on a beach ball” and get a relatively realistic photo of this bizarre scene, a multimodal AI tool like ImageBind may eventually create a video of the dog with corresponding sounds, including a detailed suburban living room, the room’s temperature and the precise locations of the dog and anyone else in the scene.
“This creates distinctive opportunities to create animations out of static images by combining them with audio prompts,” Meta researchers said today in a developer-focused blog post. “For example, a creator could couple an image with an alarm clock and a rooster crowing, and use a crowing audio prompt to segment the rooster or the sound of an alarm to segment the clock and animate both into a video sequence.”

(A graph from Meta shows ImageBind’s accuracy outperforming single-mode models.)

As for what else one could do with this new toy, it points clearly to one of Meta’s core ambitions: VR, mixed reality and the metaverse. For example, imagine a future headset that can construct fully realized 3D scenes (with sound, movement, etc.) on the fly. Or, virtual game developers could perhaps eventually use it to take much of the legwork out of their design process. Similarly, content creators could make immersive videos with realistic soundscapes and movement based on only text, image or audio input. It’s also easy to imagine a tool like ImageBind opening new doors in the accessibility space, generating real-time multimedia descriptions to help people with vision or hearing disabilities better perceive their immediate environments. “In typical AI systems, there is a specific embedding (that is, vectors of numbers that can represent data and their relationships in machine learning) for each respective modality,” said Meta. “ImageBind shows that it’s possible to create a joint embedding space across multiple modalities without needing to train on data with every different combination of modalities. This is important because it’s not feasible for researchers to create datasets with samples that contain, for example, audio data and thermal data from a busy city street, or depth data and a text description of a seaside cliff.” Meta views the tech as eventually expanding beyond its current six “senses,” so to speak.
“While we explored six modalities in our current research, we believe that introducing new modalities that link as many senses as possible — like touch, speech, smell, and brain fMRI signals — will enable richer human-centric AI models.” Developers interested in exploring this new sandbox can start by diving into Meta’s open-source code. This article originally appeared on Engadget at https://www.engadget.com/metas-open-source-imagebind-ai-aims-to-mimic-human-perception-181500560.html?src=rss
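The "joint embedding space" Meta describes can be illustrated with a toy sketch: once every modality is mapped into one shared vector space, cross-modal matching reduces to cosine similarity between normalized vectors. The hand-set "embeddings" below are hypothetical stand-ins for illustration only, not ImageBind's actual encoders or outputs.

```python
import numpy as np

def normalize(v):
    # Project onto the unit sphere so a plain dot product equals cosine similarity.
    return v / np.linalg.norm(v)

# Hypothetical embeddings: a real system would produce these with per-modality
# encoders trained so that matching inputs (a dog photo, a bark clip) land nearby.
dog_image = normalize(np.array([0.9, 0.1, 0.0, 0.1]))      # "image" modality
bark_audio = normalize(np.array([0.85, 0.15, 0.05, 0.1]))  # "audio" modality, same subject
city_thermal = normalize(np.array([0.0, 0.2, 0.9, 0.3]))   # unrelated "thermal" sample

def cross_modal_similarity(a, b):
    # Cosine similarity in the shared space; a higher score means the two inputs
    # likely describe the same thing, regardless of which modality each came from.
    return float(np.dot(a, b))

print(cross_modal_similarity(dog_image, bark_audio))    # high: embeddings align
print(cross_modal_similarity(dog_image, city_thermal))  # low: unrelated content
```

Meta's point is that ImageBind learns one such space across six modalities without paired training data for every combination; the sketch only shows why a shared space turns cross-modal retrieval into a simple nearest-neighbor comparison.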