Feed Slashdot


Link https://slashdot.org/
Feed https://rss.slashdot.org/Slashdot/slashdotMain
Copyright Copyright Slashdot Media. All Rights Reserved.
Updated 2026-05-15 15:22
Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes
A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports: Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. "It's remarkable to see potential anatomical brain changes one month after a single dose of any drug," said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. "We don't yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility." [...] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. "It suggests a psychobiological therapeutic action for psilocybin," said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of "plasticity" that could underlie their therapeutic effects. The big question is whether the same occurs in humans. "This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use," he said. 
But while the results were "exciting," the study involved a small number of people, and DTI (diffusion tensor imaging) provides an indirect and limited view of brain connections, he said. Read more of this story at Slashdot.
Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial
Sam Altman's management style came under scrutiny on the seventh day of Elon Musk's high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner took the stand to testify about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman's brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider: The first witness was Mira Murati, OpenAI's former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI's interim CEO after the board briefly ousted Sam Altman. Murati's testimony focused on her concerns about Altman's "difficult and chaotic" management style. She said Altman had trouble "making decisions on big controversial things." He also had a habit of telling people what they wanted to hear. "My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with," said Murati. Murati said that her issue with Altman was not about safety, "it is about Sam creating chaos." She said she supported Altman's return to OpenAI because the company "was at catastrophic risk of falling apart" at the time of his ousting. "I was concerned about the company completely blowing up." Zilis said she was upset that Altman rolled out ChatGPT without involving the board. "It wasn't just me but the entire board raised concern about that whole thing happening without any board communication," she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It "felt super out of left field," she said. 
"How is it the case that we want to place a major bet on a speculative technology?" In a video deposition, Helen Toner, a former member of OpenAI's board who resigned in 2023, said she first became aware of ChatGPT's release when an OpenAI employee asked another board member whether the board was aware of the development. [...] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. "There were a number of things -- the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes," said Toner.
Recap:
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Microsoft Edge Stores Passwords In Plaintext In RAM
Longtime Slashdot reader UnknowingFool writes: Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it using Edge's password manager, Ronning found that he could dump the RAM and recover his password, which was stored in plaintext. Part of the issue is that Edge loads all stored passwords for all sites upon a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads passwords for a specific website when challenged for that site's password. Also, Chrome will delete the password from memory once the password has been filled; Edge does not delete passwords from memory once they are used. Microsoft downplayed the risk, noting that access would require control over a user's PC, such as through a malware infection: "Access to browser data as described in the reported scenario would require the device to already be compromised," Microsoft said. Ronning countered that, using the administrative privileges of one user, it was possible to dump the passwords of other logged-on users. "Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats," Microsoft said. "Browsers access password data in memory to help users sign in quickly and securely -- this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats."
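The contrast drawn above, Chrome wiping a password from memory after autofill while Edge leaves it resident, can be illustrated with a minimal sketch. This is purely illustrative Python, not browser code; the function name and the "use" step are hypothetical stand-ins:

```python
# Illustrative sketch (not actual browser code): keep a secret in a mutable
# buffer so it can be overwritten in place immediately after use, similar in
# spirit to Chrome's reported behavior of deleting filled passwords from memory.
def use_secret(secret: bytearray) -> int:
    """Pretend to autofill with the secret, then wipe it in place."""
    filled_length = len(secret)      # stand-in for actually using the secret
    secret[:] = bytes(len(secret))   # overwrite every byte with zeros
    return filled_length

password = bytearray(b"hunter2")
n = use_secret(password)
assert n == 7
assert all(b == 0 for b in password)  # nothing left in this buffer to scrape
```

A buffer that is never zeroed, by contrast, remains recoverable by anyone who can read the process's memory, which is the scenario Ronning describes.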
Google's AI Search Results Will Now Turn To Reddit For 'Expert Advice'
Google is updating AI Overviews and AI Mode to more prominently surface "Expert Advice" from public discussions, social platforms, forums, blogs, and Reddit. Engadget reports: Via a new "Expert Advice" section that can appear in AI responses, Google will display "a preview of perspectives from public online discussions, social media and other firsthand sources." In the sample screenshot the company provided, quotes from forums, WordPress blogs and Reddit were arranged above links to their respective sources. Google plans to add more context to these links, too, showing "a creator's name, handle or community name," so you can judge at a glance what you might want to click through and read. Google will also start recommending in-depth articles at the end of AI responses for further exploration of a given topic, and link to more sources directly in its generated answers rather than just at the end. If you subscribe to any publications, AI responses will also highlight sources from the subscriptions you link to your Google account.
Valve Releases Steam Controller CAD Files Under Creative Commons License
Valve has released CAD files for the new Steam Controller and its Puck under a Creative Commons license. "The idea is to let enterprising modders create their own Steam Controller add-ons, like skins, charging stands, grip extenders or smartphone mounts," reports Digital Foundry. From the report: The Valve release includes files for the external shell ("surface topology") of the Controller and Puck, with a .STP, .STL and engineering diagram of each device, with the latter showing areas that must remain uncovered to let the device maintain its signal strength and otherwise function as designed. Valve has previously released CAD files for its Steam Deck handheld, Valve Index VR suite and even the original Steam Controller a decade ago, so this release is welcomed but not unexpected. The release is under a fairly restrictive Creative Commons license which allows for non-commercial use and requires attribution and sharing of designs back to the community. However, the license also suggests that commercial entities interested in making accessories for the Steam Controller or its Puck can contact Valve directly to discuss terms. You can find the files here.
Morgan Stanley Undercuts Rivals On Pricing In Crypto Trading Debut
Morgan Stanley is adding crypto trading to E*Trade, with a pilot now underway and a broader rollout planned for the platform's 8.6 million customers later this year. The bank is reportedly undercutting rivals with a 50-basis-point trading fee as it bets traditional finance and DeFi will converge. "By contrast, Robinhood Markets' (HOOD) fees start at 95 bps, Coinbase Global's (COIN) begins at 60 bps, and Charles Schwab (SCHW) will charge 75 bps," notes Seeking Alpha. Morgan Stanley's head of wealth management, Jed Finn, told Bloomberg: "This is much bigger than trading crypto at a cheaper rate. In a way, the strategy is disintermediating the disintermediators."
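For readers unfamiliar with basis points: one basis point is 0.01%. A quick sketch of what the quoted rates mean in dollars on a hypothetical $10,000 trade:

```python
# One basis point (bps) = 0.01% of notional. Fees on a $10,000 trade at the
# rates quoted in the article (starting/entry rates for each platform).
def fee(notional_usd: float, bps: float) -> float:
    return notional_usd * bps / 10_000

rates = {"E*Trade": 50, "Coinbase": 60, "Schwab": 75, "Robinhood": 95}
fees = {name: fee(10_000, bps) for name, bps in rates.items()}
# E*Trade: 50.0, Coinbase: 60.0, Schwab: 75.0, Robinhood: 95.0 (dollars)
```

So on that hypothetical trade, Morgan Stanley's 50 bps rate would cost $50, versus $95 at Robinhood's entry rate.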
Claude Managed Agents Can Engage In a 'Dreaming' Process To Preserve Memories
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developers' conference, Anthropic introduced what it calls "dreaming" to Claude Managed Agents. Dreaming, in this case, is a process of going over recent events and identifying specific things that are worth storing in "memory" to inform future tasks and interactions. Dreaming is a feature that is currently in research preview and limited to Managed Agents on the Claude Platform. Managed Agents are a higher-level alternative to building directly on the Messages API that Anthropic describes as a "pre-built, configurable agent harness that runs in managed infrastructure." It's intended for situations where you want multiple agents working on a task or project to some end point over several minutes or hours. Anthropic describes dreaming as a scheduled process, in which sessions and memory stores are reviewed, and specific memories are curated. This is important because context windows are limited for LLMs, and important information can be lost over lengthy projects. On the chat side of things, many models use a process called compaction, whereby lengthy conversations are periodically analyzed, and the models attempt to remove irrelevant information from the context window while keeping what's actually important for the ongoing conversation, project, or task. However, that process, as I described it, is usually limited to a specific conversation with a single agent. "Dreaming" is a periodically recurring process in which past sessions and memory stores can be analyzed across agents, and important patterns are identified and saved to memory for the future. Users will be able to choose between an automatic process and reviewing changes to memory directly.
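The compaction idea described above can be sketched as a toy pruning pass. This is not Anthropic's actual logic (which is unpublished); it is a minimal illustration assuming each context entry already carries an importance score:

```python
# Toy illustration of context compaction (not Anthropic's real algorithm):
# keep only the `keep` most important entries of a bounded context window,
# preserving their original order, and note how many entries were dropped.
def compact(context: list[tuple[str, int]], keep: int) -> list[str]:
    """context: (text, importance) pairs; retain the `keep` most important."""
    ranked = sorted(context, key=lambda item: item[1], reverse=True)
    kept = {text for text, _ in ranked[:keep]}
    out = [text for text, _ in context if text in kept]
    dropped = len(context) - len(out)
    if dropped:
        out.append(f"[{dropped} lower-importance item(s) compacted]")
    return out

history = [("user asked for a CSV parser", 9),
           ("small talk about the weather", 1),
           ("agreed on Python 3.12 as target", 8),
           ("typo correction", 2)]
print(compact(history, keep=2))
```

Per the article, "dreaming" differs from this per-conversation pruning mainly in scope: it runs on a schedule across sessions and agents, and writes curated results to a persistent memory store rather than just trimming one window.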
ReactOS Unifies Installation Media, Introduces GUI Installer and New ATA Driver
jeditobe writes: Developers of ReactOS told Phoronix that the project has introduced a unified BootCD, replacing its previously separate installation media and LiveCD images. The new image combines the traditional text-mode installer with a LiveCD mode in a single medium. Within this unified BootCD, the updated LiveCD mode now includes an option to launch a first-stage GUI installer. The graphical interface is intended to make installation more approachable for new users compared to the long-standing text-based setup process. In a separate development, the project has also merged a new ATA storage driver that has been in progress since early 2024. The plug-and-play aware storage stack supports SATA, PATA, ATAPI, AHCI, and even SCSI devices, potentially expanding the range of hardware on which ReactOS can successfully boot. Following recent improvements to graphics driver support, the project continues to make incremental progress across core subsystems, though its long development timeline remains a point of discussion. Will these usability and hardware compatibility improvements be enough to broaden ReactOS adoption beyond its current niche? Note that none of these new features are present in version 0.4.15; they are available for testing only in the latest nightly test builds.
Zuckerberg 'Personally Authorized and Encouraged' Meta's Copyright Infringement
Five major publishers and author Scott Turow have sued Meta and Mark Zuckerberg, alleging that Zuckerberg "personally authorized and actively encouraged" massive copyright infringement by using pirated books, journal articles, and web-scraped material to train Meta's Llama AI systems. Meta denies wrongdoing and says it will fight the case, arguing that courts have recognized AI training on copyrighted material as potentially fair use. Variety reports: "In their effort to win the AI 'arms race' and build a functional generative AI model, Defendants Meta and Zuckerberg followed their well-known motto: 'move fast and break things,'" the plaintiffs say in their lawsuit. "They first illegally torrented millions of copyrighted books and journal articles from notorious pirate sites and downloaded unauthorized web scrapes of virtually the entire internet. They then copied those stolen fruits many times over to train Meta's multibillion-dollar generative AI system called Llama. In doing so, Defendants engaged in one of the most massive infringements of copyrighted materials in history." The suit was filed Tuesday (May 5) in the U.S. District Court for the Southern District of New York by five publishers (Hachette, Macmillan, McGraw Hill, Elsevier and Cengage) and Turow individually. The proposed class-action suit seeks unspecified monetary damages for the alleged copyright infringement. A copy of the lawsuit is available at this link (PDF). [...] the latest lawsuit alleges that Meta and Zuckerberg deliberately circumvented copyright-protection mechanisms -- and had considered paying to license the works before abandoning that strategy at "Zuckerberg's personal instruction." The suit essentially argues that the conduct described falls outside protections afforded by fair-use provisions of the U.S. copyright code.
Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean
An anonymous reader quotes a report from Ars Technica: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world's oceans -- a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding "nodes" designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models' outputs to customers worldwide via satellite link. Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node's AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. "Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low," Lee said. "Land-based data centers use a lot of electricity and fresh water for cooling." The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London's Big Ben or New York City's Flatiron Building, according to the Financial Times. 
Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company's CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.
Microsoft Gives Up On Xbox Copilot AI
Microsoft is winding down Xbox Copilot on mobile and ending development of Copilot on console, reversing plans to bring the gaming-focused AI assistant to current-generation Xbox consoles this year. "The move follows [new Xbox CEO Asha Sharma's] reorganization of the Xbox platform team earlier on Tuesday, which added executives from Microsoft's CoreAI team -- where Sharma worked before taking over Xbox -- to the Xbox side of the company," reports The Verge. Sharma said in a post on X: Xbox needs to move faster, deepen our connection with the community, and address friction for both players and developers. Today, we promoted leaders who helped build Xbox, while also bringing in new voices to help push us forward. This balance is important as we get the business back on track. As part of this shift, you'll see us begin to retire features that don't align with where we're headed. We will begin winding down Copilot on mobile and will stop development of Copilot on console. Since taking over for former Microsoft Gaming CEO Phil Spencer in February, Sharma has scrapped the Microsoft Gaming brand and cut the price of Xbox Game Pass.
White House App Is a Terrifying Security Mess
New submitter spazmonkey writes: From a hidden GPS tracker polling your location every 4.5 minutes to JavaScript loaded from a random GitHub account, no SSL certificate pinning, and an in-app browser that silently strips cookie consent dialogs and paywalls from every page you visit, the new White House app seems to have a little bit of everything. A security researcher pulled the APK apart to discover the cybersecurity vulnerabilities. "The app is a React Native build using Expo SDK 54, with WordPress powering the backend through a custom REST API," reports Android Headlines. "That's pretty normal, as nearly 42% of all websites on the internet are powered by WordPress. But that's just the start; now the nightmare begins..." From the report: To start, the app has a full GPS tracking pipeline compiled in. Essentially, it's set to poll your location every 4.5 minutes in the foreground, and 9.5 minutes in the background. It's syncing latitude, longitude, accuracy, and timestamp data to OneSignal's servers. These location permissions aren't declared in the AndroidManifest, but they are hardcoded as runtime requests in the OneSignal SDK. Some have noted that the tracking only kicks in if the developer enables it server-side and the user grants permission, but it is there, ready to go. And it gets even stranger. Apparently, the app is loading JavaScript from a random person's GitHub site for YouTube embeds. Yes, you read that right, it's just loading JavaScript from a random GitHub site. So if that account ever gets compromised, arbitrary code could run inside the app's WebView. There's also no SSL certificate pinning, meaning that traffic can potentially be intercepted on compromised networks like sketchy public WiFi or corporate proxies. The app also injects JavaScript and CSS into every page you visit in the in-app browser. This strips away cookie consent dialogs, GDPR banners, login walls, and paywalls. 
There are also leftover dev artifacts in the production build, including a localhost URL to the Metro bundler.
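The missing SSL certificate pinning mentioned above means the app accepts any certificate a trusted CA will sign, including one minted by an interception proxy. A pinned client instead compares a digest of the presented certificate against a value baked into the app. A minimal sketch of that check (the certificate bytes and pin here are made-up placeholders, not real values):

```python
# Minimal sketch of certificate pinning. The "DER bytes" below are dummy
# placeholders; a real client would hash the certificate (or its public key)
# presented during the TLS handshake and compare it to a digest compiled
# into the app, rejecting the connection on any mismatch.
import hashlib

PINNED_SHA256 = hashlib.sha256(b"server-cert-der-bytes").hexdigest()  # baked in at build time

def connection_allowed(presented_cert_der: bytes) -> bool:
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_SHA256

assert connection_allowed(b"server-cert-der-bytes")          # genuine certificate
assert not connection_allowed(b"mitm-proxy-cert-der-bytes")  # interception cert rejected
```

Without this check, traffic on a hostile network (the "sketchy public WiFi or corporate proxies" scenario) can be decrypted by anyone who can get the device to trust their CA.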
CO2 Levels In the Atmosphere Hit 'Depressing' New Record
Atmospheric carbon dioxide hit a new record in April, averaging about 431 parts per million at NOAA's Mauna Loa Observatory. That's up from under 320 ppm when the site began measurements in 1958. Scientific American reports: Greenhouse gases, such as carbon dioxide, are measured as a proportion of the total atmosphere. The numbers are presented as the number of molecules of a particular gas out of a million total molecules, or ppm. Climate scientist Zachary Labe of Climate Central, a nonprofit that researches climate change, says the new record is "depressing" but not unexpected. "It's just another sign that carbon dioxide continues to increase in our atmosphere as our planet continues to warm," he says. "For many climate scientists, this is just 'here it is again, another record in the wrong direction.'" Labe explains that the amount of CO2 in the atmosphere tends to peak in April each year as decaying plants release greenhouse gases after winter. Some of that CO2 gets reabsorbed by plants as they grow during the warmer months. But NOAA's data show a worrying trend, with the average monthly amount of CO2 steadily increasing. [...] Although the amount of CO2 in the atmosphere has continued to rise, there was a reduction in U.S. emissions in 2023 and 2024. That trend, however, was reversed in 2025, at least partially because of the increased electricity demand from artificial intelligence data centers. Still, Labe says there are reasons for optimism as the use of renewable energy sources such as solar and wind expands.
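The two figures in the article imply a rough long-run growth rate, taking ~320 ppm in 1958 and ~431 ppm in 2025 as endpoints (a crude average; the actual rate has accelerated over the decades):

```python
# Rough average annual CO2 growth implied by the article's two data points:
# about 320 ppm when Mauna Loa measurements began (1958) and ~431 ppm in 2025.
start_ppm, start_year = 320, 1958
latest_ppm, latest_year = 431, 2025

avg_rise = (latest_ppm - start_ppm) / (latest_year - start_year)
print(f"~{avg_rise:.2f} ppm per year on average")  # ~1.66 ppm per year
```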
Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla
An anonymous reader quotes a report from CNBC: OpenAI President Greg Brockman concluded his testimony on Tuesday, where he largely rebutted Elon Musk's account of the early years of the startup and negotiations that occurred at the company. Brockman testified that he never made any commitments to Musk about the company's corporate structure, and he never heard anyone else make them. He emphasized that OpenAI is still governed by a nonprofit. "This entity remains a nonprofit," Brockman said, referring to the OpenAI foundation. "It is the best-resourced nonprofit in the world." [...] Brockman, who spoke from the witness stand in federal court in Oakland, California, over the course of two days, also revealed that Musk had enlisted several OpenAI employees to do months of free work for him at Tesla, Musk's electric vehicle company. That work mainly included efforts to overhaul the company's approach to developing self-driving technology as part of the Autopilot team there in 2017. During his two days on the stand, Brockman answered questions about his personal financial ambitions, his understanding of OpenAI's structure and Musk's involvement at the company, which they co-founded with other executives in 2015. In Musk's testimony last week, the Tesla and SpaceX CEO said that the time, money and resources he poured into OpenAI had been integral to the company's success. He repeatedly said that he helped recruit the company's top talent. Brockman said Tuesday that while Musk was helpful in convincing some employees to take the leap to join OpenAI, he was a polarizing figure for others. "Elon had a reputation of being an extremely hard driver," Brockman said. He added that "certain candidates were very attracted" by Musk's involvement at OpenAI, and that "certain candidates were very turned off." Musk testified last week that a former OpenAI researcher named Andrej Karpathy joined Tesla, but only after he had planned to leave the startup already. 
Brockman said that Musk, after he hired Karpathy, approached him with "an apology and a confession" about the hire, and that neither Musk nor Karpathy had told him the researcher planned to leave OpenAI before that. Musk was generally not very available for meetings and conversations, Brockman said, so he relied on employees, including Sam Teller and former OpenAI board member Shivon Zilis, as proxies. Brockman testified that open sourcing OpenAI's technology was "not a topic of conversation" during Musk's time with the nonprofit, despite Musk's claims that it was supposed to be central to the organization. He also described tense 2017 negotiations over a possible for-profit arm, saying Musk became angry when equity stakes were discussed. "He said Musk declined the proposal during an in-person meeting, then tore a painting of a Tesla Model 3 car off the wall, and began storming out of the room," reports CNBC. He also demanded to know when the cofounders would leave the company. Brockman further said Musk wanted control of OpenAI because he disliked situations where he lacked control, citing Zip2 and SolarCity as examples Musk had raised. He also testified that Musk partly wanted control to help fund his broader SpaceX ambition of building a "city on Mars." CNBC notes the trial will resume at 8:30 a.m. PT on Wednesday, with Shivon Zilis expected to testify. She is the mother of four of Musk's children and a former OpenAI board member.
Recap:
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Apple Agrees To Pay iPhone Owners $250 Million For Not Delivering AI Siri
Apple has agreed to a proposed $250 million settlement over claims that it misled iPhone buyers about the availability of Apple Intelligence and its upgraded Siri features. The settlement would cover U.S. buyers of the iPhone 16 lineup and iPhone 15 Pro models between June 10, 2024, and March 29, 2025. The Verge reports: The settlement will resolve a 2025 lawsuit alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance." Apple brought certain AI-powered features to the iPhone 16 weeks after its release, and delayed the launch of its more personalized Siri, which is now expected to arrive later this year. Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for Apple Intelligence. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Coinbase Lays Off Nearly 700 Workers In 'AI-Native' Restructuring
Coinbase is laying off about 700 workers, or 14% of its workforce, as CEO Brian Armstrong says the company is restructuring to become "lean, fast, and AI-native." Engadget reports: Armstrong claimed he'd seen engineers "use AI to ship in days what used to take a team weeks" and that non-technical teams in the company are "shipping production code," while Coinbase is automating many of its workflows. "All of this has led us to an inflection point, not just for Coinbase, but for every company," Armstrong wrote. "The biggest risk now is not taking action. We are adjusting early and deliberately to rebuild Coinbase to be lean, fast and AI-native. We need to return to the speed and focus of our startup founding, with AI at our core." An AI-driven restructuring is only one half of the equation for Coinbase, though. Armstrong wrote that while the company "is well-capitalized, has diversified revenue streams and is well-positioned to weather any storm," the crypto market is down. As such, Coinbase is attempting to become leaner and faster ahead of the next crypto cycle. The company is eliminating some management layers and organizing the business around "AI-native talent who can manage fleets of agents to drive outsized impact," Armstrong wrote. "We'll also be experimenting with reduced pod sizes, including 'one person teams' with engineers, designers and product managers all in one role." That sure sounds like an attempt to get workers to take on more responsibilities.
Google DeepMind Workers Vote To Unionize Over Military AI Deals
An anonymous reader quotes a report from Wired: Employees at Google DeepMind in London have voted to unionize as part of a bid to block the AI lab from providing its technology to the US and Israeli militaries. In a letter addressed to Google's managing director for the UK and Ireland, Debbie Weinstein, the workers asked the company to recognize the Communication Workers Union and Unite the Union as joint representatives for DeepMind employees. "Fundamentally, the push for unionization is about holding Google to its own ethical standards on AI, how they monetize it, what the products do, and who they work with," John Chadfield, national officer for technology at the CWU, tells WIRED. "Through the process of unionization, workers are collectively in a much stronger place to put [demands] to an increasingly deaf management." [...] The DeepMind employee tells WIRED that if the staff succeeds in unionizing in the UK, they will likely demand that Google pulls out of its long-standing contract with the Israeli military, and seek greater transparency over how its AI products will be used, and some sort of assurance relating to layoffs made possible by automation. If Google does not engage, the letter states, the employees will ask an arbitration committee to compel the company to recognize the unions. Since the turn of the year, both Anthropic and OpenAI have announced large-scale expansions of their operations in London. CWU hopes the unionization effort at DeepMind will spur workers at those labs into similar action. "These conversations are happening," claims Chadfield. "The workers at other frontier labs have seen what Google DeepMind workers have done. They've come to us asking for help as well." The unionization push began in February 2025 after Alphabet removed a pledge from its AI ethics guidelines that had barred uses such as weapons development and surveillance. 
"A lot of people here bought into the Google DeepMind tagline 'to build AI responsibly to benefit humanity,'" the DeepMind employee told WIRED. "The direction of travel is to further militarization of the AI models we're building here."
Moving To Mainframe Can Be Cheaper Than Sticking With VMware
Gartner says some VMware customers may find it cheaper to move certain Linux VM workloads to IBM mainframes than to adopt Broadcom's new VMware licensing, especially for fleets of hundreds of Linux VMs and mission-critical apps needing long-term stability. The Register reports: Speaking to The Register to discuss the analyst firm's mid-April publication, "The State of the IBM Mainframe in 2026," [Gartner Vice President Analyst Alessandro Galimberti] said some buyers in many fields are comparing mainframes to modern environments and deciding Big Blue's big iron comes out ahead. "I can build a multi-region cloud application, but things like data synchronization and high availability are things I need to build into application logic," he said. "The mainframe has that in the platform, which shields developers from complexity." He also thinks mainframes are ideally suited to workloads that need many years of transactional consistency and backward-compatibility. That said, Galimberti doesn't recommend the mainframe for all applications. He said mission-critical applications that are unlikely to change much for a decade are best-suited to the machines, as are Linux applications because the open source OS runs on IBM's hardware. IBM also offers the z/VM hypervisor, which he says can make Linux "even better and more enterprise-ready." Which is why Galimberti thinks IBM's ecosystem is attractive to VMware users, especially those who operate a fleet of 500 to 700 Linux VMs. [...] Committing to mainframes therefore means planning "to spend time negotiating price and renewal protections, rather than prioritizing the business value these solutions can deliver." Another downside is that mainframes pose clear lock-in risk, so users may hold back on useful customizations out of fear they make it harder to extricate themselves from the platform. Access to skills remains an issue, too, as kids these days mostly don't contemplate a career working with big iron. 
Galimberti sees more service providers investing in their mainframe programs, which might help. So does the availability of Linux. Read more of this story at Slashdot.
Kids Bypass Age Verification With Fake Moustaches
A new Internet Matters survey suggests the UK's Online Safety Act age checks are easy for many children to bypass. Reported workarounds include fake birthdays, borrowed IDs, video game characters, and even drawing on a fake mustache. The Register reports: The group surveyed over 1,000 UK children and their parents, and while it did report some positive effects from changes made under the OSA, many children saw age verification as an easy-to-bypass hurdle rather than something that kept them genuinely safe. A full 46 percent of children even said that age checks were easy to bypass, while just 17 percent said that they were difficult to fool. The methods kids use to fool age gates vary, but most are pretty simple: There's the classic use of a video game character to fool video selfie systems, while in other instances, children reported just entering a fake birthday or using someone else's ID card when that was required. The report even cites cases of children drawing a mustache on their faces to fool age detection filters. Seriously. While nearly half of UK kids say it's easy to bypass online age checks (and another 17 percent say it's neither hard nor easy), only 32 percent say they've actually bypassed them, according to Internet Matters. Like scoring some booze from "cool" parents, keeping age-gated content out of the hands of kids under the OSA is only as effective as parents let it be, and a quarter of them enable their kids' online delinquency. More specifically, Internet Matters found that a full 17 percent of parents admitted to actively helping their kids evade age checks, while an additional 9 percent simply turned a blind eye to it.Read more of this story at Slashdot.
US Government Warns of Severe CopyFail Bug Affecting Major Versions of Linux
An anonymous reader quotes a report from TechCrunch: A severe security vulnerability affecting almost every version of the Linux operating system has caught defenders off guard and left them scrambling to patch after security researchers publicly released exploit code that allows attackers to take complete control of vulnerable systems. The U.S. government says the bug, dubbed "CopyFail," is now being exploited in the wild, meaning it's being actively used in malicious hacking campaigns. [...] Given the risk to the federal enterprise network, U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15. Read more of this story at Slashdot.
Oscars Bans AI Actors and Writing From Awards
The Academy has clarified that only human-performed acting and human-authored writing are eligible for Oscar nominations. It will not ban AI tools broadly but says it will judge films based on the degree to which humans remain central to the creative work. The BBC reports: The Academy of Motion Picture Arts and Sciences [...], which controls the US film industry's most prestigious award, on Friday issued updated rules for what kind of work in movies and documentaries would be considered eligible for an Oscar as the use of artificial intelligence (AI) technology grows. In updated eligibility requirements, the Academy specified that acting must be "demonstrably performed by humans" and that writing "must be human-authored" in order to be nominated for an award. The Academy called the requirements a "substantive" change to the rules for the Oscars. The need to specify that awards can only go to acting and writing done by "humans" is new for the academy. [...] However, the academy did not issue a ban on AI use in films more broadly. Outside of acting and writing, if a filmmaker used AI tools in their work, such "tools neither help nor harm the chances of achieving a nomination," the academy wrote. "The Academy and each branch will judge the achievement, taking into account the degree to which a human was at the heart of the creative authorship when choosing which movie to award," the group added. "If questions arise regarding the aforementioned use of generative artificial intelligence, the Academy reserves the right to request more information about the nature of the use and human authorship." Read more of this story at Slashdot.
VS Code Update Added Copilot As Default Co-Author To Git Commits
Longtime Slashdot reader UnknowingFool writes: On April 15, 2026, a Microsoft employee made a change to Visual Studio Code and pushed it within 8 hours without review, notification, or documentation. The change appended "Co-authored-by: Copilot" by default to the end of Git commit messages when Copilot was used in creating the code. However, the implementation was bugged, and the message was added to every commit regardless of whether Copilot was used or even disabled. Because the message was appended automatically and the UI does not show it when making commits, users were unaware of it. The change has been reverted as of May 3, but not before 1.4 million commits were made. Unfortunately, those messages cannot be cleansed and are permanent. Read more of this story at Slashdot.
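Git records co-author credit as a plain "Co-authored-by:" trailer line at the end of the commit message, which is why the addition was invisible in the editor UI yet baked into every commit object once pushed. A minimal sketch of the trailer mechanics (function names are hypothetical illustrations, not VS Code's actual code):

```python
# Sketch of how a "Co-authored-by" commit-message trailer works.
# Helper names are hypothetical; this is not VS Code's implementation.

TRAILER = "Co-authored-by: Copilot"

def append_trailer(message: str) -> str:
    """Append the trailer unconditionally -- mirroring the reported bug,
    which added it whether or not Copilot was actually involved."""
    return message.rstrip("\n") + "\n\n" + TRAILER + "\n"

def has_copilot_trailer(message: str) -> bool:
    """Detect an affected commit message, e.g. while scanning git log output."""
    return any(line.strip().startswith(TRAILER)
               for line in message.splitlines())
```

Because trailers live inside the commit object itself, removing them after a push would require rewriting history, which is why the 1.4 million affected commits are effectively permanent.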
'Notepad++ For Mac' Release Is Disavowed By the Creator of the Original
An anonymous reader quotes a report from Ars Technica, written by Andrew Cunningham: As its name implies, the venerable Notepad++ text editor began as a more capable version of the classic Windows Notepad, with features such as line numbering and syntax highlighting. It was created in 2003 by Don Ho, who continues to be its primary author and maintainer, and it has been a Windows-exclusive app throughout its existence (older Notepad++ versions support OSes as old as Windows 95; the current version officially supports everything going back to Windows 7). I'm not a devoted user of the app, but I was aware of its history, which is why I was surprised to see news of a "Notepad++ for Mac" port making the rounds last week, as though it were a port of the original available from the Notepad++ website. Apparently, this news surprised Ho as well, who claims that the Mac version and its author, Andrey Letov, are "using the Notepad++ trademark (the name) without permission." "This is misleading, inappropriate, and frankly disrespectful to both the project and its users," Ho wrote. "It has already fooled people -- including tech media -- into believing this is an official release. To be crystal clear: Notepad++ has never released a macOS version. Anyone claiming otherwise is simply riding on the Notepad++ name." Ho repeatedly asked the developer to stop using the brand and eventually reported the trademark use to Cloudflare, the CDN of the Notepad++ for Mac site. "Every day that website remains active, you are in further violation of the law," Ho wrote. "I cannot authorize a 'week or two' of continued trademark infringement." Letov has since begun rebranding the app as "NextPad++," though the old branding and URL reportedly remained available. The name change is "an homage to NeXT Computer," notes Ars, "and uses a frog icon rather than the Notepad++ lizard." Read more of this story at Slashdot.
How Microplastics Are Likely Helping To Heat Up the Planet
A new Nature Climate Change study suggests airborne microplastics -- especially darker and colored particles -- are likely contributing to atmospheric warming by absorbing more heat than they reflect. Researchers estimate the effect could be roughly one-sixth that of black carbon, though outside experts say the uncertainties remain large and more study is needed before drawing firm policy conclusions. "We can say with confidence that overall they are warming agents," said Drew Shindell, a Duke University earth science professor and co-author of the study. "To me, that's the big advance." The Washington Post reports: To undertake their study, a group led by researchers at Fudan University in China examined how different colors and sizes of microplastics interact with light across the spectrum, while combining that information with simulations of how particles get dispersed in the air across the planet. "Black, yellow, blue and red [particles] absorb sunlight much more strongly than the white particles," Yu Liu, a Fudan professor and study co-author, said in a call with reporters. In fact, the study details how black and colored particles showed "absorption levels nearly 75 times higher than pristine, non-pigmented plastics." The scientists also found that different sizes of particles absorb light at different intensities -- and that how they absorb light can change as they age. The authors estimate that microplastics suspended in the atmosphere could be contributing to global warming at about one-sixth the amount of black carbon, also known as soot, a pollutant generated largely from burning fossil fuels. If the latest estimates are right, Shindell said, microplastics might not be an enormous source of atmospheric warming, compared with massive contributors such as cars and trucks, belching industrial plants or even burping cows. "But not a trivial one, either," he said. 
By his calculation, the effect of one year's microplastic emissions globally is approximately equivalent to 200 coal-fired power plants running for that year. But that rough estimate does not factor in the longer-term repercussions of microplastics decaying and persisting in the environment for decades to come. Whatever the exact impact, the topic deserves further study, the authors say, because current climate modeling does not account for any additional warming that these tiny particles might be causing. Read more of this story at Slashdot.
Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto
"The Associated Press is reporting on a new study in Nature Astronomy suggesting that a tiny, icy world beyond Pluto harbors a thin, delicate atmosphere that may have been created by volcanic eruptions or a comet strike," writes longtime Slashdot reader fahrbot-bot. From the report: Just 300 miles (500 kilometers) or so across, this mini Pluto is thought to be the solar system's smallest object yet with a clearly detected global atmosphere bound by gravity, said lead researcher Ko Arimatsu of the National Astronomical Observatory of Japan. This so-called minor planet -- formally known as (612533) 2002 XV93 -- is considered a plutino, circling the sun twice in the time it takes Neptune to complete three solar orbits. At the time of the study, it was more than 3.4 billion miles (5.5 billion kilometers) away, farther than even Pluto, the only other object in the Kuiper Belt with an observed atmosphere. This cosmic iceball's atmosphere is believed to be 5 million to 10 million times thinner than Earth's protective atmosphere, according to the the study [...]. It's 50 to 100 times thinner than even Pluto's tenuous atmosphere. The likeliest atmospheric chemicals are methane, nitrogen or carbon monoxide, any of which could reproduce the observed dimming as the object passed before the star, according to Arimatsu. Further observations, especially by NASA's Webb Space Telescope, could verify the makeup of the atmosphere, according to Arimatsu.Read more of this story at Slashdot.
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion
OpenAI president Greg Brockman's testimony dominated the fifth day of the trial for Elon Musk's lawsuit against the AI company. Brockman took the witness stand on Monday, disclosing that his stake in OpenAI is worth nearly $30 billion, despite not personally investing money in OpenAI. The judge also declined to admit a pretrial text in which Musk allegedly warned Brockman that he and Altman would become "the most hated men in America." From a report: Brockman's disclosure would put him on the Forbes list of the world's richest people, with wealth comparable to Melinda French Gates. [...] Late Sunday, OpenAI lawyers tried to admit as evidence a text message Musk sent to Brockman two days before the trial began. According to a court filing -- which did not include the actual text exchange -- Musk sent a message to Brockman to gauge interest in settlement. When Brockman replied that both sides should drop their respective claims, Musk shot back, according to the filing, "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." Judge Yvonne Gonzalez Rogers, who is overseeing the trial, did not admit the text exchange as evidence. Brockman acknowledged that he had promised to personally donate $100,000 to OpenAI's charity but never did. In explaining the delay, Brockman put the onus on Altman: "I asked Sam when I should donate this, and he said he would let me know," reports Business Insider. The first witness to testify on Monday was Stuart Russell, an artificial intelligence expert who teaches computer science at the University of California, Berkeley. "The most memorable part of Russell's testimony was when he talked about how much Musk's legal team paid him," notes Business Insider. "He received an eye-popping $5,000 per hour for 40 hours of preparatory work. Expert witnesses in high-profile cases typically make between $500 to $1,000 per hour." 
Recap: Musk Concludes Testimony At OpenAI Trial (Day Four); Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three); Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two); Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One). Read more of this story at Slashdot.
White House Considers Vetting AI Models Before They Are Released
The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic's powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports: In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said. The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself." The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. 
The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said. Read more of this story at Slashdot.
OpenAI, Google, and Microsoft Back Bill To Fund 'AI Literacy' In Schools
An anonymous reader quotes a report from 404 Media: A new, bipartisan bill introduced (PDF) by Democratic Senator of California Adam Schiff and endorsed by the biggest AI developers in the world -- including OpenAI, Google, and Microsoft -- would change the K-12 curriculum to shoehorn in "AI literacy," something that young people and teachers alike already hate in schools. The Literacy in Future Technologies Artificial Intelligence, or LIFT AI Act, would empower the new director of the National Science Foundation (NSF) to make grant awards "on a merit-reviewed, competitive basis to institutions of higher education or nonprofit organizations (or a consortium thereof) to support research activities to develop educational curricula, instructional material, teacher professional development, and evaluation methods for AI literacy at the K-12 level," the bill says. It defines AI literacy as using AI; specifically, "having the age-appropriate knowledge and ability to use artificial intelligence effectively, to critically interpret outputs, to solve problems in an AI-enabled world, and to mitigate potential risks." The bill is endorsed by the American Federation of Teachers, Google, OpenAI, Information Technology Industry Council, Software & Information Industry Association, Microsoft, and HP Inc. [...] The grant would support "AI literacy evaluation tools and resources for educators assessing proficiency in AI literacy," according to the bill. It would also fund "professional development courses and experiences in AI literacy," and the development of "hands-on learning tools to assist in developing and improving AI literacy." Most importantly for real-world implications, it would fund changing the existing curriculum "to incorporate AI literacy where appropriate, including responsible use of AI in learning."Read more of this story at Slashdot.
The Pixel 11 Could Be the Next Victim of the RAM Shortage
Google's Pixel 11 lineup could see RAM cuts or lower starting configurations because of the global memory shortage, with leaks suggesting the base model may drop from 12GB to 8GB while Pro models could add 12GB versions below the current 16GB tier. The Verge reports: There will be 16GB configurations available for each, but adding a lower-spec model could mean the 16GB version is getting a price hike. However, the silver lining is that the specs from MysticLeaks also include camera upgrades and brighter displays for the Pro models. The RAM shortage is pushing other phone makers, including Samsung, to raise prices, too. Read more of this story at Slashdot.
Expanded AMD HDMI 2.1 Support Is Coming To Linux
AMD is preparing expanded HDMI 2.1 support for Linux, following earlier delays after the HDMI Forum rejected an open source implementation of HDMI 2.1 as proprietary technology. As GamingOnLinux reports, AMD developer Harry Wentland submitted a patch series to the Linux kernel mailing list, noting that it brings "HDMI FRL support to the amdgpu display driver" and that "DSC is still being tested and will be sent out later." A forum post on Phoronix from an AMD driver developer also said "a full implementation will ultimately be available once the patches are ready and have completed compliance testing." Read more of this story at Slashdot.
The Audio Industry Is Grappling With the Rise of 'Podslop'
An anonymous reader quotes a report from Bloomberg's Ashley Carman: Welcome to the modern era of podcasting in which thousands of new shows are released into the world every day with a sizable portion likely being AI-generated. Figuring out exactly which ones fall into that growing category is becoming more difficult just as the industry is starting to take this issue seriously. In only the past month or so, Amazon launched a feature that explains a product by generating a quasi-podcast, complete with co-hosts talking to each other and taking questions from users. Shout out to Business Insider reporter Katie Notopoulos for spotting this (and, naturally, demoing it with an adult diaper rash-cream). Not long ago, Nicholas Thompson, chief executive officer of the Atlantic, noted "podslop" dominated his Spotify search results when he typed in the word "Sora." This was around the time that OpenAI shut down its user-generated, AI-content-only app. [...] All of which raises some big, difficult questions. For one, what should the listening platforms do about this incursion? As of right now, Apple Podcasts requires creators who generated a "material portion" of their show using AI to disclose it. The platform also bans misleading or deceptive content. Spotify hasn't published any specific guidelines around AI, though it maintains general rules around dangerous and misleading content. Where this conversation gets even trickier is when it comes to money. Many of these podcasts are hosted on at least one free service that allows programs to opt into their ad marketplace with zero barrier to entry, meaning these shows (and the hosting service) profit off every listen or download. Spreaker, a company owned by iHeartMedia, is the primary one to watch here. Though it tells users to disclose when they rely on AI, it still allows those shows to opt into its programmatic ad marketplace, which pays creators 60% of the revenue generated by the ads placed in their shows. 
It stands to reason that most of these thousands of shows don't reach many people. But in the aggregate, the ears and dollars could add up. Are the advertisers on board with being next to AI-generated content, some of which might be deemed "slop"? There's also the question of how to define "slop." Jackson of the Podcast Index and his co-host Adam Curry treat it as something listeners simply know when they hear it, while Alberto Betella, co-founder of RSS.com, defines it as "fully automated content with no human review." Jeanine Wright, co-founder of Inception Point, rejects the debate altogether: "The people still talking about slop are still making 6-7 jokes," she said. "It's still yesterday's conversation." Read more of this story at Slashdot.
Anthropic Nears $1.5 Billion AI Joint Venture With Wall Street Firms
Anthropic is reportedly nearing a roughly $1.5 billion joint venture with Blackstone, Goldman Sachs, Hellman & Friedman, and other Wall Street firms to sell AI tools to private-equity-backed companies. "The investors aim to create a company that acts as a consulting arm for Anthropic and helps teach businesses -- including the private-equity firms' portfolio companies -- how to incorporate AI across their operations," reports the Wall Street Journal. Anthropic, Blackstone, and Hellman & Friedman would each invest about $300 million, while Goldman would contribute around $150 million. Read more of this story at Slashdot.
GameStop Offers to Buy eBay for $56 Billion
GameStop has made an unsolicited $56 billion cash-and-stock offer to buy eBay (paywalled; alternative source), with CEO Ryan Cohen arguing he can turn the marketplace into a far larger Amazon competitor. "EBay should be worth -- and will be worth -- a lot more money," Cohen said in an interview. "I'm thinking about turning eBay into something worth hundreds of billions of dollars." The Wall Street Journal reports: Cohen said GameStop has a commitment letter from TD Bank to provide up to $20 billion in debt financing to help make a deal possible. GameStop delivered an offer letter to eBay on Sunday and released a copy of it following the Journal's report on the details of the bid. Cohen wrote in the letter to eBay Chairman Paul Pressler that GameStop started building its eBay position on Feb. 4. It said its offer consists of 50% cash and 50% GameStop shares. EBay said Monday morning its board and financial advisers would review GameStop's unsolicited proposal. It said there were no discussions with or outreach from GameStop before receiving the offer. Ebay added that it will review the offer "with a focus on the value to be delivered to eBay shareholders, including the value of the GameStop stock consideration and the ability of GameStop to deliver a binding, actionable proposal." If eBay isn't receptive, Cohen said he was prepared to run a proxy fight and take the offer directly to its shareholders. The window for shareholders to nominate director candidates at eBay ahead of an annual meeting scheduled for this June has already closed, according to the company's proxy materials. Cohen told the Journal that putting his videogame retailer and eBay under one roof could create opportunities to cut costs and improve earnings. The two companies have some overlap already, including a focus on selling collectibles such as trading cards. 
"There is nobody who is more qualified, based on my experience, to run the eBay business," Cohen said, referencing his time at GameStop and previously Chewy, the online pet-products marketplace he co-founded.Read more of this story at Slashdot.
Scientists Discover 27 Potential New Planets That Orbit Two Stars
Astronomers have identified 27 potential new circumbinary planets -- worlds that orbit two stars, like Star Wars' Tatooine. "To date, only about 18 circumbinary planets ... had been identified in the universe," reports the Guardian. "More than 6,000 planets have been discovered that orbit single stars, like Earth does around the sun." The Guardian reports: In a timely publication for May 4, also known as Star Wars Day, scientists have identified nearly 30 more candidate planets, whose distances range from 650 to 18,000 light years away from Earth. [...] More than half of the stars in the universe exist in binary or multiple star systems. The researchers instead used a method known as "apsidal precession," searching for a wobble between stars that orbit around and eclipse each other. "If we monitor the exact timing of these eclipses ... that can tell us that there's something else going on in the system," said Margo Thornton, the study's lead author and a PhD candidate at UNSW. After eliminating other factors such as the rotation and gravitational pull of the two stars, the team identified 36 star systems out of 1,590 whose behavior could only be explained by a third body. For "27 of those objects, it is possible that they are planet mass," Thornton said. More research into their spectra -- the light they emit -- was needed to formally confirm them as circumbinary planets, she said. "It's just a matter of: what is the mass of it? Is it a planet? Is it a brown dwarf? Is it a star?" The team discovered the potential planets -- which likely range from Neptune-sized to ten times heavier than Jupiter -- using data from Nasa's Transiting Exoplanet Survey Satellite, a planet-hunting space telescope that launched in 2018. The research was published in the Monthly Notices of the Royal Astronomical Society.Read more of this story at Slashdot.
Infrasound Waves Stop Kitchen Fires, But Can They Replace Sprinklers?
An anonymous reader quotes a report from Ars Technica: In a makeshift demonstration kitchen in Concord, California, cooking oil splatters in and around a frying pan, which catches fire on an unattended gas stove. Within moments, a smoke detector wails. But in this demonstration, something less common happens: An AI-driven sensor activates and wall emitters blast infrasound waves toward the source of the fire in an attempt to put it out. The science of acoustic fire suppression, which has long been known and documented in scientific literature and the press, works by vibrating oxygen molecules away from a fuel source, depriving the fire of a critical component needed for combustion. Indeed, after just a few seconds of infrasound, the tiny kitchen blaze goes out. "We were able to not just point-and-shoot like a fire extinguisher; we figured out how to run it through ducting and distribute it like a sprinkler system," said Geoff Bruder, co-founder and CEO of Sonic Fire Tech, during the presentation. The company's goal is to replace sprinklers, which are effective at stopping fires but can also do significant water damage to a property. Sonic Fire Tech appears to be the first company trying to commercialize the science of acoustic fire suppression. Its executives have already been touring Southern California; Wednesday's event was the first in the northern half of the state. The company aims to make this infrasound technique mainstream in both commercial (for instance, a data center, where sprinklers would damage electronics) and in-home installations, given that sprinklers are already required in all new California homes built in 2011 and later. Sonic Fire Tech also hopes to produce a backpack-based system that could be worn by wildland firefighters headed out into the field. "We are making meaningful technological improvements on a monthly basis," Stefan Pollack, a company spokesperson, emailed Ars after the event. 
But two experts who spoke with Ars raised serious questions about the potential for this technology to supplant traditional sprinklers in a home. They are even more skeptical as to whether the technique can be effective in an uncontrolled wildfire situation, where flames can grow very quickly. Experts are concerned that infrasound may knock down small flames but does not cool hot surfaces or wet fuel like sprinklers do, which raises the risk of re-ignition, smoldering fires, hidden fires, or blocked fires. Sonic Fire Tech has claimed third-party validation and possible NFPA 13D equivalency, but it has not publicly released full testing details. Fire officials and outside observers also want more information about reliability, maintenance, calibration, and how system failures would be detected and communicated. Read more of this story at Slashdot.
16% of Parents Help Their Children Bypass Online Age Checks, Study Finds. One 15-Year-Old Just Uses a Fake Moustache
The Independent reports that "more than a third of children in the UK have found a way around age verification measures" for social media sites and other online platforms. And new research from online safety organisation Internet Matters "suggests one in six parents have helped their child to get past age verification checks, with children reporting 'tricking' platforms into thinking they are older." Parents also said they had caught their children drawing on facial hair in a bid to evade the technology. One mother said: "I did catch my son using an eyebrow pencil to draw a moustache on his face, and it verified him as 15 years old"... From a sample of 1,000 UK children, 46% said they believed age checks are easy to bypass, while 32% admitted to having done so. 49% of the children surveyed said they'd still encountered harmful content, according to the online safety activists. The group called the figure "unacceptable," and complained that age verification measures "are often ineffective in practice or easy to bypass." Read more of this story at Slashdot.
Can Investors Trust AI Sales Figures? Asks Wall Street Journal Opinion Piece
A Wall Street Journal opinion piece warns of "a troubling trend" in AI's growth. "Rather than selling software, some AI companies are paying their partners to use it." It cites OpenAI's $1.5 billion joint venture with private-equity firms, Anthropic's $200 million contribution to a private-equity firm joint venture, and Google's $750 million subsidization of Gemini's adoption by consulting firms. "These agreements muddy the distinction between a company's sound growth trajectory and artificial financial engineering." [T]he scale and structure of the recent AI deals go beyond standard incentive mechanisms... When a seller pays customers to buy its products, it is unclear if its revenue growth reflects vibrant demand or a willingness to accept subsidies. Slashdot reader destinyland writes: This warning comes from a prominent figure in the investing community. For six years Robert Pozen was chairman of America's oldest mutual fund company, after five years at Fidelity. An advocate for corporate governance, he's currently a lecturer at MIT's business school (and the author of the book Remote Inc.: How to Thrive at Work...Wherever You Are). "As AI companies prepare initial public offerings, investors should scrutinize their numbers closely," Pozen writes, warning about "time-limited financial support". "In evaluating AI sales figures, analysts should consider the distorted incentives that the recent financing deals create," writes Pozen: Private-equity firms, enticed by promised returns, might demand rapid rollouts of AI products, rather than ensuring their orderly and safe development. Portfolio companies of private-equity firms may embrace AI tools not because they are needed but because adoption is mandated by their owners. Consultants may favor one set of AI models based on the subsidy instead of the merits. If guarantees and subsidies are major factors in the rapid adoption of AI tools, investors should be skeptical of AI companies' revenue projections. 
Many of their customers enticed by consultants will stop paying full price when the financial incentives are gone. Many of the portfolio companies of private-equity firms could back away from selected AI tools once these joint ventures expire. The challenge with evaluating these AI financing deals is the lack of transparency. At present, AI vendors don't separate revenue driven by subsidies or joint ventures from standard sales. The lesson from the telecom debacle is that financial engineering can obscure, for years, the difference between real customer demand and demand driven by incentives. When AI companies begin to finance their own product distribution, guaranteeing returns to investors and subsidizing sales, it's a signal for investors to dig deeper.

Investing in an AI company? Ask what percentage of enterprise revenue is coming from subsidized channels or joint ventures, Pozen suggests. And the renewal/retention rate for customers not supported by subsidies or joint ventures...

Read more of this story at Slashdot.
Roblox Blames Age-Verification Rollout for Lowered Growth. Stock Tumbles 22%
Age verification became mandatory for chat access on Roblox in January - and Friday morning Quartz reported it's apparently impacted the company's financials:

Roblox cut its full-year 2026 bookings forecast by roughly $900 million at the midpoint on Thursday, blaming stronger-than-expected headwinds from its mandatory age-verification rollout on an audience that skews heavily toward children and teenagers. Full-year 2026 bookings are now projected at $7.33 billion to $7.60 billion, a range that sits roughly $900 million below the prior guidance of $8.28 billion to $8.55 billion; analysts had expected $8.38 billion, according to Yahoo Finance. Roblox stock fell almost 22% in premarket trading.... Daily active users rose 35% year over year to 132 million, while hours engaged climbed 43% to 31 billion hours... Daily active users and hours engaged fell below forecasts of 143.8 million and 33.68 billion, respectively, according to Yahoo Finance... Users who have not completed age checks have faced restricted communication features, and the process has weighed on the platform's ability to bring in new users. Russia's blocking of the platform, which took effect in December 2025, added further drag on user growth, according to Yahoo Finance. As of the end of the first quarter, 51% of global daily active users had completed age verification, with 65% of U.S. users having done so, Roblox said.... The safety push has come with legal costs. Roblox accrued $57 million in the first quarter for settlements and settlement proposals with certain states over youth-related consumer protection and digital safety matters, with payments structured over multiple years, the company said. Roblox acknowledged in a letter to shareholders that "our aggressive push to enhance safety lowers our expectations for topline growth in 2026."
But the company argued that it also "makes our platform fundamentally better and amplifies the long-term growth potential of Roblox through more effective content targeting, tailored communication experiences, and improved community sentiment."

Read more of this story at Slashdot.
NetHack 5.0 Released
"So yesterday the Devteam (it is always the Devteam) released version 5.0 of legendary and venerable roguelike computer game NetHack," writes the Rogue-like games column @Play. "It is 39 years old..."

MilenCent (Slashdot reader #219,397) writes: In addition to play changes it's left for players to discover, this version updates the code to compile with C99, makes it much easier to cross-compile the code for systems other than the one running, and now uses Lua for its dungeon generation. Happy hacking!

For new players, "Nethack 5.0 now has an optional tutorial in the early phases of the game that might help you," notes the Rogue-like games column @Play:

Binaries are provided for three systems: Windows, MS-DOS and Amiga. Yes, Nethack still supports MS-DOS, and yes, it still supports classic Amiga: it explicitly supports AmigaDOS 3.0, meaning it can still run on 68000 machines... That these are the only systems they provide binaries for shouldn't be seen as an indication that these are the "most important" platforms for Nethack; it's more that, since it's entirely open source, building it yourself is entirely possible, and more expected than with most software. Nethack can be built for Linux, Windows 8-11, AmigaDOS, MacOS (I'm not sure if this includes classic Mac too but it might), Windows CE (wow), OS/2 (additional wow), BeOS, VMS and multiple Unixes... Another option is to play through public Nethack servers. The most popular of these are probably alt.org and Hardfought.

Read more of this story at Slashdot.
OpenAI Introduces AI-Generated Pets for Its Codex App
"Vibe coding just got a whole lot more adorable," writes Engadget:

OpenAI introduced AI-generated pets to the Codex app, its agentic tool that helps with coding. These "optional animated companions" don't do any coding themselves, but serve as a floating overlay that can tell you what Codex is working on, notify you when Codex completes a task, or flag when it needs your input on something. The new feature lets developers see Codex's active thread without having to switch away from their current open app.

"The feature ships with eight built-in variations - including a cat and dog," reports Mashable. "But the more interesting play is the custom pet creator." Users can prompt Codex directly to generate their own companion, then share it online. A quick scroll through the homepage reveals the community has already gotten to work. Current creations include Goku, Patrick Star, Microsoft's long-retired Clippy, OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and - naturally - a goblin. There's also Grogu, Dobby, a tiny Bob Ross, and a "Doge-style Shiba Inu dog"...

Read more of this story at Slashdot.
AI Cameras are Being Deployed Across the Western US for Early Detection of Wildfires
The Associated Press reports:

On a March afternoon, artificial intelligence detected something resembling smoke on a camera feed from Arizona's Coconino National Forest. Human analysts verified it wasn't a cloud or dust, then alerted the state's forest service and largest electric utility. One of dozens of AI cameras installed for the utility Arizona Public Service had spotted early signs of what came to be known as the Diamond Fire. Firefighters raced to the scene and contained the blaze before it grew past 7 acres (2.8 hectares).

As record-breaking heat and an abysmal snowpack raise concerns about severe wildfires, states across the fire-prone West are adding AI to their wildfire detection toolbox, banking on the technology to help save lives and property. Arizona Public Service has nearly 40 active AI smoke-detection cameras and plans to have 71 by summer's end, and the state's fire agency has deployed seven of its own. Another utility, Xcel Energy in Colorado, has installed 126 and aims to have cameras in seven of the eight states it serves by year's end... ALERTCalifornia is a network of some 1,240 AI-enabled cameras across the Golden State that works similarly to the system in Arizona.... Pano AI, whose technology combines high-definition camera feeds, satellite data and AI monitoring, has seen growing interest in its cameras since launching in 2020. They've been deployed in Australia, Canada and 17 U.S. states, including Oregon, Washington and Texas... Last year, its technology detected 725 wildfires in the U.S., the company said... Cindy Kobold, an Arizona Public Service meteorologist, said the technology notifies them about 45 minutes faster on average than the first 911 call.

Read more of this story at Slashdot.
Carbon Pollution Is Making Food Less Nutritious, Risking the Health of Billions
A new meta-analysis found nutrients in food have decreased over the last 40 years, reports the Washington Post. "Many of humanity's most important crops - including wheat, potatoes, beans - contain fewer vitamins and minerals than they did a generation ago." "The invisible culprit behind this damaging phenomenon? Carbon dioxide pollution."

Surging concentrations of carbon in the atmosphere, caused largely by burning fossil fuels, have produced potent changes in the way plants grow - from increasing their sugar content to depleting essential nutrients like zinc... "The diets we eat today have less nutritional density than what our grandparents ate, even if we eat exactly the same thing," said Kristie Ebi, a professor at the University of Washington's Center for Health and the Global Environment.

People in wealthy countries with strong health care systems will have many tools to cope with the change, experts said. But for the world's poorest and most vulnerable, the consequences could be devastating. One study concluded that by the middle of the century the phenomenon could put more than a billion additional women and children at risk of iron-deficiency anemia - a condition that can cause pregnancy complications, developmental problems and even death. Meanwhile, some 2 billion people across the globe who already suffer from some form of nutrient shortage could see their health problems grow even worse. "The scale of the problem is huge," Ebi said.

Plants depend on carbon dioxide to perform photosynthesis - but that doesn't mean they grow better when there's more carbon in the air, scientists say. A sweeping survey of changes among 32 compounds in 43 crops found that nearly every plant that humans eat is harmed by rising CO2 levels... On average, they found, nutrients have already decreased by 3.2 percent across all plants since the late 1980s, when the concentration of carbon dioxide in the atmosphere was about 350 parts per million.
Thanks to long-time Slashdot reader GameboyRMH for sharing the news.

Read more of this story at Slashdot.
Robots Are Building Clay Homes In Texas Using Dirt From the Ground
A startup south of Austin is using robots to build homes out of clay pulled directly from the ground, reports a local news station:

The materials are gathered on site, mixed, and placed on a build plate. From there, a robot lowers from above, picks up the clay with a claw, carries it to the wall and drops it into place. Later, the same robot switches tools, using a hammer attachment to pound the material into shape. "It's kind of trying to replicate how a human might build an adobe house," said software engineer Anastasia Nikoulina... Using machine learning, the system constantly evaluates the wall, adjusting how it builds to create a flat, solid surface... The project is underway at Proto-Town, a ranch between Lockhart and Luling where startups test new technologies, from anti-drone systems to nuclear reactors. The company plans to build its next home on the property, with hopes to do more than 20 homes over the next year.

Read more of this story at Slashdot.
It's Goodbye Time for Jeeves and Ask.com - Relics of Yesterday's Internet
A 1999 press release bragged "Jeeves" answered 92.3 million questions in just three months. "In the digital wilds of Y2K, we came to him with our most probing questions," remembers the New York Times - whether it was Britney Spears or tamagotchis:

We asked, and he answered: Jeeves, the digital butler of information, the online valet who led us into the depths of cyberspace. Now, like so many other relics of yesterday's internet, Jeeves - and his home, Ask.com - are no more. After almost 30 years, the question-and-answer service and former search engine shuttered on Friday. "To you - the millions of users who turned to us for answers in a rapidly changing world - thank you for your endless curiosity, your loyalty, and your trust," the company said in a notice posted on its now-defunct website...

Created in Berkeley, Calif., in the days of the dot-com gold rush, Ask Jeeves first appeared on computer screens in 1996.... The site's mascot, Jeeves, was modeled on the clever English butler character from the famed P.G. Wodehouse book series. Its search function was simple - type in a question, get an answer. But the quality of its responses was uneven, and the website was quickly eclipsed by Google and Yahoo as the world's go-to search engines. The site was bought by InterActive Corp. for more than $1 billion in 2005, and was given an injection of cash to help it compete as a search engine. It rebranded as Ask.com and, as part of the reimagining, the site also ditched the character of Jeeves in 2006. Scrappy but inventive, the site was one of the first to introduce hyperlocal map overlays to its searches and incorporate thumbnails of webpages. "They are doing a lot of clever and interesting things," a Google executive noted of Ask.com at the time. Still, Ask.com struggled to compete and returned in 2010 to its bread and butter: question-and-answer style prompts.
Even then, it faltered against newer, crowdsourced iterations like Quora and Google's unyielding march to the internet fore - the platform now dominates search traffic, and the world's general experience of the internet.

A statement at Ask.com ends "by thanking its millions of users, and saying, 'Jeeves' spirit endures'," notes this article from Engadget:

As sad as it is to see a relic of the early Internet days fade into obscurity, we still have Ask Jeeves to thank for why some users still punch in full questions when querying Google. On top of that, Jeeves was built to provide detailed answers in natural language, which could have arguably acted as a precursor to today's AI chatbots like ChatGPT.

"Now, Ask.com joins the Internet graveyard that includes competitors like AltaVista, which shut down in 2013," the article points out. "With Ask.com gone, alongside AIM and AOL dial-up services also sunsetting, we're truly coming to an end of a specific era of the Internet." And the New York Times argues the memory of Jeeves now rests somewhere between Limewire and Beanie Babies...

Slashdot reader BrianFagioli calls it "a quiet reminder of how quickly the web moves, and how even widely recognized names can drift into obscurity once the underlying technology leaves them behind."

Read more of this story at Slashdot.
Former Nintendo Executive Says Amazon Once Requested 'Illegal' Price Discounts
Amazon once tried to pressure Nintendo to break the law, says former Nintendo of America President Reggie Fils-Aime. At a recent NYU lecture, he described a conversation with an Amazon executive, Kotaku reports:

"Amazon was looking to get bigger into the video game space," said Fils-Aime. "Amazon's mentality back then is they wanted to have the lowest price out in the marketplace, even lower than Walmart... Essentially what Amazon wanted (was an) obscene amount of support, financial support, so they could have the lowest price and beat Walmart. I literally said to the executive, 'You know that's illegal, right? I can't do that'...."

At the time, the Wii and DS were Nintendo's best-selling hardware in history. Amazon originally sold books, but in the 2000s rapidly expanded with cheaper discounts to become a one-stop shop for almost everything. Everything except Nintendo, that is.... "Literally we stopped selling to Amazon," Fils-Aime continued, "and it's because I wasn't going to do something illegal. I wasn't going to do something that would put at risk the relationship we have with other retailers."

"The two sides have since made amends," notes the Verge, "and you can buy a Switch 2 through Amazon. But for a long time, Nintendo consoles had been largely unavailable on the site."

Read more of this story at Slashdot.
ChatGPT Became So Obsessed With Goblins That OpenAI Had to Intervene
The Wall Street Journal reports that OpenAI "recently gave its popular ChatGPT strict instructions. Stop talking about goblins."

Recent models of the artificial-intelligence chatbot have been bringing up the creatures in conversations with users seemingly out of the blue, as well as gremlins, trolls and ogres. The goblin-speak caught the attention of programmers, who are often heavy users of the bot. Barron Roth, a 32-year-old product manager at a tech company, said the bot referred to a flaw in his code as a "classic little goblin." He said he counted more than 20 times it mentioned goblins, without any prompting... Several users speculated that goblin terminology was how the model characterized itself, in lieu of identifying as a person with a soul.

Then OpenAI decided enough was enough. "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query," reads an open-source line in ChatGPT's base instructions for its coding assistant. The Journal calls this "a reminder that even as AI companies tout one advance after another in their technology, they are sometimes baffled by the things their own models do...."

While training a "nerdy" personality for their model's customization feature, "We unknowingly gave particularly high rewards for metaphors with creatures," OpenAI explained in a blog post. And "From there, the goblins spread."

When we looked, use of "goblin" in ChatGPT had risen by 175% after the launch of GPT-5.1, while "gremlin" had risen by 52%... With GPT-5.4, we and our users noticed an even bigger uptick in references to these creatures... Nerdy accounted for only 2.5% of all ChatGPT responses, but 66.7% of all "goblin" mentions in ChatGPT responses... The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them.
Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.

It all started because the "nerdy" personality's prompt had said "You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed..." Now OpenAI calls this "a powerful example of how reward signals can shape model behavior in unexpected ways, and how models can learn to generalize rewards in certain situations to unrelated ones."

But "fans of goblins don't have to fear," notes the Wall Street Journal. "OpenAI provided a command in its blog post that would remove its creature-suppressing instructions."

Read more of this story at Slashdot.
South Africa's Draft AI Policy Withdrawn Due to 'Fictitious' AI-Generated Citations
An official in South Africa withdrew a draft of the country's national AI policy, reports a local newspaper, "after it was found the draft policy was compiled using AI, which cited academic articles that were 'fictitious'."

Earlier this month, minister in the Presidency Khumbudzo Ntshavheni announced cabinet had approved the draft policy for public comment. [Ntshavheni] said the policy seeks to strengthen government's ability to regulate and adopt AI responsibly, while fostering innovation, job creation, and skills access.

The article includes this quote from the country's minister of communications and digital technologies: "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical."

Thanks to Slashdot reader Tokolosh for sharing the article.

Read more of this story at Slashdot.
Ransomware Is Getting Uglier As Cybercriminals Fake Leaks and Skip Encryption Entirely
"Ransomware activity jumped again in Q1 2026," writes Slashdot reader BrianFagioli, "with 2,638 victim posts on leak sites, up 22% year over year," according to a report from cybersecurity company ReliaQuest.

But the bigger shift is how messy the ecosystem has become. Established groups like Akira and Qilin are still active, while newer players like The Gentlemen surged into the top tier with a 588 percent spike in activity. At the same time, questionable leak sites such as 0APT and ALP-001 are muddying the waters by posting possibly fake breach claims, forcing companies to investigate incidents that may not even be real.

Meanwhile, actors like ShinyHunters are showing that ransomware does not always need encryption anymore. By targeting identity systems and SaaS platforms, attackers can steal data using legitimate access, often through phishing or even phone-based social engineering, and then extort victims without deploying traditional malware. With a record 91 active leak sites and faster attack timelines, the report suggests defenders should focus less on tracking specific groups and more on stopping common tactics like credential theft, remote access abuse, and large-scale data exfiltration.

Read more of this story at Slashdot.
Smuggled Starlink Terminals are Beating Iran's Internet Blackout
An anonymous reader shared this report from the BBC:

"If even one extra person is able to access the internet, I think it's successful and it's worth it," says Sahand. The Iranian man is visibly anxious, speaking to the BBC outside Iran, as he carefully explains how he is part of a clandestine network smuggling satellite internet technology - which is illegal in Iran - into the country. Sahand, whose name we have changed, fears for family members and other contacts inside the country. "If I was identified by the Iranian regime, they might make those I'm in touch with in Iran pay the price," he says.

For more than two months, Iran has been in digital darkness as the government maintains one of the longest-running national internet shutdowns ever recorded worldwide... Sahand says he has sent a dozen [Starlink terminals] to Iran since January and "we are actively looking for other ways to smuggle in more". The human rights organisation Witness estimated in January that there are at least 50,000 Starlink terminals in Iran. Activists say the number is likely to have risen...

Last year, the Iranian government passed legislation that made using, buying or selling Starlink devices punishable by up to two years in prison. The jail term for distributing or importing more than 10 devices can be up to 10 years. State-affiliated media has reported multiple cases of people being arrested for selling and buying Starlink terminals, including four people - two of them foreign nationals - arrested last month for "importing satellite internet equipment".

"The BBC contacted SpaceX for more details about the use of Starlink in the country but did not receive a response."

Read more of this story at Slashdot.
Claude, Microsoft Copilot Fail Again to Predict the Winners of the Kentucky Derby
In 2016 an online "swarm intelligence" platform generated a correct prediction for the Kentucky Derby - naming all four top finishers in order. (But its 2017 predictions weren't even close.) Slashdot checked in again on how modern AI systems performed in 2023, 2024, and 2025 - but their predictions were still pretty bad. Would AI-generated Derby predictions be any better in 2026?

This year's winner was 24-to-1 longshot "Golden Tempo" - though a lot of oddsmakers had favored a horse named Further Ado (which ultimately only finished 11th). So when USA Today prompted Microsoft Copilot for its own picks for the Kentucky Derby, Copilot also went with Further Ado. (Even worse, it predicted Golden Tempo would come in... 13th.) Here's how Copilot's picks actually performed...

1. Further Ado (finished 11th)
2. Chief Wallabee (finished 4th)
3. The Puma (SCRATCHED)
4. Renegade (finished 2nd)
5. Commandment (finished 7th)
6. So Happy (finished 9th)
7. Emerging Market (finished 10th)
8. Danon Bourbon (finished 5th)
9. Potente (finished 12th)
10. Incredibolt (finished 6th)
11. Robusta (finished 14th)
12. Ocelli (finished 3rd)
13. Golden Tempo (finished 1st)
14. Pavlovian (finished 18th)
15. Great White (SCRATCHED)
16. Wonder Dean (finished 8th)
17. Litmus Test (finished 17th)
18. Albus (finished 15th)
19. Six Speed (finished 13th)
20. Intrepido (finished 16th)

Copilot was told to use the latest odds, conditions, and analysis of favorites, best bets, expert picks, previous results and race history with the post positions, according to USA Today.

And meanwhile, Yahoo Sports asked Claude "to simulate the race using the opening odds, draw and potential track conditions. We also asked it to factor in some human predictions." Like Microsoft Copilot, Claude also picked Further Ado to finish first (though it came in 11th) - and predicted that Golden Tempo (the eventual first-place finisher) would finish 12th.
1. Further Ado (finished 11th)
2. The Puma (SCRATCHED)
3. Commandment (finished 7th)
4. Chief Wallabee (finished 4th)
5. Renegade (finished 2nd)
6. Emerging Market (finished 10th)
7. So Happy (finished 9th)
8. Incredibolt (finished 6th)
9. Danon Bourbon (finished 5th)
10. Potente (finished 12th)
11. Pavlovian (finished 18th)
12. Golden Tempo (finished 1st)
13. Litmus Test (finished 17th)
14. Albus (finished 15th)
15. Wonder Dean (finished 8th)
16. Six Speed (finished 13th)
17. Intrepido (finished 16th)

Read more of this story at Slashdot.