Theia crashed into Earth and formed the Moon, the theory goes. But then where did Theia come from? The lead author on a new study says "The most convincing scenario is that most of the building blocks of Earth and Theia originated in the inner Solar System. Earth and Theia are likely to have been neighbors." Though Theia was completely destroyed in the collision, scientists from the Max Planck Institute for Solar System Research led a team that was able to measure the ratio of tell-tale isotopes in Earth and Moon rocks, Euronews explains: The research team used rocks collected on Earth and samples brought back from the lunar surface by Apollo astronauts to examine their isotopes. These isotopes act like chemical fingerprints. Scientists already knew that Earth and Moon rocks are almost identical in their metal isotope ratios. That similarity, however, has made it hard to learn much about Theia, because it has been difficult to separate material from early Earth and material from the impactor. The new research attempts a kind of planetary reverse engineering. By examining isotopes of iron, chromium, zirconium and molybdenum, the team modelled hundreds of possible scenarios for the early Earth and Theia, testing which combinations could produce the isotope signatures seen today. Because materials closer to the Sun formed under different temperatures and conditions than those further out, those isotopes exist in slightly different patterns in different regions of the Solar System. By comparing these patterns, researchers concluded that Theia most likely originated in the inner Solar System, even closer to the Sun than the early Earth. The team published their findings in the journal Science. Its title? "The Moon-forming impactor Theia originated from the inner Solar System."Read more of this story at Slashdot.
In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum ECC. Bernstein is of the opinion that "Given how many post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC." But he says he's seen it playing out differently: By 2013, NSA had a quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is quietly using stronger cryptography for the data it cares about, but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by building up the market for, e.g., 40-bit RC4 and 512-bit RSA and Dual EC. I looked concretely at what was happening in IETF's TLS working group, compared to the consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of objections that weren't handled properly. ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday. Bernstein also shares concerns about how the Internet Engineering Task Force is handling the discussion, and argues that the document is even "out of scope" for the IETF TLS working group: This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks. He's also concerned about how that discussion is being handled: On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"... I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days. Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.Read more of this story at Slashdot.
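Bernstein's "double encryption" argument is easier to see in code. Below is a minimal, hypothetical Python sketch (not from his post, and not the real TLS key schedule) of how a hybrid handshake can feed both a classical ECDH secret and a post-quantum KEM secret into one key derivation, so that an attacker would have to break both primitives to recover the session key; deployed hybrids such as X25519 combined with ML-KEM follow the same basic principle. The random placeholder secrets stand in for the outputs of the real key-exchange steps.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 HKDF-Extract using SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF-Expand using SHA-256 (single block, so length <= 32)."""
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Illustrative stand-ins: in a real handshake these would come from an ECDH
# (e.g. X25519) exchange and a post-quantum KEM (e.g. ML-KEM) encapsulation.
ecc_shared_secret = os.urandom(32)   # pre-quantum contribution
pq_shared_secret = os.urandom(32)    # post-quantum contribution

# Hybrid derivation: both secrets are concatenated before key derivation,
# so the session key stays safe unless BOTH components are broken.
prk = hkdf_extract(salt=b"\x00" * 32, ikm=ecc_shared_secret + pq_shared_secret)
session_key = hkdf_expand(prk, info=b"toy hybrid tls example", length=32)
print(session_key.hex())
```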
"Three years ago, Google removed JPEG XL support from Chrome, stating there wasn't enough interest at the time," writes the blog Windows Report. "That position has now changed."In a recent note to developers, a Chrome team representative confirmed that work has restarted to bring JPEG XL to Chromium and said Google "would ship it in Chrome" once long-term maintenance and the usual launch requirements are met. The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents. There has been continuous demand from developers and users who ask for its return. Before Google ships the feature in Chrome, the company wants the integration to be secure and supported over time. A developer has submitted new code that reintroduces JPEG XL to Chromium. This version is marked as feature complete. The developer said it also "includes animation support," which earlier implementations did not offer.Read more of this story at Slashdot.
"Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu:If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity." As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests.Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities. Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally - i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API look like a solid place to start.Read more of this story at Slashdot.
Long-time Slashdot reader jenningsthecat shared this article from IEEE Spectrum: By dropping a nuclear reactor 1.6 kilometers (1 mile) underground, Deep Fission aims to use the weight of a billion tons of rock and water as a natural containment system comparable to concrete domes and cooling towers. With the fission reaction occurring far below the surface, steam can safely circulate in a closed loop to generate power. The California-based startup announced in October that prospective customers had signed non-binding letters of intent for 12.5 gigawatts of power involving data center developers, industrial parks, and other (mostly undisclosed) strategic partners, with initial sites under consideration in Kansas, Texas, and Utah... The company says its modular approach allows multiple 15-megawatt reactors to be clustered on a single site: A block of 10 would total 150 MW, and Deep Fission claims that larger groupings could scale to 1.5 GW. Deep Fission claims that using geological depth as containment could make nuclear energy cheaper, safer, and deployable in months at a fraction of a conventional plant's footprint... The company aims to finalize its reactor design and confirm the pilot site in the coming months. [Company founder Liz] Muller says the plan is to drill the borehole, lower the canister, load the fuel, and bring the reactor to criticality underground in 2026. Sites in Utah, Texas, and Kansas are among the leading candidates for the first commercial-scale projects, which could begin construction in 2027 or 2028, depending on the speed of DOE and NRC approvals. Deep Fission expects to start manufacturing components for the first unit in 2026 and does not anticipate major bottlenecks aside from typical long-lead items. In short, "The same oil and gas drilling techniques that reliably reach kilometer-deep wells can be adapted to host nuclear reactors..." the article points out. Their design would also streamline construction, since "Locating the reactors under a deep water column subjects them to roughly 160 atmospheres of pressure - the same conditions maintained inside a conventional nuclear reactor - which forms a natural seal to keep any radioactive coolant or steam contained at depth, preventing leaks from reaching the surface." Other interesting points from the article: They plan on operating and controlling the reactor remotely from the surface. Company founder Muller says if an earthquake ever disrupted the site, "you seal it off at the bottom of the borehole, plug up the borehole, and you have your waste in safe disposal." For waste management, the company "is eyeing deep geological disposal in the very borehole systems they deploy for their reactors." "The company claims it can cut overall costs by 70 to 80 percent compared with full-scale nuclear plants." "Among its competition are projects like TerraPower's Natrium," notes the tech news site Hackaday, saying TerraPower's fast neutron reactors "are already under construction and offer much more power per reactor, along with Natrium in particular also providing built-in grid-level storage." "One thing is definitely for certain..." they add. "The commercial power sector in the US has stopped being mind-numbingly boring."Read more of this story at Slashdot.
With some animated graphics, CNN "reimagined" what three of America's busiest air and road travel routes would look like with high-speed trains, for "a glimpse into a faster, more connected future." The journey from New York City to Chicago could take just over six hours by high-speed train at an average speed of 160 mph, cutting travel time by more than 13 hours compared with the current Amtrak route... The journey from San Francisco to Los Angeles could be completed in under three hours by high-speed train... The journey from Atlanta to Orlando could be completed in under three hours by high-speed train that reaches 160 mph, cutting travel time by over half compared with driving... While high-speed rail remains a fantasy in the United States, it is already hugely successful across the globe. Passengers take 3 billion trips annually on more than 40,000 miles of modern high-speed railway across the globe, according to the International Union of Railways. China is home to the world's largest high-speed rail network. The 809-mile train journey from Beijing to Shanghai takes just four and a half hours... In Europe, France's Train à Grande Vitesse (TGV) is recognized as a pioneer of high-speed rail technology. Spain soon followed France's success and now hosts Europe's most extensive high-speed rail network... [T]rain travel contributes relatively less pollution of every type, said Jacob Mason of the Institute for Transportation and Development Policy, from burning less gasoline to making less noise than cars and taking up less space than freeways. The reduction in greenhouse gas emissions is staggering: Per kilometer traveled, the average car or a short-haul flight each emit more than 30 times the CO2 equivalent of Eurostar high-speed trains, according to data from the UK government.Read more of this story at Slashdot.
"Security, development, and AI now move as one," says Microsoft's director of cloud/AI securityproduct marketing. Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack:The integration, announced this week in San Francisco at theMicrosoftIgnite 2025 conference and now available in public preview,connects runtime intelligence from production environments directlyinto developer workflows. The goal is to help organizationsprioritize which vulnerabilities actually matter and use AI to fixthem faster. "Throughout my career, I've seen vulnerabilitytrends going up into the right. It didn't matter how good of adetectionengine and how accurate our detection engine was, people justcouldn't fix things fast enough," said MarceloOliveira, VP of product management at GitHub, who has spentnearly a decade in application security. "That basically resultedin decades of accumulation of security debt into enterprise codebases." According to industry data, critical and high-severityvulnerabilities constitute 17.4% of security backlogs, with a meantime to remediation of 116 days, said AndrewFlick, senior director of developer services, languages and toolsat Microsoft, in a blogpost. Meanwhile, applications face attacks as frequently as onceevery three minutes, Oliveira said. The integration represents the first native link between runtimeintelligence and developer workflows, said ElifAlgedik, director of product marketing for cloud and AI securityat Microsoft, in a blogpost... The problem, according to Flick, comes down to threechallenges: security teams drowning in alert fatigue while AI rapidlyintroduces new threatvectors that they have little time to understand; developerslacking clear prioritization while remediation takes too long; andboth teams relying on separate, nonintegrated tools that makecollaboration slow and frustrating... The new integration worksbidirectionally. When Defender for Cloud detects a vulnerability in arunning workload, that runtime context flows into GitHub, showingdevelopers whether the vulnerability is internet-facing, handlingsensitive data or actually exposed in production. This is powered bywhat GitHub calls the Virtual Registry, which creates code-to-runtimemapping, Flick said... In the past, this alert would age in a dashboard while developersworked on unrelated fixes because they didn't know this was thecritical one, he said. Now, a security campaign can be created inGitHub, filtering for runtime risk like internet exposure orsensitive data, notifying the developer to prioritize this issue. GitHub Copilot "now automatically checks dependencies, scansfor first-party code vulnerabilities and catches hardcoded secretsbefore code reaches developers," the article points out - butGitHub's VP of product management says this takes things evenfurther. "We're not only helping you fix existing vulnerabilities,we're also reducing the number of vulnerabilities that come intothe system when the level of throughput of new code being created isincreasing dramatically with all these agentic coding agent platforms."Read more of this story at Slashdot.
"On the slopes of an Oregon volcano, engineers are building the hottest geothermal power plant on Earth," reports the Washington Post:The plant will tap into the infernal energy of Newberry Volcano, "one of the largest and most hazardous active volcanoes in the United States," according to the U.S. Geological Survey. It has already reached temperatures of 629 degrees Fahrenheit, making it one of the hottest geothermal sites in the world, and next year it will start selling electricity to nearby homes and businesses. But the start-up behind the project, Mazama Energy, wants to crank the temperature even higher - north of 750 degrees - and become the first to make electricity from what industry insiders call "superhot rock." Enthusiasts say that could usher in a new era of geothermal power, transforming the always-on clean energy source from a minor player to a major force in the world's electricity systems. "Geothermal has been mostly inconsequential," said Vinod Khosla, a venture capitalist and one of Mazama Energy's biggest financial backers. "To do consequential geothermal that matters at the scale of tens or hundreds of gigawatts for the country, and many times that globally, you really need to solve these high temperatures." Today, geothermal produces less than 1 percent of the world's electricity. But tapping into superhot rock, along with other technological advances, could boost that share to 8 percent by 2050, according to the International Energy Agency (IEA). Geothermal using superhot temperatures could theoretically generate 150 times more electricity than the world uses, according to the IEA. "We believe this is the most direct path to driving down the cost of geothermal and making it possible across the globe," said Terra Rogers, program director for superhot rock geothermal at the Clean Air Task Force, an environmentalist think tank. "The [technological] gaps are within reason. These are engineering iterations, not breakthroughs." The Newberry Volcano project combines two big trends that could make geothermal energy cheaper and more widely available. First, Mazama Energy is bringing its own water to the volcano, using a method called "enhanced geothermal energy"... [O]ver the past few decades, pioneering projects have started to make energy from hot dry rocks by cracking the stone and pumping in water to make steam, borrowing fracking techniques developed by the oil and gas industry... The Newberry project also taps into hotter rock than any previous enhanced geothermal project. But even Newberry's 629 degrees fall short of the superhot threshold of 705 degrees or above. At that temperature, and under a lot of pressure, water becomes "supercritical" and starts acting like something between a liquid and a gas. Supercritical water holds lots of heat like a liquid, but it flows with the ease of a gas - combining the best of both worlds for generating electricity... [Sriram Vasantharajan, Mazama's CEO] said Mazama will dig new wells to reach temperatures above 750 degrees next year. Alongside an active volcano, the company expects to hit that temperature less than three miles beneath the surface. But elsewhere, geothermal developers might have to dig as deep as 12 miles. While Mazama plans to generate 15 megawatts of electricity next year, it hopes to eventually increase that to 200 megawatts. (And the company's CEO said it could theoretically generate five gigawatts of power.) 
But more importantly, successful projects "motivate other players to get into the market," according to a senior geothermal research analyst at energy consultancy Wood Mackenzie, who predicted to the Washington Post "a ripple effect" where "we'll start seeing more companies get the financial support to kick off their own pilots."Read more of this story at Slashdot.
"The internet did transform work - but not the way 1998 thought..." argues the Wall Street Journal. "The internet slipped inside almost every job and rewired how work got done." So while the number of single-task jobs like travel agent dropped, most jobs "are bundles of judgment, coordination and hands-on work," and instead the internet brought "the quiet transformation of nearly every job in the economy... Today, just 10% of workers make minimal use of the internet on the job - roles like butcher and carpet installer."[T]he bigger story has been additive. In 1998, few could conceive of social media - let alone 65,000 social-media managers - and 200,000 information-security analysts would have sounded absurd when data still lived on floppy disks... Marketing shifted from campaign bursts to always-on funnels and A/B testing. Clinics embedded e-prescribing and patient portals, reshaping front-office and clinical handoffs. The steps, owners and metrics shifted. Only then did the backbone scale: We went from server closets wedged next to the mop sink to data centers and cloud regions, from lone system administrators to fulfillment networks, cybersecurity and compliance. That is where many unexpected jobs appeared. Networked machines and web-enabled software quietly transformed back offices as much as our on-screen lives. Similarly, as e-commerce took off, internet-enabled logistics rewired planning roles - logisticians, transportation and distribution managers - and unlocked a surge in last-mile work. The build-out didn't just hire coders; it hired coordinators, pickers, packers and drivers. It spawned hundreds of thousands of warehouse and delivery jobs - the largest pockets of internet-driven job growth, and yet few had them on their 1998 bingo card... Today, the share of workers in professional and managerial occupations has more than doubled since the dawn of the digital era. So what does that tell us about AI? Our mental model often defaults to an industrial image - John Henry versus the steam drill - where jobs are one dominant task, and automation maps one-to-one: Automate the task, eliminate the job. The internet revealed a different reality: Modern roles are bundles. Technologies typically hit routine tasks first, then workflows, and only later reshape jobs, with second-order hiring around the backbone. That complexity is what made disruption slower and more subtle than anyone predicted. AI fits that pattern more than it breaks it... [LLMs] can draft briefs, summarize medical notes and answer queries. Those are tasks - important ones - but still parts of larger roles. They don't manage risk, hold accountability, reassure anxious clients or integrate messy context across teams. Expect a rebalanced division of labor: The technical layer gets faster and cheaper; the human layer shifts toward supervision, coordination, complex judgment, relationship work and exception handling. What to expect from AI, then, is messy, uneven reshuffling in stages. Some roles will contract sharply - and those contractions will affect real people. But many occupations will be rewired in quieter ways. Productivity gains will unlock new demand and create work that didn't exist, alongside a build-out around data, safety, compliance and infrastructure. AI is unprecedented; so was the internet. The real risk is timing: overestimating job losses, underestimating the long, quiet rewiring already under way, and overlooking the jobs created in the backbone. That was the internet's lesson. 
It's likely to be AI's as well.Read more of this story at Slashdot.
"Copilot Actions on Windows 11" is currently available in Insider builds (version 26220.7262) as part of Copilot Labs, according to a recent report, "and is off by default, requiring admin access to set it up." But maybe it's off for a good reason...besides the fact that it can access any apps installed on your system:In a support document, Microsoft admits that features like Copilot Actions introduce " novel security risks ." They warn about cross-prompt injection (XPIA), where malicious content in documents or UI elements can override the AI's instructions. The result? " Unintended actions like data exfiltration or malware installation ." Yeah, you read that right. Microsoft is shipping a feature that could be tricked into installing malware on your system. Microsoft's own warning hits hard: "We recommend that you only enable this feature if you understand the security implications." When you try to enable these experimental features, Windows shows you a warning dialog that you have to acknowledge. ["This feature is still being tested and may impact the performance or security of your device."] Even with these warnings, the level of access Copilot Actions demands is concerning. When you enable the feature, it gets read and write access to your Documents, Downloads, Desktop, Pictures, Videos, and Music folders... Microsoft says they are implementing safeguards. All actions are logged, users must approve data access requests, the feature operates in isolated workspaces, and the system uses audit logs to track activity. But you are still giving an AI system that can "hallucinate and produce unexpected outputs" (Microsoft's words, not mine) full access to your personal files. To address this, Ars Technica notes, Microsoft added this helpful warning to its support document this week. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs." But Microsoft didn't describe "what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined..."Read more of this story at Slashdot.
A promotional video for Amazon's Kiro software development system took a unique approach, writes GeekWire. "Instead of product diagrams or keynote slides, a crew from Seattle's Packrat creative studio used action figures on a miniature set to create a stop-motion sequence..." "Can the software development hero conquer the 'AI Slop Monster' to uncover the gleaming, fully functional robot buried beneath the coding chaos?" Kiro (pronounced KEE-ro) is Amazon's effort to rethink how developers use AI. It's an integrated development environment that attempts to tame the wild world of vibe coding... But rather than simply generating code from prompts [in "vibe mode"], Kiro breaks down requests into formal specifications, design documents, and task lists [in "spec mode"]. This spec-driven development approach aims to solve a fundamental problem with vibe coding: AI can quickly generate prototypes, but without structure or documentation, that code becomes unmaintainable... The market for AI-powered development tools is booming. Gartner expects AI code assistants to become ubiquitous, forecasting that 90% of enterprise software engineers will use them by 2028, up from less than 14% in early 2024... Amazon launched Kiro in preview in July, to a strong response. Positive early reviews were tempered by frustration from users unable to gain access. Capacity constraints have since been resolved, and Amazon says more than 250,000 developers used Kiro in the first three months... Now, the company is taking Kiro out of preview into general availability, rolling out new features and opening the tool more broadly to development teams and companies... During the preview period, Kiro handled more than 300 million requests and processed trillions of tokens as developers explored its capabilities, according to stats provided by the company. Rackspace used Kiro to complete what they estimated as 52 weeks of software modernization in three weeks, according to Amazon executives. SmugMug and Flickr are among other companies espousing the virtues of Kiro's spec-driven development approach. Early users are posting in glowing terms about the efficiencies they're seeing from adopting the tool... startups in most countries can apply for up to 100 free Pro+ seats for a year's worth of Kiro credits. Kiro offers property-based testing "to verify that generated code actually does what developers specified," according to the article - plus a checkpointing system that "lets developers roll back changes or retrace an agent's steps when an idea goes sideways..." "And yes, they've been using Kiro to build Kiro, which has allowed them to move much faster."Read more of this story at Slashdot.
A week ago Bitcoin was at $93,714. Saturday it dropped to $85,300. Late Thursday, market researcher Ed Yardeni blamed some of Thursday's stock market sell-off on "the ongoing plunge in bitcoin's price," reports Fortune: "There has been a strong correlation between it and the price of TQQQ, an ETF that seeks to achieve daily investment results that correspond to three times (3x) the daily performance of the Nasdaq-100 Index," [Yardeni wrote in a note]. Yardeni blamed bitcoin's slide on the GENIUS Act, which was enacted on July 18, saying that the regulatory framework it established for stablecoins eliminated bitcoin's transactional role in the monetary system. "It's possible that the rout in bitcoin is forcing some investors to sell stocks that they own," he added... Traders who used leverage to make crypto bets would need to liquidate positions in the event of margin calls. Steve Sosnick, chief strategist at Interactive Brokers, also said bitcoin could swing the entire stock market, pointing out that it's become a proxy for speculation. "As a long-time systematic trader, it tells me that algorithms are acting upon the relationship between stocks and bitcoin," he wrote in a note on Thursday.Read more of this story at Slashdot.
"PHP 8.5 landed on Thursday with a long-awaited pipe operator and a new standards-compliant URI parser," reports the Register, "marking one of the scripting language's more substantial updates... "The pipe operator allows function calls to be chained together, which avoids theextraneous variables and nested statements that might otherwise beinvolved. Pipes tend to make code more readable than other ways toimplement serial operations. Anyone familiar with the Unix/Linuxcommand line or programming languages like R,F#,Clojure, orElixirmay have used the pipe operator. In JavaScript, aka ECMAScript, apipe operator has been proposed, though there are alternativeslike method chaining. Another significant addition is the URIextension, which allows developers to parse and modify URIs andURLs based on both the RFC 3986 and the WHATWG URL standards. Parsingwith URIs and URLs a" reading them and breaking them down into theirdifferent parts a" is a rather common task for web-orientedapplications. Yet prior versions of PHP didn't include astandards-compliant parser in the standard library. As notedby software developer Tim DA1/4sterhus, the parse_url()function that dates back to PHP 4 doesn't follow any standard andcomes with a warning that it should not be used with untrusted ormalformed URLs. Other noteworthy additions to the language include: CloneWith, for updating properties more efficiently; the #[\NoDiscard] attribute, for warning when a return value goes unused; theability to use static closures and first-class callables in constant expressions; and persistent cURL handles that can be shared across multiple PHP requests.Read more of this story at Slashdot.
In a 2023 pitch to investors, a "well-financed, highly credentialed" startup named Stardust aimed for a "gradual temperature reduction demonstration" in 2027, according to a massive new 9,600-word article from Politico. ("Annually dispersing ~1 million tons of sun-reflecting particles," says one slide. "Equivalent to ~1% extra cloud coverage.") "Another page told potential investors Stardust had already run low-altitude experiments using 'test particles'," the article notes: [P]ublic records and interviews with more than three dozen scientists, investors, legal experts and others familiar with the company reveal an organization advancing rapidly to the brink of being able to press "go" on its planet-cooling plans. Meanwhile, Stardust is seeking U.S. government contracts and quietly building an influence machine in Washington to lobby lawmakers and officials in the Trump administration on the need for a regulatory framework that it says is necessary to gain public approval for full-scale deployment.... The presentation also included revenue projections and a series of opportunities for venture capitalists to recoup their investments. Stardust planned to sign "government contracts," said a slide with the company's logo next to an American flag, and consider a "potential acquisition" by 2028. By 2030, the deck foresaw a "large-scale demonstration" of Stardust's system. At that point, the company claimed it would already be bringing in $200 million per year from its government contracts and eyeing an initial public offering, if it hadn't been sold already. The article notes that for "a widening circle of researchers and government officials, Stardust's perceived failures to be transparent about its work and technology have triggered a larger conversation about what kind of international governance framework will be needed to regulate a new generation of climate technologies." (Since currently Stardust and its backers "have no legal obligations to adhere to strenuous safety principles or to submit themselves to the public view.") In October Politico spoke to Stardust CEO Yanai Yedvab, a former nuclear physicist who was once deputy chief scientist at the Israeli Atomic Energy Commission. Stardust "was ready to announce the $60 million it had raised from 13 new investors," the article points out, "far larger than any previous investment in solar geoengineering." [Yedvab] was delighted, he said, not by the money, but what it meant for the project. "We are, like, few years away from having the technology ready to a level that decisions can be taken" - meaning that deployment was still on track to potentially begin on the timeline laid out in the 2023 pitch deck. The money raised was enough to start "outdoor contained experiments" as soon as April, Yedvab said. These would test how their particles performed inside a plane flying at stratospheric heights, some 11 miles above the Earth's surface... The key thing, he insisted, was the particle was "safe." It would not damage the ozone layer and, when the particles fall back to Earth, they could be absorbed back into the biosphere, he said. Though it's impossible to know this is true until the company releases its formula. Yedvab said this round of testing would make Stardust's technology ready to begin a staged process of full-scale, global deployment before the decade is over - as long as the company can secure a government client. 
To start, they would only try to stabilize global temperatures - in other words fly enough particles into the sky to counteract the steady rise in greenhouse gas levels - which would initially take a fleet of 100 planes. This raises the question: should the world attempt solar geoengineering? That the global temperature would drop is not in question. Britain's Royal Society... said in a report issued in early November that there was little doubt it would be effective. They did not endorse its use, but said that, given the growing interest in this field, there was good reason to be better informed about the side effects... [T]hat doesn't mean it can't have broad benefits when weighed against deleterious climate change, according to Ben Kravitz, a professor of earth and atmospheric sciences at Indiana University who has closely studied the potential effects of solar geoengineering. "There would be some winners and some losers. But in general, some amount of ... stratospheric aerosol injection would likely benefit a whole lot of people, probably most people," he said. Other scientists are far more cautious. The Royal Society report listed a range of potential negative side effects that climate models had displayed, including drought in sub-Saharan Africa. In accompanying documents, it also warned of more intense hurricanes in the North Atlantic and winter droughts in the Mediterranean. But the picture remains partial, meaning there is no way yet to have an informed debate over how useful or not solar geoengineering could be... And then there's the problem of trying to stop. Because an abrupt end to geoengineering, with all the carbon still in the atmosphere, would cause the temperature to soar suddenly upward with unknown, but likely disastrous, effects... Once the technology is deployed, the entire world would be dependent on it for however long it takes to reduce the trillion or more tons of excess carbon dioxide in the atmosphere to a safe level... Stardust claims to have solved many technical and safety challenges, especially related to the environmental impacts of the particle, which they say would not harm nature or people. But researchers say the company's current lack of transparency makes it impossible to trust. Thanks to long-time Slashdot reader fjo3 for sharing the article.Read more of this story at Slashdot.
Meta "is testing a new product that would give Facebook users a personalized daily briefing powered by the company's generative AI technology" reports the Washington Post. They cite records they've reviwed showing that Meta "would analyze Facebook content and external sources to push custom updates to its users."The company plans to test the product with a small group of Facebook users in select cities such as New York and San Francisco, according to a person familiar with the project who spoke on the condition of anonymity to discuss private company matters... Meta's foray into pushing updates for consumers follows years of controversy over its relationship with publishers. The tech company has waffled between prominently featuring content from mainstream news sources on Facebook to pulling news links altogether as regulators pushed the tech giant to pay publishers for content on its platforms. More recently, publishers have sued Meta, alleging it infringed on their copyrighted works to train its AI models.Read more of this story at Slashdot.
An anonymous reader shared this report from CNN: The universe's expansion might not be accelerating but slowing down, a new study suggests. If confirmed, the finding would upend decades of established astronomical assumptions and rewrite our understanding of dark energy, the elusive force that counters the inward pull of gravity in our universe... Last year, a consortium of hundreds of researchers using data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona developed the largest ever 3D map of the universe. The observations hinted at the fact that dark energy may be weakening over time, indicating that the universe's rate of expansion could eventually slow. Now, a study published November 6 in the journal Monthly Notices of the Royal Astronomical Society provides further evidence that dark energy might not be pushing on the universe with the same strength it used to. The DESI project's findings last year represented "a major, major paradigm change ... and our result, in some sense, agrees well with that," said Young-Wook Lee, a professor of astrophysics at Yonsei University in South Korea and lead researcher for the new study.... To reach their conclusions, the researchers analyzed a sample of 300 galaxies containing Type Ia supernovas and posited that the dimming of distant exploding stars was not only due to their moving farther away from Earth, but also due to the progenitor star's age... [Study coauthor Junhyuk Son, a doctoral candidate of astronomy at Yonsei University, said] "we found that their luminosity actually depends on the age of the stars that produce them - younger progenitors yield slightly dimmer supernovae, while older ones are brighter." Son said the team has a high statistical confidence - 99.99% - about this age-brightness relation, allowing them to use Type Ia supernovas more accurately than before to assess the universe's expansion... Eventually, if the expansion continues to slow down, the universe could begin to contract, ending in what astronomers imagine may be the opposite of the big bang - the big crunch. "That is certainly a possibility," Lee said. "Even two years ago, the Big Crunch was out of the question. But we need more work to see whether it could actually happen." The new research proposes a radical revision of accepted knowledge, so, understandably, it is being met with skepticism. "This study rests on a flawed premise," Adam Riess, a professor of physics and astronomy at the Johns Hopkins University and one of the recipients of the 2011 Nobel Prize in physics, said in an email. "It suggests supernovae have aged with the Universe, yet observations show the opposite - today's supernovae occur where young stars form. The same idea was proposed years ago and refuted then, and there appears to be nothing new in this version." Lee, however, said Riess' claim is incorrect. "Even in the present-day Universe, Type Ia supernovae are found just as frequently in old, quiescent elliptical galaxies as in young, star-forming ones - which clearly shows that this comment is mistaken. The so-called paper that 'refuted' our earlier result relied on deeply flawed data with enormous uncertainties," he said, adding that the age-brightness correlation has been independently confirmed by two separate teams in the United States and China... 
"Extraordinary claims require extraordinary evidence," Dragan Huterer, a professor of physics at the University of Michigan in Ann Arbor, said in an email, noting that he does not feel the new research "rises to the threshold to overturn the currently favored model...." The new Vera C. Rubin Observatory, which started operating this year, is set to help settle the debate with the early 2026 launch of the Legacy Survey of Space and Time, an ultrawide and ultra-high-definition time-lapse record of the universe made by scanning the entire sky every few nights over 10 years to capture a compilation of asteroids and comets, exploding stars, and distant galaxies as they change.Read more of this story at Slashdot.
An anonymous reader shared this report from Sky News: A new wind record has been set for Britain, with enough electricity generated from turbines to power 22 million homes, the system operator has said. The mark of 22,711 megawatts (MW) was set at 7.30pm on 11 November... enough to keep around three-quarters of British homes powered, the National Energy System Operator (Neso) said. The country had experienced windy conditions, particularly in the north of England and Scotland... Neso has predicted that Britain could hit another milestone in the months ahead by running the electricity grid for a period entirely with zero carbon power, renewables and nuclear... Neso said wind power is now the largest source of electricity generation for the UK, and the government wants to generate almost all of the UK's electricity from low-carbon sources by 2030. "Wind accounted for 55.7 per cent of Britain's electricity mix at the time..." reports The Times: Gas provided only 12.5 per cent of the mix, with 11.3 per cent coming from imports over subsea power cables, 8 per cent from nuclear reactors, 8 per cent from biomass plants, 1.4 per cent from hydroelectric plants and 1.1 per cent from storage. Britain has about 32 gigawatts of wind farms installed, approximately half of that onshore and half offshore, according to the Wind Energy Database from the wind industry body Renewable UK. That includes five of the world's biggest offshore wind farms. The government is seeking to double onshore wind and quadruple offshore wind power by 2030 as part of its plan for clean energy.... Jane Cooper, deputy chief executive of Renewable UK, said: "On a cold, dark November evening, wind was generating enough electricity to power 80 per cent of British homes when we needed it most."Read more of this story at Slashdot.
For nearly three years OpenAI has touted ChatGPT as a "revolutionary" (and work-transforming) productivity tool, reports the Washington Post. But after analyzing 47,000 ChatGPT conversations, the Post found that users "are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks." The Post analyzed a collection of thousands of publicly shared ChatGPT conversations from June 2024 to August 2025. While ChatGPT conversations are private by default, the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive and downloaded by The Post. It is possible that some people didn't know their conversations would become publicly preserved online. This unique data gives us a glimpse into an otherwise black box... Overall, about 10 percent of the chats appeared to show people talking about their emotions, role-playing, or seeking social interactions with the chatbot. Some users shared highly private and sensitive information with the chatbot, such as information about their family in the course of seeking legal advice. People also sent ChatGPT hundreds of unique email addresses and dozens of phone numbers in the conversations... Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that it appears ChatGPT "is trained to further or deepen the relationship." In some of the conversations analyzed, the chatbot matched users' viewpoints and created a personalized echo chamber, sometimes endorsing falsehoods and conspiracy theories. Four of ChatGPT's answers about health problems got a failing score from a chair of medicine at the University of California, San Francisco, the Post points out. But four other answers earned a perfect score.Read more of this story at Slashdot.
In October Zorin OS claimed it had 100,000 downloads in a little over two days following Microsoft's end of support for Windows 10. And one month later, Zorin OS developers now claim that 780,000 people downloaded it from a Windows computer in the space of a month, according to the tech news site XDA Developers. In a post on the Zorin blog, the developers of the operating system Zorin OS 18 announced that they've managed to accrue one million downloads of the operating system in a single month [since its launch on October 14]. While this is plenty impressive by itself, the developers go on to reveal that, out of that million, 78% of the downloads came from a Windows machine. That means that at least 780,000 people on Windows gave Zorin OS 18 a download... [I]t's easy to see why: the developers put a heavy emphasis on making their system the perfect home for ex-Windows users.Read more of this story at Slashdot.
ScienceDaily reports: Electrons can freeze into strange geometric crystals and then melt back into liquid-like motion under the right quantum conditions. Researchers identified how to tune these transitions and even discovered a bizarre "pinball" state where some electrons stay locked in place while others dart around freely. Their simulations help explain how these phases form and how they might be harnessed for advanced quantum technologies... When electrons settle into these rigid arrangements, the material undergoes a shift in its state of matter and stops conducting electricity. Instead of acting like a metal, it behaves as an insulator. This unusual behavior provides scientists with valuable insight into how electrons interact and has opened the door to advances in quantum computing, high-performance superconductors used in energy and medical imaging, innovative lighting systems, and extremely precise atomic clocks... [Florida State University assistant professor Cyprian Lewandowski said] "Here, it turns out there are other quantum knobs we can play with to manipulate states of matter, which can lead to impressive advances in experimental research."Read more of this story at Slashdot.
The Washington Post reports: Scientists in Switzerland have created a robot the size of a grain of sand that is controlled by magnets and can deliver drugs to a precise location in the human body, a breakthrough aimed at reducing the severe side effects that stop many medicines from advancing in clinical trials... "I think surgeons are going to look at this," [said Bradley J. Nelson, an author of the paper in Science describing the discovery and a professor of robotics and intelligent systems at ETH Zurich]. "I'm sure they're going to have a lot of ideas on how to use" the microrobot. The capsule, which is steered by magnets, might also be useful in treating aneurysms, very aggressive brain cancers, and abnormal connections between arteries and veins known as arteriovenous malformations, Nelson said. The capsules have been tested successfully in pigs, which have similar vasculature to humans, and in silicone models of the blood vessels in humans and animals... Nelson said drug-ferrying microrobots of this kind may be three to five years from being tested in clinical trials. The problem faced by many drugs under development is that they spread throughout the body instead of going only to the area in need... A major cause of side effects in patients is medications traveling to parts of the body that don't need them. The capsules developed in Switzerland, however, can be maneuvered into precise locations by a surgeon using a tool not that different from a PlayStation controller. The navigation system involves six electromagnetic coils positioned around the patient, each about 8 to 10 inches in diameter... The capsules are made of materials that have been found safe for people in other medical tools... When the capsule reaches its destination in the body, "we can trigger the capsule to dissolve," Nelson said.Read more of this story at Slashdot.
A California judge has shut down a decade-long surveillance program in which Sacramento's utility provider shared granular smart-meter data on 650,000 residents with police to hunt for cannabis grows. The EFF reports: The Sacramento County Superior Court ruled that the surveillance program run by the Sacramento Municipal Utility District (SMUD) and police violated a state privacy statute, which bars the disclosure of residents' electrical usage data with narrow exceptions. For more than a decade, SMUD coordinated with the Sacramento Police Department and other law enforcement agencies to sift through the granular smart meter data of residents without suspicion to find evidence of cannabis growing. EFF and its co-counsel represent three petitioners in the case: the Asian American Liberation Network, Khurshid Khoja, and Alfonso Nguyen. They argued that the program created a host of privacy harms -- including criminalizing innocent people, creating menacing encounters with law enforcement, and disproportionately harming the Asian community. The court ruled that the challenged surveillance program was not part of any traditional law enforcement investigation. Investigations happen when police try to solve particular crimes and identify particular suspects. The dragnet that turned all 650,000 SMUD customers into suspects was not an investigation. "[T]he process of making regular requests for all customer information in numerous city zip codes, in the hopes of identifying evidence that could possibly be evidence of illegal activity, without any report or other evidence to suggest that such a crime may have occurred, is not an ongoing investigation," the court ruled, finding that SMUD violated its "obligations of confidentiality" under a data privacy statute. [...] In creating and running the dragnet surveillance program, according to the court, SMUD and police "developed a relationship beyond that of utility provider and law enforcement." Multiple times a year, the police asked SMUD to search its entire database of 650,000 customers to identify people who used a large amount of monthly electricity and to analyze granular 1-hour electrical usage data to identify residents with certain electricity "consumption patterns." SMUD passed on more than 33,000 tips about supposedly "high" usage households to police. [...] Going forward, public utilities throughout California should understand that they cannot disclose customers' electricity data to law enforcement without any "evidence to support a suspicion" that a particular crime occurred.Read more of this story at Slashdot.
Longtime Slashdot reader fahrbot-bot shares a report from 404 Media: The Ukrainian Army is knocking a once-hyped Russian superweapon out of the sky by jamming it with a song and tricking it into thinking it's in Lima, Peru. The Kremlin once called its Kh-47M2 Kinzhal ballistic missiles "invincible." Joe Biden said the missile was "almost impossible to stop." Now Ukrainian electronic warfare experts say they can counter the Kinzhal with some music and a re-direction order. [...] Kinzhals and other guided munitions navigate by communicating with Russian satellites that are part of the GLONASS system, a GPS-style navigation network. Night Watch uses a jamming system called Lima EW to generate a disruption field that prevents anything in the area from communicating with a satellite. Many traditional jamming systems work by blasting receivers on munitions and aircraft with radio noise. Lima does that, but also sends along a digital signal and spoofs navigation signals. It "hacks" the receiver it's communicating with to throw it off course. Night Watch shared pictures of the downed Kinzhals with 404 Media that showed a missile with a controlled reception pattern antenna (CRPA), an active antenna that's meant to resist jamming and spoofing. "We discovered that this missile had pretty old type of technology," Night Watch said. "They had the same type of receivers as old Soviet missiles used to have. So there is nothing special, there is nothing new in those types of missiles." Night Watch told 404 Media that it used this Lima to take down 19 Kinzhals in the past two weeks. First, it replaces the missile's satellite navigation signals with the Ukrainian song "Our Father Is Bandera." Any digital noise or random signal would work to jam the navigation system, but Night Watch wanted to use the song because they think it's funny. "We just send a song... we just make it into binary code, you know, like 010101, and just send it to the Russian navigation system," Night Watch said. "It's just kind of a joke. [Bandera] is a Ukrainian nationalist and Russia tries to use this person in their propaganda to say all Ukrainians are Nazis. They always try to scare the Russian people that Ukrainians are, culturally, all the same as Bandera." Once the song hits, Night Watch uses Lima to spoof a navigation signal to the missiles and make them think they're in Lima, Peru. Once the missile's confused about its location, it attempts to change direction. These missiles are fast -- launched from a MiG-31 they can hit speeds of up to Mach 5.7 or more than 4,000 miles per hour -- and an object moving that fast doesn't fare well with sudden changes of direction.Read more of this story at Slashdot.
A magician who implanted an RFID chip in his hand lost access to it after forgetting the password, leaving him effectively locked out of the tech embedded in his own body. The Register reports: "It turns out," [said magician Zi Teng Wang], "that pressing someone else's phone to my hand repeatedly, trying to figure out where their phone's RFID reader is, really doesn't come off super mysterious and magical and amazing." Then there are the people who don't even have their phone's RFID reader enabled. Using his own phone would, in Zi's words, lack a certain "oomph." Oh well, how about making the chip spit out a Bitcoin address? "That literally never came up either." In the end, Zi rewrote the chip to link to a meme, "and if you ever meet me in person you can scan my chip and see the meme." It was all suitably amusing until the Imgur link Zi was using went down. Not everything on the World Wide Web is forever, and there is no guarantee that a given link will work indefinitely. Indeed, access to Imgur from the United Kingdom was abruptly cut off on September 30 in response to the country's age verification rules. Still, the link not working isn't the end of the world. Zi could just reprogram the chip again, right? Wrong. "When I went to rewrite the chip, I was horrified to realize I forgot the password that I had locked it with." The link eventually started working again, but if and when it stops, Zi's party piece will be a little less entertaining. He said: "Techie friends I've consulted with have determined that it's too dumb and simple to hack, the only way to crack it is to strap on an RFID reader for days to weeks, brute forcing every possible combination." Or perhaps some surgery to remove the offending hardware.Read more of this story at Slashdot.
An anonymous reader quotes a report from Scientific American: Amid a deepening ecological crisis and acute water shortage, Tehran can no longer remain the capital of Iran, the country's president has said. The situation in Tehran is the result of "a perfect storm of climate change and corruption," says Michael Rubin, a political analyst at the American Enterprise Institute. "We no longer have a choice," said Iranian president Masoud Pezeshkian during a speech on Thursday. Instead Iranian officials are considering moving the capital to the country's southern coast. But experts say the proposal does not change the reality for the nearly 10 million people who live in Tehran and are now suffering the consequences of a decades-long decline in water supply. Iran's capital has moved many times over the centuries, notes the report. "But this marks the first time the Iranian government has moved the capital because of an ecological catastrophe." Yet, Rubin says, "it would be a mistake to look at this only through the lens of climate change" and not factor in the water, land, and wastewater mismanagement and corruption that have made the crisis worse. Linda Shi, a social scientist and urban planner at Cornell University, says: "Climate change is not the thing that is causing it, but it is a convenient factor to blame in order to avoid taking responsibility" for poor political decisions.Read more of this story at Slashdot.
The International Association of Cryptologic Research (IACR) was forced to cancel its leadership election after a trustee lost their portion of the Helios voting system's decryption key, making it impossible to reveal or verify the final results. Ars Technica reports: The IACR said Friday that the votes were submitted and tallied using Helios, an open source voting system that uses peer-reviewed cryptography to cast and count votes in a verifiable, confidential, and privacy-preserving way. Helios encrypts each vote in a way that assures each ballot is secret. Other cryptography used by Helios allows each voter to confirm their ballot was counted fairly. "Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share," the IACR said. "As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election." The IACR will switch to a two-of-three private key system to prevent this sort of thing from happening again. Moti Yung, the trustee responsible for the incident, has resigned and is being replaced by Michael Abdalla.Read more of this story at Slashdot.
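For readers unfamiliar with how a two-of-three arrangement avoids this failure mode: the election secret key is split into shares such that any two of them are enough to decrypt, so the loss of a single trustee's share no longer blocks the tally. Below is a minimal Python sketch of that idea using Shamir secret sharing over a prime field; it is not Helios's actual implementation, and the prime, threshold parameters, and secret value are illustrative only.

import random

P = 2**127 - 1  # a Mersenne prime, plenty for a toy demo (not Helios's parameters)

def make_shares(secret, k=2, n=3):
    # Split `secret` into n shares; any k of them reconstruct it.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret from any k shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789)       # three trustees, threshold of two
surviving = [shares[0], shares[2]]    # trustee #2's share is irretrievably lost
assert reconstruct(surviving) == 123456789

With an all-three-required setup, the equivalent of the final check above becomes impossible once any one share disappears, which is exactly the situation the IACR found itself in.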
Google has begun testing sponsored ads inside its Gemini-powered AI Mode, placing labeled "sponsored" links at the bottom of AI-generated responses. Engadget reports: [A] Google spokesperson says the result shown is akin to similar tests it's been running this year. "People seeing ads in AI Mode in the wild is simply part of Google's ongoing tests, which we've been running for several months," the spokesperson said. The push to start offering ads in AI Mode was announced in May. The company also told 9to5Google that there are no current plans to fully update AI Mode to incorporate ads. For now, the software seems to be prioritizing organic links over sponsored links, but we all know how insidious ads can be once the floodgates open...Read more of this story at Slashdot.
The SEC has officially dismissed its high-profile case against SolarWinds and its CISO that was tied to a Russia-linked cyberattack involving the software company. Reuters reports: The landmark case, which SEC brought in late 2023, rattled the cybersecurity community and later faced scrutiny from a judge who dismissed many of the charges. The SEC had said SolarWinds and its chief information security officer had violated U.S. securities laws by concealing vulnerabilities in connection with the high-profile 2020 Sunburst cyber attack. The SEC, SolarWinds and CISO Timothy Brown filed a motion on Thursday to dismiss the case with prejudice, according to a joint stipulation posted on the agency's website. A SolarWinds spokesperson said the firm is "clearly delighted" with the dismissal. "We hope this resolution eases the concerns many CISOs have voiced about this case and the potential chilling effect it threatened to impose on their work," the spokesperson said.Read more of this story at Slashdot.
An anonymous reader quotes a report from Bloomberg: Malaysia's palm oil giants, long blamed for razing rainforests, fueling toxic haze and driving orangutans to the brink of extinction, are recasting themselves as unlikely champions in a different, potentially greener race: the quest to lure the world's AI data centers to the Southeast Asian country (source paywalled; alternative source). Palm oil companies are earmarking some of the vast tracts of land they own for industrial parks studded with data centers and solar panels, the latter meant to feed the insatiable energy appetites of the former. The logic is simple: data centers are power and land hogs. By 2035, they could demand at least five gigawatts of electricity in Malaysia -- almost 20% of the country's current generation capacity and roughly enough to power a major city like Miami. Malaysia also needs space to house server farms, and palm oil giants control more land than any other private entity in the country. The country has been at the heart of a regional data center boom. Last year, it was the fastest-growing data center market in the Asia-Pacific region and roughly 40% of all planned capacity in Southeast Asia is now slated for Malaysia, according to industry consultant DC Byte. Over the past four years, $34 billion in data center investments has poured into the country -- Alphabet's Google committed $2 billion, Microsoft announced a $2.2 billion investment and Amazon is spending $6.2 billion, to name a few. The government aims for 81 data centers by 2035. The rush is partly a spillover from Singapore, where a years-long moratorium on new centers forced operators to look north. Johor, just across the causeway, is now a hive of construction cranes and server farms -- including for firms such as Singapore Telecommunications, Nvidia and ByteDance. But delivering on government promises of renewable power is proving harder. The strains are already being felt in Malaysia's data center capital. Sedenak Tech Park, one of Johor's flagship sites, is telling potential tenants they'll need to wait until the fourth quarter of 2026 for promised water and power hookups under its second-phase expansion, according to DC Byte. The vacancy rate in Johor's live facilities is just 1.1%, according to real estate consultant Knight Frank. Despite its rapid growth, the market is nowhere near saturation, with six gigawatts of capacity expected to be built out over time, said Knight Frank's head of data centers for Asia Pacific, Fred Fitzalan Howard. That potential bottleneck has incentivized palm oil majors such as SD Guthrie Bhd. to pitch themselves as both landowners and green-power suppliers. The $8.9 billion company is the world's largest palm oil planter by acreage, with more than 340,000 hectares in Malaysia. "SD Guthrie is pivoting to solar farms and industrial parks, betting that tech giants hungry for server space will prefer sites with ready access to renewable energy," reports Bloomberg. "The company has reserved 10,000 hectares for such projects over the next decade, starting with clearing old rubber estates and low-yielding palm plots in areas near data center and semiconductor investment hubs." "The company's calculation is based on this: one megawatt of solar requires about 1.5 hectares. Helmy said SD Guthrie wants one gigawatt in operation within three years, enough to power up to 10 hyperscale data centers used for AI computing. The new business is expected to make up about a third of its profits by the end of the decade."Read more of this story at Slashdot.
Phoronix's Michael Larabel reports: A 21-year-old bug report requesting support for the XDG Base Directory specification is finally being addressed by Firefox. The Firefox 147 release should respect the XDG specification for where files should be positioned within Linux users' home directories. The XDG Base Directory specification lays out where application data files, configuration files, cached assets, and other files and file formats should be positioned within a user's home directory, and defines the XDG environment variables for accessing those locations. To date, Firefox has simply placed all of its files under ~/.mozilla rather than the likes of ~/.config and ~/.local/share.Read more of this story at Slashdot.
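For context, the specification's lookup rule is simple: each class of file has an environment variable, and when that variable is unset or empty the application falls back to a documented default under the home directory. The following is a minimal Python sketch of that rule; "myapp" is a placeholder application name, not anything Firefox actually uses.

import os
from pathlib import Path

def xdg_path(var, default_subdir):
    # Use the environment variable if set and non-empty, else the spec's default.
    value = os.environ.get(var, "").strip()
    return Path(value) if value else Path.home() / default_subdir

config_dir = xdg_path("XDG_CONFIG_HOME", ".config") / "myapp"      # settings
data_dir   = xdg_path("XDG_DATA_HOME", ".local/share") / "myapp"   # profiles, extensions
cache_dir  = xdg_path("XDG_CACHE_HOME", ".cache") / "myapp"        # disposable caches

print(config_dir, data_dir, cache_dir)

Splitting files this way is what lets users relocate or exclude caches independently of configuration, something a single ~/.mozilla directory cannot offer.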
Google's AI infrastructure chief told employees the company must double its AI serving capacity every six months in order to meet demand. Earlier this month, Amin Vahdat, a vice president at Google Cloud, gave a presentation titled "AI Infrastructure." It included a slide on "AI compute demand" that said: "Now we must double every 6 months.... the next 1000x in 4-5 years." CNBC reports: The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a "significant increase" in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year. Google's "job is of course to build this infrastructure but it's not to outspend the competition, necessarily," Vahdat said. "We're going to spend a lot," he said, adding that the real goal is to provide infrastructure that is far "more reliable, more performant and more scalable than what's available anywhere else." In addition to infrastructure build-outs, Vahdat said Google bolsters capacity with more efficient models and through its custom silicon. Last week, Google announced the public launch of its seventh generation Tensor Processing Unit called Ironwood, which the company says is nearly 30 times more power efficient than its first Cloud TPU from 2018. Vahdat said the company has a big advantage with DeepMind, which has research on what AI models can look like in future years. Google needs to "be able to deliver 1,000 times more capability, compute, storage networking for essentially the same cost and increasingly, the same power, the same energy level," Vahdat said. "It won't be easy but through collaboration and co-design, we're going to get there."Read more of this story at Slashdot.
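The slide's two figures are consistent with each other: doubling every six months is two doublings per year, so four to five years means eight to ten doublings, and two to the tenth power is 1024. A trivial check in Python:

# Doubling every 6 months = 2 doublings per year; 4-5 years = 8-10 doublings.
print(2 ** 8, 2 ** 10)   # 256 1024 -- roughly the "next 1000x" at the five-year end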
An anonymous reader quotes a report from Ars Technica: The US crackdown on chip exports to China has continued with the arrests of four people accused of a conspiracy to illegally export Nvidia chips. Two US citizens and two nationals of the People's Republic of China (PRC), all of whom live in the US, were charged in an indictment (PDF) unsealed on Wednesday in US District Court for the Middle District of Florida. The indictment alleges a scheme to send Nvidia "GPUs to China by falsifying paperwork, creating fake contracts, and misleading US authorities," John Eisenberg, assistant attorney general for the Justice Department's National Security Division, said in a press release yesterday. The four arrestees are Hon Ning Ho (aka Mathew Ho), a US citizen who was born in Hong Kong and lives in Tampa, Florida; Brian Curtis Raymond, a US citizen who lives in Huntsville, Alabama; Cham Li (aka Tony Li), a PRC national who lives in San Leandro, California; and Jing Chen (aka Harry Chen), a PRC national who lives in Tampa on an F-1 non-immigrant student visa. The suspects face a raft of charges for conspiracy to violate the Export Control Reform Act of 2018, smuggling, and money laundering. They could serve many decades in prison if convicted and given the maximum sentences and forfeit their financial gains. The indictment says that Chinese companies paid the conspirators nearly $3.9 million. One of the suspects was briefly the CTO of Corvex, a Virginia-based AI cloud computing company that is planning to go public. Corvex told CNBC yesterday that it "had no part in the activities cited in the Department of Justice's indictment," and that "the person in question is not an employee of Corvex. Previously a consultant to the company, he was transitioning into an employee role but that offer has been rescinded."Read more of this story at Slashdot.
British soldiers are using computer games such as Call of Duty to sharpen their "war-fighting readiness," an Army chief has said. From a report: General Sir Tom Copinger-Symes, the deputy commander of Cyber and Specialist Operations Command, said the war in Ukraine, where remote-operated drones have become crucial on the battlefield, proved the worth of having soldiers skilled in video gaming. The Ministry of Defence on Friday announced the launch of the International Defence Esports Games (IDEG), a video gaming tournament that will pit the best of Britain's "future cyber warriors" against military teams from 40 other countries.Read more of this story at Slashdot.
The Japanese government said that the world's biggest nuclear plant would restart operations. Semafor: The Kashiwazaki-Kariwa site closed in 2012, as Japan -- which previously generated 30% of its electricity from nuclear power -- shuttered most of its fleet in the wake of the Fukushima meltdown. But like much of the world, it is looking once again to nuclear power for reliable, low-carbon energy, especially in the face of high gas and oil prices following Russia's invasion of Ukraine. It has restarted 14 out of 54 plants and announced plans for a first new reactor since the disaster.Read more of this story at Slashdot.
Google confirmed in a statement Friday that hackers have stolen the Salesforce-stored data of more than 200 companies in a large-scale supply chain hack. TechCrunch reports: On Thursday, Salesforce disclosed a breach of "certain customers' Salesforce data" -- without naming affected companies -- that was stolen via apps published by Gainsight, which provides a customer support platform to other companies. In a statement, Austin Larsen, the principal threat analyst of Google Threat Intelligence Group, said that the company "is aware of more than 200 potentially affected Salesforce instances." After Salesforce announced the breach, the notorious and somewhat-nebulous hacking group known as Scattered Lapsus$ Hunters, which includes the ShinyHunters gang, claimed responsibility for the hacks in a Telegram channel, which TechCrunch has seen.Read more of this story at Slashdot.
Microsoft has acknowledged in a support article that major Windows 11 core features including the Start Menu, Taskbar, File Explorer and System Settings break after applying monthly cumulative updates released on or after July 2025. The problems stem from XAML component issues that affect updates beginning with July's Patch Tuesday release (KB5062553). The failures occur during first-time user logins after cumulative updates are applied and on non-persistent OS installations like virtual desktop infrastructure setups. Microsoft lists Explorer.exe crashes, shellhost.exe crashes, StartMenuExperienceHost failures and System Settings that silently refuse to launch among the symptoms. The company provided PowerShell commands and batch scripts as temporary workarounds that re-register the affected packages. Both Windows 11 versions 24H2 and 25H2 share the same codebase and are affected. Microsoft said it is working on a fix but did not provide a timeline.Read more of this story at Slashdot.
Thunderbird Pro has moved its Thundermail email service into production testing as the open-source email client prepares an Early Bird beta launch of its subscription bundle at $9 per month, which will include email hosting, encrypted file sharing through Send, and scheduling via Appointment. Internal team members are now testing Thundermail accounts, and the new Thunderbird Pro add-on automatically adds a Thundermail account for users who sign up through it. The project migrated its data hosting from the Americas to Germany and the EU. Appointment received a major visual redesign that is being applied across all three services, while Send completed an external security review and moved from its standalone add-on into the unified Thunderbird Pro add-on. The new website at tb.pro is live for signups and account management.Read more of this story at Slashdot.
Adam Marshall spent more than a decade developing Kingdoms of the Dump while working as a custodian at a school in suburban Philadelphia, cleaning floors and hauling trash bags from 3 PM to 11 PM before coming home to work on his turn-based role-playing game until 5 or 6 AM. The game, which Bloomberg has called "one of the year's most charming RPGs," came out on Tuesday after Marshall and his childhood friend Matt Loiseau -- also a janitor -- built it using RPG Maker alongside a small team of hobbyists who mostly worked for free. The pair launched a Kickstarter campaign in 2019 that raised $76,560, but the pandemic disrupted their plans and forced them to lose contractors and rethink their approach. Marshall maintained this schedule for five years straight before quitting his custodial job last year to finish the game full-time. Kingdoms of the Dump has sold about 7,000 copies since its release. The game stars a walking trashcan named Dustin Binsley who adventures through landfills and sewers in a world made entirely of garbage.Read more of this story at Slashdot.
AI nutrition tracking features in popular fitness apps are producing wildly inaccurate calorie and macro counts despite promises to simplify food logging through automated photo analysis. The Verge tested AI-powered nutrition tools in Ladder, Oura Advisor, January and MyFitnessPal. Ladder's AI estimated the outlet's carefully measured 355-calorie breakfast at 780 calories and got the macro breakdown wrong even after the reviewer manually edited entries to include exact brands and amounts. Oura Advisor routinely mistook matcha protein shakes for green smoothies. January misidentified barbecue sauce as teriyaki sauce and failed to detect mushrooms in a chicken dish. None of the apps could identify healthier ingredient swaps or accurately log ethnic foods. Oura classified a mix of edamame, quinoa and brown rice as mashed potatoes and white rice. Ladder logged dal makhani curry as chicken soup. The AI features require extensive manual corrections that negate any time savings from automated logging, the publication concluded in its scathing review.Read more of this story at Slashdot.
Amazon's 14,000-plus layoffs announced last month touched almost every piece of the company's sprawling business, from cloud computing and devices to advertising, retail and grocery stores. But one job category bore the brunt of cuts more than others: engineers. CNBC: Documents filed in New York, California, New Jersey and Amazon's home state of Washington showed that nearly 40% of the more than 4,700 job cuts in those states were engineering roles. The data was reported by Amazon in Worker Adjustment and Retraining Notification, or WARN, filings to state agencies. The figures represent a segment of the total layoffs announced in October. Not all data was immediately available because of differences in state WARN reporting requirements.Read more of this story at Slashdot.
Meta is venturing into the complex world of electricity trading, betting it can accelerate the construction of new US power plants that are vital to its AI ambitions. From a report: The foray into power trading comes after Meta heard from investors and plant developers that too few power buyers were willing to make the early, long-term commitments required to spur investment, according to Urvi Parekh, the company's head of global energy. Trading electricity will give the company the flexibility to enter more of those longer contracts. Plant developers "want to know that the consumers of power are willing to put skin in the game," Parekh said in an interview. "Without Meta taking a more active voice in the need to expand the amount of power that's on the system, it's not happening as quickly as we would like."Read more of this story at Slashdot.
An anonymous reader shares a report: Microsoft is upgrading its Advanced Paste tool in PowerToys for Windows 11, allowing you to use an on-device AI model to power some of its features. With the 0.96 update, you can route requests through Microsoft's Foundry Local tool or the open-source Ollama, both of which run AI models on your device's neural processing unit (NPU) instead of connecting to the cloud. That means you won't need to purchase API credits to perform certain actions, like having AI translate or summarize the text copied to your clipboard. Plus, you can keep your data on your device.Read more of this story at Slashdot.
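As a rough illustration of what routing a paste action through a local model looks like, the sketch below sends clipboard-style text to Ollama's local HTTP API and reads back a summary. This is not PowerToys' own code: it assumes Ollama is running on its default port 11434 and that a model (here "llama3.2", an arbitrary choice) has already been pulled.

import json
import urllib.request

def summarize_locally(text, model="llama3.2"):
    # POST to the local Ollama server; no cloud endpoint or API credits involved.
    payload = json.dumps({
        "model": model,
        "prompt": f"Summarize the following clipboard text in two sentences:\n\n{text}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(summarize_locally("Long meeting notes pasted from the clipboard..."))

Whether the add-on talks to Foundry Local or Ollama, the appeal is the same: the text never leaves the machine and there are no per-request charges.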
OpenAI CEO Sam Altman told colleagues last month that Google's recent progress in AI could "create some temporary economic headwinds for our company," though he added that OpenAI would emerge ahead, The Information reports [non-paywalled source]. From the report: After OpenAI researchers heard that Google had created a new AI that appears to have leapfrogged OpenAI's in the way it was developed, Altman said in the memo that "we know we have some work to do but we are catching up fast." Still, he cautioned employees that "I expect the vibes out there to be rough for a bit."Read more of this story at Slashdot.
An anonymous reader shares a report: "In the 2024-2025 school year, homeschooling continued to grow across the United States, increasing at an average rate of 5.4%," Angela Watson of the Johns Hopkins University School of Education's Homeschool Hub wrote earlier this month. "This is nearly three times the pre-pandemic homeschooling growth rate of around 2%." She added that more than a third of the states from which data is available report their highest homeschooling numbers ever, even exceeding the peaks reached when many public and private schools were closed during the pandemic. After COVID-19 public health measures were suspended, there was a brief drop in homeschooling as parents and families returned to old habits. That didn't last long. Homeschooling began surging again in the 2023-2024 school year, with that growth continuing last year. Based on numbers from 22 states (not all states have released data, and many don't track homeschoolers), four report declines in the ranks of homeschooled children -- Delaware, the District of Columbia, Hawaii, and Tennessee -- while the others report growth from around 1 percent (Florida and Louisiana) to as high as 21.5 percent (South Carolina). The latest figures likely underestimate growth in homeschooling since not all DIY families abide by registration requirements where they exist, and because families who use the portable funding available through increasingly popular Education Savings Accounts to pay for homeschooling costs are not counted as homeschoolers in several states, Florida included. As a result, adds Watson, "we consider these counts as the minimum number of homeschooled students in each state."Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: Some Dell and HP laptop owners have been befuddled by their machines' inability to play HEVC/H.265 content in web browsers, despite their machines' processors having integrated decoding support. Laptops with sixth-generation Intel Core and later processors have built-in hardware support for HEVC decoding and encoding. AMD has made laptop chips supporting the codec since 2015. However, both Dell and HP have disabled this feature on some of their popular business notebooks. HP discloses this in the data sheets for its affected laptops, which include the HP ProBook 460 G11 [PDF], ProBook 465 G11 [PDF], and EliteBook 665 G11 [PDF]. "Hardware acceleration for CODEC H.265/HEVC (High Efficiency Video Coding) is disabled on this platform," the note reads. Despite this notice, it can still be jarring to see a modern laptop's web browser eternally load videos that play easily in media players. HP and Dell didn't explain why the companies disabled HEVC hardware decoding on their laptops' processors. A statement from an HP spokesperson said: "In 2024, HP disabled the HEVC (H.265) codec hardware on select devices, including the 600 Series G11, 400 Series G11, and 200 Series G9 products. Customers requiring the ability to encode or decode HEVC content on one of the impacted models can utilize licensed third-party software solutions that include HEVC support. Check with your preferred video player for HEVC software support." Dell's media relations team shared a similar statement: "HEVC video playback is available on Dell's premium systems and in select standard models equipped with hardware or software, such as integrated 4K displays, discrete graphics cards, Dolby Vision, or Cyberlink BluRay software. On other standard and base systems, HEVC playback is not included, but users can access HEVC content by purchasing an affordable third-party app from the Microsoft Store. For the best experience with high-resolution content, customers are encouraged to select systems designed for 4K or high-performance needs."Read more of this story at Slashdot.
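One way to check, independently of the browser, which HEVC decode paths a machine exposes is to ask an ffmpeg build which decoders it ships: names like hevc_qsv (Intel Quick Sync) or hevc_cuvid (Nvidia) indicate hardware-backed decoders, while plain hevc is the software fallback. This is only a diagnostic sketch assuming ffmpeg is installed and on the PATH; it shows what the build supports, not how Dell, HP, or browser vendors gate the feature.

import subprocess

# List ffmpeg's registered decoders and keep the HEVC-related entries.
out = subprocess.run(["ffmpeg", "-hide_banner", "-decoders"],
                     capture_output=True, text=True).stdout
hevc_lines = [line for line in out.splitlines() if "hevc" in line]
print("\n".join(hevc_lines) or "No HEVC decoders found in this ffmpeg build")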
fahrbot-bot shares a report from Phys.org: Physicists from Swansea University have played the leading role in a scientific breakthrough at CERN, developing an innovative technique that increases the antihydrogen trapping rate by a factor of ten. The advancement, achieved as part of the international Antihydrogen Laser Physics Apparatus (ALPHA) collaboration, has been published in Nature Communications and could help answer one of the biggest questions in physics: Why is there such a large imbalance between matter and antimatter? According to the Big Bang theory, equal amounts were created at the beginning of the universe, so why is the world around us made almost entirely of matter? Antihydrogen is the "mirror version" of hydrogen, made from an antiproton and a positron. Trapping and studying it helps scientists explore how antimatter behaves, and whether it follows the same rules as matter. Producing and trapping antihydrogen is an extremely complicated process. Previous methods took 24 hours to trap just 2,000 atoms, limiting the scope of experiments at ALPHA. The Swansea-led team has changed that. Using laser-cooled beryllium ions, the team has demonstrated that it is possible to cool positrons to less than 10 Kelvin (below -263C), significantly colder than the previous threshold of about 15 Kelvin. These cooler positrons dramatically boost the efficiency of antihydrogen production and trapping -- allowing a record 15,000 atoms to be trapped in less than seven hours.Read more of this story at Slashdot.
alternative_right shares a report from Phys.org: Inspired by moss's resilience, researchers sent moss sporophytes -- reproductive structures that encase spores -- to the most extreme environment yet: space. Their results, published in the journal iScience on November 20, show that more than 80% of the spores survived nine months outside of the International Space Station (ISS) and made it back to Earth still capable of reproducing, demonstrating for the first time that an early land plant can survive long-term exposure to the elements of space. [Lead author Tomomichi Fujita of Hokkaido University and his team] subjected Physcomitrium patens, a well-studied moss commonly known as spreading earthmoss, to a simulated space environment, including high levels of UV radiation, extremely high and low temperatures, and vacuum conditions. They tested three different structures from the moss -- protonemata, or juvenile moss; brood cells, or specialized stem cells that emerge under stress conditions; and sporophytes, or encapsulated spores -- to find out which had the best chance of surviving in space. The researchers found that UV radiation was the toughest element to survive, and the sporophytes were by far the most resilient of the three moss parts. None of the juvenile moss survived high UV levels or extreme temperatures. The brood cells had a higher rate of survival, but the encased spores exhibited ~1,000x more tolerance to UV radiation. The spores were also able to survive and germinate after being exposed to -196C for over a week, as well as after living in 55C heat for a month.Read more of this story at Slashdot.
An anonymous reader quotes a report from the Associated Press: They're cute, even cuddly, and promise learning and companionship -- but artificial intelligence toys are not safe for kids, according to children's and consumer advocacy groups urging parents not to buy them during the holiday season. These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI's ChatGPT, according to an advisory published Thursday by the children's advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators. "The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm," Fairplay said. AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children's relationships and resilience, the group said. "What's different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters," said Rachel Franz, director of Fairplay's Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots. A separate report Thursday by Common Sense Media and psychiatrists at Stanford University's medical school warned teenagers against using popular AI chatbots as therapists. Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years. They just weren't as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll that it said was recording and analyzing children's conversations. This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way. "Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products," Franz said. Last week, consumer advocates at U.S. PIRG called out the trend of buying AI toys in its annual "Trouble in Toyland" report. This year, the organization tested four toys that use AI chatbots. "We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the report said.Read more of this story at Slashdot.
An Ohio IT contractor pleaded guilty to breaking into his former employer's network after being fired, impersonating another worker and using a PowerShell script to reset 2,500 passwords -- an act that locked out thousands of employees and caused more than $862,000 in damage. He faces up to 10 years in prison. The Register reports: Maxwell Schultz, 35, impersonated another contractor to gain access to the company's network after his credentials were revoked. Announcing the news, US attorney Nicholas J. Ganjei did not specify the company in question, which is typical in these malicious insider cases, although local media reported it to be Houston-based Waste Management. The attack took place on May 14, 2021, and saw Schultz use the credentials to reset approximately 2,500 passwords at the affected organization. This meant thousands of employees and contractors across the US were unable to access the company network. Schultz admitted to running a PowerShell script to reset the passwords, searching for ways to delete system logs to cover his tracks -- in some cases succeeding -- and clearing PowerShell window events, according to the Department of Justice. Prosecutors said the attack caused more than $862,000 worth of damage related to employee downtime, a disrupted customer service function, and costs related to the remediation of the intrusion. Schultz is set to be sentenced on Jan 30, 2026, and faces up to ten years in prison and a potential maximum fine of $250,000.Read more of this story at Slashdot.
IBM and Cisco plan to link quantum computers over long distances by the early 2030s, "with the goal of demonstrating the concept is workable by the end of 2030," reports Reuters. "The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories." From the report: The challenge begins with a problem: Quantum computers like IBM's sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary "qubits" -- the fundamental unit of information in a quantum computer -- into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are "flying" qubits that travel as microwaves. But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber-optic cables. The technology for that transformation -- called a microwave-optical transducer -- will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others. Along the way, Cisco and IBM will also publish open-source software to weave all the parts together.Read more of this story at Slashdot.