Nvidia CEO Jensen Huang "said one of his biggest takeaways from 2025 was 'the battle of narratives' over the future of AI development between those who see doom on the horizon and the optimists," reports Business Insider. On a recent episode of the "No Priors" podcast, Huang acknowledged that "it's too simplistic" to entirely dismiss either side. But "I think we've done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative." "It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's not helpful to the governments..." [H]e cited concerns about "regulatory capture," arguing that no company should approach governments to request more regulation. "Their intentions are clearly deeply conflicted, and their intentions are clearly not completely in the best interest of society," he said. "I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves..." "When 90% of the messaging is all around the end of the world and the pessimism, and I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society," he said. Elsewhere in the podcast, Huang argues that the AI bubble is a myth. Business Insider adds that "a spokesperson for Nvidia declined to elaborate on Huang's remarks." Thanks to Slashdot reader joshuark for sharing the article.
"The clock is ticking" on the Hubble Space Telescope, writes the space news site Daily Galaxy, citing estimates from the unofficial "Hubble Reentry Tracker" site (which uses orbital data from the site space-track.org, created by tech integrator SAIC):While Hubble was initially launched into low Earth orbit at an altitude of around 360 miles, it has since descended to approximately 326 miles, and it continues to fall... "The solar flux levels are currently longer in duration and more elevated than previously anticipated, resulting in an earlier reentry forecast for the Hubble Space Telescope if no reboost mission is conducted," Hubble Reentry Trackersays the Hubble Reentry Tracker... ["Hubble has been reboosted three times in its history," the site points out, "all by servicing missions using the Space Shuttle."] NASA partnered with SpaceX in 2022 to explore the feasibility of raising Hubble to its original altitude of 373 miles. Such an adjustment would have bought Hubble a few more years in orbit. However, the future of this plan remains uncertain, as NASA has not made any official announcements to move forward with it... Solar flux levels, which determine atmospheric drag, have increased in recent years, accelerating the telescope's decline. This change in solar behavior means that the possibility of Hubble reentering Earth's atmosphere in the next five to six years is quite high if no corrective action is taken. ["But it is difficult to estimate this value due to the variability of future solar flux," the site cautions. "In the best case, Hubble may not reenter for 15 more years, around 2040. In the worst case, it could reenter in 4 years..."]Once Hubble reaches an altitude of 248 miles, it is expected that it will have less than a year before reentry... While Hubble's end may be near, there is a promising new project on the horizon: Lazuli, a privately-funded space telescope funded by former Google CEO Eric Schmidt. Lazuli aims to become the first privately-funded space telescope, and it could be the successor Hubble enthusiasts have been hoping for. Schmidt Sciences, the organization behind the telescope, plans to launch Lazuli by 2028, providing a more modern alternative to Hubble with a larger mirror and enhanced capabilities. The telescope's proposed design includes a 94-inch-wide mirror, which is a significant upgrade from Hubble's 94.5-inch mirror, and will feature updated instruments to capture more detailed data than ever before.Read more of this story at Slashdot.
Alphabet-owned Wing "is expanding its drone delivery service to an additional 150 Walmart stores across the U.S.," reports Axios: [T]he future is already here if you live in Dallas - where some Walmart customers order delivery by Wing three times a week. By the end of 2026, some 40 million Americans, or about 12 percent of the U.S. population, will be able to take advantage of the convenience, the companies claim... Once the items are picked and packed in a small cardboard basket, they are loaded onto a drone inside a fenced area in the Walmart parking lot. Drones fly autonomously to the designated address, with human pilots monitoring each flight from a central operations hub... For now, Wing deliveries are free. "The goal is to expose folks to the wonders of drone delivery," explains Wing's chief business officer, Heather Rivera... Over time, she said Wing expects delivery fees to be comparable to other delivery options, but faster and more convenient. Service began recently in Atlanta and Charlotte, and it's coming soon to Los Angeles, Houston, Cincinnati, St. Louis, Miami and other major U.S. cities to be announced later, according to the article. "By 2027, Walmart and Wing say they'll have a network of more than 270 drone delivery locations nationwide." Walmart also announced a new deal today with Google's Gemini, allowing customers to purchase Walmart products from within Gemini. (Walmart announced a similar deal for ChatGPT in October.) Slashdot reader BrianFagioli calls this "a defensive angle that Walmart does not quite say out loud." As AI models answer more questions directly, retailers risk losing customers before they ever hit a website. If Gemini recommends a product from someone else first, Walmart loses the sale before it starts. By planting itself inside the AI, Walmart keeps a seat at the table while the internet shifts under everyone's feet. Google clearly benefits too. Gemini gets a more functional purpose than just telling you how to boil pasta or summarize recipes. Now it can carry someone from the moment they wonder what they need to the moment the order is placed. That makes the assistant stickier and a bit more practical than generic chat. Walmart's incoming CEO John Furner says the company wants to shape this new pattern instead of being dragged into it later. Sundar Pichai calls Walmart an early partner in what he sees as a broader wave of agent-style commerce, where AI starts doing the errands people used to handle themselves. The article concludes "This partnership serves as a snapshot of where retail seems to be heading..."
Gentoo Linux posted its 2025 project retrospective this week. Some interesting details: Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo is currently considering and planning the migration of our repository mirrors and pull request contributions to Codeberg. Codeberg is a site based on Forgejo, maintained by a non-profit organization, and located in Berlin, Germany. Gentoo continues to host its own primary git, bugs, etc. infrastructure and has no plans to change that... We now publish weekly Gentoo images for Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft Store yet, that's something we intend to fix soon... Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing... We have added a bootstrap path for Rust from C++ using Mutabah's Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations. Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge. Other interesting statistics for the year:
- Gentoo currently consists of 31,663 ebuilds for 19,174 different packages.
- For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors.
- Gentoo each week builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up-to-date.
- The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123,942 to 112,927.
- The number of commits by external contributors was 9,396, now across 377 unique external authors.
Thanks to long-time Slashdot reader Heraklit for sharing the 2025 retrospective.
An anonymous reader shared this report from Engadget: If you received a bunch of password reset requests from Instagram recently, you're not alone. As reported by Malwarebytes, an antivirus software company, there was a data breach revealing the "sensitive information" of 17.5 million Instagram users. Malwarebytes added that the leak included Instagram usernames, physical addresses, phone numbers, email addresses and more. The company added that the "data is available for sale on the dark web and can be abused by cybercriminals." Malwarebytes noted in an email to its customers that it discovered the breach during its routine dark web scan and that it's tied to a potential incident related to an Instagram API exposure from 2024.
"China recently placed a supercritical carbon dioxide power generator into commercial operation," writes CleanTechnica, "and the announcement was widely framed as a technological breakthrough."The system, referred to as Chaotan One, is installed at a steel plant in Guizhou province in mountainous southwest China and is designed to recover industrial waste heat and convert it into electricity. Each unit is reported to be rated at roughly 15 MW, with public statements describing configurations totaling around 30 MW. Claimed efficiency improvements range from 20% to more than 30% higher heat to power conversion compared with conventional steam based waste heat recovery systems. These are big numbers, typical of claims for this type of generator, and they deserve serious attention. China doing something first, however, has never been a reliable indicator that the thing will prove durable, economic, or widely replicable. China is large enough to try almost everything. It routinely builds first of a kind systems precisely because it can afford to learn by doing, discarding what does not work and scaling what does. This approach is often described inside China as crossing the river by feeling for stones. It produces valuable learning, but it also produces many dead ends. The question raised by the supercritical CO2 deployment is not whether China is capable of building it, but whether the technology is likely to hold up under real operating conditions for long enough to justify broad adoption. A more skeptical reading is warranted because Western advocates of specific technologies routinely point to China's limited deployments as evidence that their preferred technologies are viable, when the scale of those deployments actually argues the opposite. China has built a single small modular reactor and a single experimental molten salt reactor, not fleets of them, despite having the capital, supply chains, and regulatory capacity to do so if they made economic sense... If small modular reactors or hydrogen transportation actually worked at scale and cost, China would already be building many more of them, and the fact that it is not should be taken seriously rather than pointing to very small numbers of trials compared to China's very large denominators... What is notably absent from publicly available information is detailed disclosure of materials, operating margins, impurity controls, and maintenance assumptions. This is not unusual for early commercial deployments in China. It does mean that external observers cannot independently assess long term durability claims. The article notes America's Energy Department funded a carbon dioxide turbine in Texas rated at roughly 10 MW electric that "reached initial power generation in 2024 after several years of construction and commissioning." But for both these efforts, the article warns that "early efficiency claims should be treated as provisional. A system that starts at 15 MW and delivers 13 MW after several years with rising maintenance costs is not a breakthrough. It is an expensive way to recover waste heat compared with mature steam based alternatives that already operate for decades with predictable degradation..." "If both the Chinese and U.S. installations run for five years without significant reductions in performance and without high maintenance costs, I will be surprised. In that case, it would be worth revisiting this assessment and potentially changing my mind." 
Thanks to long-time Slashdot reader cusco for sharing the article.
Remember that re-discovered computer tape with one of the earliest versions of Unix from the early 1970s? This week several local news outlets in Utah reported on the find, with KSL creating a video report with shots of the tape arriving at Silicon Valley's Computer History Museum, the closet where it was found, and even its handwritten label. The Salt Lake Tribune reports that the closet where it was found also contained "old cords from unknown sources and mountains of papers that had been dumped from a former professor's file cabinet, including old drawings from his kids and saved plane ticket stubs." (Their report also includes a photo of the University of Utah team that found the tape - the University's Flux Research Group.) Professor Robert Ricci believes only 20 copies were ever produced of the version of Unix on that tape: At the time, in the 1970s, Ricci estimates there would have been maybe two or three of those computers - called a PDP-11, or Programmed Data Processor - in Utah that could have run UNIX V4, including the one at the U. Having that technology is part of why he believes the U. got a copy of the rare software. The other part was the distinguished computing faculty at the school. The new UNIX operating system would've been announced at conferences in the early 1970s, and a U. professor at the time named Martin Newell frequently attended those because of his own recognized work in the field, Ricci said. In another box, stuffed under manila envelopes, [researcher Aleks] Maricq found a 1974 letter written to Newell from Ken Thompson at Bell Labs that said as soon as "a new batch comes from the printers, I will send you the system." Ricci and Maricq are unsure if the software was ever used. They reached out to Newell, who is now 72 and retired, as well as some of his former students. None of them recalled actually running it through the PDP-11... The late Jay Lepreau also worked at the U.'s computing department and created the Flux Research Group that Ricci, Maricq and [engineering research associate Jon] Duerig are now part of. Lepreau overlapped just barely with Newell's tenure. In 1978, Lepreau and a team at the U. worked with a group at the University of California, Berkeley. Together, they built their own clone of the UNIX operating system. They called it BSD, or Berkeley Software Distribution. Steve Jobs, the former CEO of Apple, worked with BSD, too, and it influenced his work. Ultimately, it was Lepreau who saved the 9-track tape with the UNIX system on it in his U. office. And he's why the university still has it today. "He seems to have found it and decided it was worth keeping," Ricci said... The U. will also get the tape back from the museum. Maricq said it will likely be displayed in the university's new engineering building that's set to open in January 2027. That's why, the research associate said, he was cleaning out the storage room to begin with - to try to prepare for the move. He was mostly just excited to see the floor again. "I thought we'd find some old stuff, but I didn't think it'd be anything like this," he said. And Maricq still has boxes to go through, including more believed to be from Lepreau's office. Local news station KMYU captured the thoughts of some of the University researchers who found the tape: "When you see the very first beginnings of something, and you go from seed to sapling, that's what we saw here," [engineering research associate Jon] Duerig said. "We see this thing in the moment of flux.
We see the signs of all the things changing - of all the things developing that we now see today." Duerig also gave this comment to local news station KSL. "The coolest thing is that anybody, anywhere in the world can now access this, right? People can go on the Internet Archive and download the raw tape file and simulate running it," Duerig said. "People have posted browsable directory trees of the whole thing." One of the museum's directors said the tape's recovery marked a big day for the museum. "One of the things that was pretty exciting to us is just that there is this huge community of people around the world who were excited to jump on the opportunity to look at this piece of history," Ricci said. "And it was really cool that we were able to share that." Duerig said while there weren't many comments or footnotes from the programmers of that time, they did discover more unexpected content having to do with Bell Labs on the tape. "There were survey results of them actually asking survey questions of their employees at these operator centers," he said. Thanks to long-time Slashdot reader walterbyrd for sharing the news.
Scifi author/tech activist Cory Doctorow has decried the "enshittification" of our technologies to extract more profit. But Saturday he also described what could be "the beginning of the end for enshittification" in a new article for the Guardian - "our chance to make tech good again". There is only one reason the world isn't bursting with wildly profitable products and projects that disenshittify the US's defective products: its (former) trading partners were bullied into passing an "anti-circumvention" law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer)... Post-Brexit, the UK is uniquely able to seize this moment. Unlike our European cousins, we needn't wait for the copyright directive to be repealed before we can strike article 6 off our own law books and thereby salvage something good out of Brexit... Until we repeal the anti-circumvention law, we can't reverse-engineer the US's cloud software, whether it's a database, a word processor or a tractor, in order to swap out proprietary, American code for robust, open, auditable alternatives that will safeguard our digital sovereignty. The same goes for any technology tethered to servers operated by any government that might have interests adverse to ours - say, the solar inverters and batteries we buy from China. This is the state of play at the dawn of 2026. The digital rights movement has two powerful potential coalition partners in the fight to reclaim the right of people to change how their devices work, to claw back privacy and a fair deal from tech: investors and national security hawks. Admittedly, the door is only open a crack, but it's been locked tight since the turn of the century. When it comes to a better technology future, "open a crack" is the most exciting proposition I've heard in decades. Thanks to Slashdot reader Bruce66423 for sharing the article.
For a quarter century, the TIOBE Index has attempted to rank the popularity of programming languages by the number of search engine results they bring up - and this week they had an announcement. Over the last year the language showing the largest increase in its share of TIOBE's results was C#. TIOBE founder/CEO Paul Jansen looks back at how C# evolved: From a language-design perspective, C# has often been an early adopter of new trends among mainstream languages. At the same time, it successfully made two major paradigm shifts: from Windows-only to cross-platform, and from Microsoft-owned to open source. C# has consistently evolved at the right moment. For many years now, there has been a direct battle between Java and C# for dominance in the business software market. I always assumed Java would eventually prevail, but after all this time the contest remains undecided. It is an open question whether Java - with its verbose, boilerplate-heavy style and Oracle ownership - can continue to keep C# at bay. While C# remains stuck in the same #5 position it was in a year ago, its share of TIOBE's results rose 2.94% - the largest increase of the 100 languages in their rankings. But TIOBE's CEO notes that his rankings for the top 10 highest-scoring languages delivered "some interesting movements" in 2025: C and C++ swapped positions. [C rose to the #2 position - behind Python - while C++ dropped from #2 to the #4 rank that C held in January of 2025]. Although C++ is evolving faster than ever, some of its more radical changes - such as the modules concept - have yet to see widespread industry adoption. Meanwhile, C remains simple, fast, and extremely well suited to the ever-growing market of small embedded systems. Even Rust has struggled to penetrate this space, despite reaching an all-time high of position #13 this month. So who were the other winners of 2025, besides C#? Perl made a surprising comeback, jumping from position #32 to #11 and re-entering the top 20. Another language returning to the top 10 is R, driven largely by continued growth in data science and statistical computing. Of course, where there are winners, there are also losers. Go appears to have permanently lost its place in the top 10 during 2025. The same seems true for Ruby, which fell out of the top 20 and is unlikely to return anytime soon. What can we expect from 2026? I have a long history of making incorrect predictions, but I suspect that TypeScript will finally break into the top 20. Additionally, Zig, which climbed from position #61 to #42 in 2025, looks like a strong candidate to enter the TIOBE top 30. Here's how TIOBE ranked the 10 most popular programming languages at the end of 2025: 1. Python, 2. C, 3. Java, 4. C++, 5. C#, 6. JavaScript, 7. Visual Basic, 8. SQL, 9. Delphi/Object Pascal, 10. R.
"We will make the new algorithm...open source in 7 days," Elon Musk posted Saturday on X.com. Musk says this is "including all code used to determine what organic and advertising posts are recommended to users," and "This will be repeated every 4 weeks, with comprehensive developer notes, to help you understand what changed." Some context from Engadget:Musk has been making promises of open-sourcing the algorithm since his takeover of Twitter, and in 2023 published the code for the site's "For You" feed on GitHub. But the code wasn't all that revealing, leaving out key details, according to analyses at the time. And it hasn't been kept up to date. Bloomberg also reported on Saturday's announcement:The billionaire didn't say why X was making its algorithm open source. He and the company have clashed several times with regulators over content being shown to users. Some X users had previously complained that they were receiving fewer posts on the social media platform from people they follow. In October, Musk confirmed in a post on X that the company had found a "significant bug" in the platform's "For You" algorithm and pledged a fix. The company has also been working to incorporate more artificial intelligence into its recommendation algorithm for X, using Grok, Musk's artificial intelligence chatbot... In September, Musk wrote that the goal was for X's recommendation engine to "be purely AI" and that the company would share its open source algorithm about every two weeks. "To the degree that people are seeing improvements in their feed, it is not due to the actions of specific individuals changing heuristics, but rather increasing use of Grok and other AI tools," Musk wrote in October. The company was working to have all of the more than 100 million daily posts published to X evaluated by Grok, which would then offer individual users the posts most likely to interest them, Musk wrote. "This will profoundly improve the quality of your feed." He added that the company was planning to roll out the new features by November.Read more of this story at Slashdot.
An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges." Phys.org publishes the announcement from Sandia National Lab: In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs - the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond... "We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task - like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..." Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now - 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
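For a sense of what "solving a PDE" involves, here is a minimal conventional solver for the 1D heat equation, a textbook member of the family these machines target. This is ordinary finite-difference code shown purely for intuition about the workload; it is not the spiking-network algorithm from the paper:

```python
# Standard explicit finite-difference solver for u_t = D * u_xx (1D heat
# equation). Purely illustrative; NOT the Sandia neuromorphic algorithm.
import numpy as np

D, L, N = 1.0, 1.0, 100          # diffusivity, domain length, grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / D             # time step inside the stability limit (0.5)
u = np.zeros(N)
u[N // 2] = 1.0                  # initial heat spike in the middle

for _ in range(500):
    # second central difference approximates u_xx; endpoints stay at zero
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak value after diffusion: {u.max():.4f}")
```

Production simulations run this kind of update over vastly larger 2D and 3D grids for millions of steps, which is where an energy-efficient neuromorphic substrate would pay off.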
Is there a trend? This week four different articles appeared on various tech-news sites with an author bragging about switching to Linux. "Greetings from the year of Linux on my desktop," quipped the Verge's senior reviews editor, who finally "got fed up and said screw it, I'm installing Linux." They switched to CachyOS - just like this writer for the videogame magazine Escapist: I've had a fantastic time gaming on Linux. Valve's Windows-to-Linux translation layer, Proton, and even CachyOS' bundled fork have been working just fine. Of course, it's not perfect, and there's been a couple of instances where I've had to problem-solve something, but most of the time, any issues gaming on Linux have been fixed by swapping to another version of Proton. If you're deep in online games like Fortnite, Call of Duty, Destiny 2, GTAV or Battlefield 6, it might not be the best option to switch. These games feature anti-cheats that look for versions of Windows or even the heart of the OS, the kernel, to verify the system isn't going to mess up someone's game... CachyOS is thankfully pre-packed with Nvidia drivers, meaning I didn't have to dance around trying to find them... Certain titles will perform worse than their counterparts, simply due to how the bods at Nvidia are handling the drivers for Linux. This said, I'm still not complaining when I'm pushing nearly 144fps or more in newer games. The performance hit is there, but it's nowhere near enough to stave off even an attempt to mess about with Linux. Do you know how bizarre it is to say it's "nice to have a taskbar again"? I use macOS daily for a lot of my work, which uses a design baked back in the 1990s through NeXT. Seeing just a normal taskbar that doesn't try to advertise to me or crash because an update killed it for some reason is fantastic. That's how bad it is out there right now for Windows. "I run Artix, by the way," joked a senior tech writer at Notebookcheck (adding "There. That's out of the way..."): I dual-booted a Linux partition for a few weeks. After a Windows update (that I didn't choose to do) wiped that partition and, consequently, the Linux installation, I decided to go whole-hog: I deleted Windows 11 and used the entire drive for Linux... Artix differs from Arch in that it does not use SystemD as its init system. I won't go down the rabbit hole of init systems here, but suffice it to say that Artix boots lightning quick (less than 10 seconds from a cold power on) and is pretty light on system resources. However, it didn't come "fully assembled..." The biggest problem I ran into after installing Artix on the [MacBook] Air was the lack of wireless drivers, which meant that WiFi did not work out of the box. The resolution was simple: I needed to download the appropriate WiFi drivers (Broadcom drivers, to be exact) from Artix's main repository. This is a straightforward process handled by a single command in the Terminal, but it requires an internet connection... which my laptop did not have. Ultimately, I connected a USB-to-Ethernet adapter, plugged the laptop directly into my router, and installed the WiFi drivers that way. The whole process took about 10 minutes, but it was annoying nonetheless. For the record, my desktop (an AMD Ryzen 7 6800H-based system) worked flawlessly out of the box, even with my second monitor's uncommon resolution (1680x1050, vertical orientation). I did run into issues with installing some packages on both machines.
Trying to install the KDE desktop environment (essentially a different GUI for the main OS) resulted in strange artifacts that put white text on white backgrounds in the menus, and every fix I tried failed to correct this bug. After reverting to XFCE4 (the default desktop environment for my Artix install), the WiFi signal indicator in the taskbar disappeared. This led to me having to uninstall a network manager installed by KDE and re-link the default network manager to the runit services startup folder. If that sentence sounds confusing, the process was much more so. It has been resolved, and I have a WiFi indicator that lets me select wireless networks again, but only after about 45 minutes of reading manuals and forum posts. Other issues are inherent to Linux. Not all games on Steam that are deemed Linux compatible actually are. Civilization III Complete is a good example: launching the game results in the map turning completely black. (Running the game through an application called Lutris resolved this issue.) Not all the software I used on Windows is available in Linux, such as Greenshot for screenshots or uMark for watermarking photos in bulk. There are alternatives to these, but they don't have the same features, or they require me to relearn workflows... Linux is not a "one and done" silver bullet to solve all your computer issues. It is like any other operating system in that it will require users to learn its methods and quirks. Admittedly, it does require a little bit more technical knowledge to dive into the nitty-gritty of the OS and fully unlock its potential, but many distributions (such as Mint) are ready to go out of the box and may never require someone to open a command line... [T]he issues I ran into on Linux were, for the most part, my fault. On Windows or macOS, most problems I run into are caused by a restriction or bug in the OS. Linux gives me the freedom to break my machine and fix it again, teaching me along the way. With Microsoft's refusal (either from pride or ignorance) to improve (or at least not crapify) Windows 11 despite loud user outrage, switching to Linux is becoming a popular option. It's one you should consider doing, and if you've been thinking about it for any length of time, it's time to dive in. And tinkerer Kevin Wammer switched from macOS to Linux, saying "Linux has come a long way" after more than 30 years - but "Windows still sucks..."
A founder of Twitter and a founder of Pinterest are now working on "social media for people who hate social media," writes a Washington Post columnist. "When I heard that this platform would harness AI to help us live more meaningful lives, I wanted to know more..." Their bid for redemption is West Co. - the Workshop for Emotional and Spiritual Technology Corporation - and the platform they're testing is called Tangle, a "purpose discovery tool" that uses AI to help users define their life purposes, then encourages them to set intentions toward achieving those purposes, reminds them periodically and builds a community of supporters to encourage steps toward meeting those intentions. "A lot of people, myself included, have been on autopilot," Stone said. "If all goes well, we'll introduce a lot of people to the concept of turning off autopilot." But will all go well? The entrepreneurs have been at it for two years, and they've scrapped three iterations before even testing them. They still don't have a revenue model. "This is a really hard thing to do," Stone admitted. "If we were a traditional start-up, we would have probably been folded by now." But the two men, with a combined net worth of at least hundreds of millions, and possibly billions, had the luxury of self-funding for a year, and now they have $29 million in seed funding led by Spark Capital... [T]he project revolves around training existing AI models in "what good intentions and helpful purposes look like," explained Long Cheng, the founding designer. When you join Tangle, which is invitation-only until this spring at the earliest, the AI peruses your calendar, examines your photos, asks you questions and then produces "threads," or categories that define your life purpose. You're free to accept, reject or change the suggestions. It then encourages you to make "intentions" toward achieving your threads, and to add "reflections" when you experience something meaningful in your life. Users then receive encouragement from friends, or "supporters." A few of the "threads" on Tangle are about personal satisfaction (traveler, connoisseur), but the vast majority involve causes greater than self: family (partner, parent, sibling), community (caregiver, connector, guardian), service (volunteer, advocate, healer) and spirituality (seeker, believer). Even the work-related threads (mentor, leader) suggest a higher purpose. The column includes this caveat: "I have no idea whether they will succeed. But as a columnist writing about how to keep our humanity in the 21st century, I believe it's important to focus on people who are at least trying..." "Quite possibly, West Co. and the various other enterprises trying to nudge technology in a more humane direction will find that it doesn't work socially or economically - they don't yet have a viable product, after all - but it would be a noble failure."
A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post. They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy."AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers. To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study... The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found. One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors - or no colors at all." The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.Read more of this story at Slashdot.
Amazon "has submitted plans for a large-format store near Chicago that would be larger than a Walmart Supercenter," reports CNBC:As part of the plans, Amazon has proposed building a one-story, 229,000-square-foot building [on a 35-acre lot] in Orland Park, Illinois, that would offer a range of products, such as groceries, household essentials and general merchandise, the city said on Saturday. By comparison, Walmart's U.S. Supercenters typically average 179,000 square feet... The Orland Park Plan Commission approved Amazon's proposal on Tuesday, and it will now proceed to a vote from the full village board. That meeting is scheduled for January 19. In a statement cited by CNBC, an Amazon spokesperson called it "a new concept that we think customers will be excited about."Read more of this story at Slashdot.
"Scientists in China have made a breakthrough with fusion energy that could finally overcome one of the most stubborn barriers to realising the next-generation energy source," reports the Independent:A team from the Chinese Academy of Sciences (CAS) said its experimental nuclear reactor, dubbed the 'artificial Sun', achieved a plasma density that was previously thought impossible... Through a new process called plasma-wall self organisation, the CAS researchers were able to keep the plasma stable at unprecedented density levels. By pushing plasma density well past long-standing empirical limits, the researchers said fusion ignition can be achieved with far higher energy outputs. "The findings suggest a practical and scalable pathway for extending density limits in tokamaks and next-generation burning plasma fusion devices," said Professor Ping Zhu from Huazhong University of Science and Technology, who so-led the research. Professor Zhu's team now plan to apply this new method on the EAST reactor to confirm that it will work under high-performance plasma conditions. The latest breakthrough was detailed in the journal Science Advances in a study titled 'Accessing the density-free regime with ECRH-assisted ohmic start-up on EAST'.Read more of this story at Slashdot.
This week Meta announced several new features for "Meta Ray-Ban Display" smart glasses:
- A new teleprompter feature for the smart glasses (arriving in a phased rollout)
- The ability to send messages on WhatsApp and Messenger by writing with your finger on any surface. (Available for those who sign up for an "early access" program.)
- "Pedestrian navigation" for 32 cities. ("The 28 cities we launched Meta Ray-Ban Display with, plus Denver, Las Vegas, Portland, and Salt Lake City," and with more cities coming soon.)
But they also warned Meta Ray-Ban Display "is a first-of-its-kind product with extremely limited inventory," saying they're delaying international expansion of sales due to inventory constraints - and also due to "unprecedented" demand in the U.S. CNBC reports: "Since launching last fall, we've seen an overwhelming amount of interest, and as a result, product waitlists now extend well into 2026," Meta wrote in a blog post. Due to "limited" inventory, the company said it will pause plans to launch in the U.K., France, Italy and Canada early this year and concentrate on U.S. orders as it reassesses international availability... Meta is one of several technology companies moving into the smart glasses market. Alphabet announced a $150 million partnership with Warby Parker in May, and ChatGPT maker OpenAI is reportedly working on AI glasses with former Apple design chief Jony Ive.
It will be the first medical evacuation from the International Space Station in its 25-year history. The Guardian reports: An astronaut in the orbital laboratory reportedly fell ill with a "serious" but undisclosed issue. Nasa also had to cancel its first spacewalk of the year... The agency did not identify the astronaut or the medical problem, citing patient privacy. "Because the astronaut is absolutely stable, this is not an emergent evacuation," [chief health and medical officer Dr. James] Polk said. "We're not immediately disembarking and getting the astronaut down, but it leaves that lingering risk and lingering question as to what that diagnosis is, and that means there is some lingering risk for that astronaut onboard." "SpaceX says its Dragon spacecraft at the International Space Station is ready to return its four Crew-11 astronauts home in an unprecedented medical evacuation on Jan. 14 and 15," reports Space.com: The SpaceX statement came on the heels of NASA's announcement that the Crew-11 astronauts were scheduled to undock from the space station on Jan. 14 and splash down off the coast of California early on Jan. 15. The Crew-11 Dragon spacecraft will return NASA astronauts Zena Cardman and Mike Fincke to Earth alongside Japanese astronaut Kimiya Yui and Russian cosmonaut Oleg Platanov... NASA officials opted for a "controlled medical evacuation" in order to provide the astronaut better treatment on the ground, NASA chief Jared Isaacman has said... Dr. James Polk, NASA's chief medical officer, has said the medical issue is not an injury to the astronaut afflicted, but rather something related to the prolonged exposure to weightlessness by astronauts living and working on the International Space Station. "It's mostly having a medical issue in the difficult areas of microgravity and the suite of hardware that we operate in," Polk said.
Yes, a federal judge blocked an attempt by Texas at an app store age-verification law. But this year Silicon Valley giants including Google and Apple "are expected to fight hard against similar legislation," reports Politico, "because of the vast legal liability it imposes on app stores and developers." In Texas, Utah and Louisiana, parent advocates have linked up with conservative "pro-family" groups to pass laws forcing mobile app stores to verify user ages and require parental sign-off. If those rules hold up in court, companies like Google and Apple, which run the two largest app stores, would face massive legal liability... California has taken a different approach, passing its own age-verification law last year that puts liability on device manufacturers instead of app stores. That model has been better received by the tech lobby, and is now competing with the app-based approach in states like Ohio. In Washington D.C., a GOP-led bill modeled on Texas' law is wending its way through Capitol Hill. And more states are expected to join the fray, including Michigan and South Carolina. Joel Thayer, president of the conservative Digital Progress Institute and a key architect of the Texas law, said states are only accelerating their push. He explicitly linked the age-verification debate to AI, arguing it's "terrifying" to think companies could build new AI products by scraping data from children's apps. Thayer also pointed to the Trump administration's recent executive order aimed at curbing state regulation of AI, saying it has galvanized lawmakers. "We're gonna see more states pushing this stuff," Thayer said. "What really put fuel in the fire is the AI moratorium for states. I think states have been reinvigorated to fight back on this." He told Politico that the issue will likely be decided by America's Supreme Court, which in June upheld Texas legislation requiring age verification for online content. Thayer said states need a ruling from America's highest court to "triangulate exactly what the eff is going on with the First Amendment in the tech world." "They're going to have to resolve the question at some point."
The Free Software Foundation's president Ian Kelling is also their senior systems administrator. This week he shared an example of how "the work we put in to making sure a program is free for us also makes it free for the rest of the world." During the COVID-19 pandemic, like everyone everywhere, the FSF increased its videoconferencing use, especially videoconferencing software that works in web browsers. We have experience hosting several different programs to accomplish this, and BigBlueButton was an important one for us for a while. It is a videoconferencing service which describes itself as a virtual classroom because of its many features designed for educational environments, such as a shared whiteboard... In BigBlueButton 2.2, the program used a freely licensed version of MongoDB, but it unintentionally picked up MongoDB's 2018 nonfree license change in versions 2.3 and 2.4. At the FSF, we noticed this [after a four-hour review] and raised the alarm with the BigBlueButton team in late 2020. In many cases of a developer changing to a nonfree license, free forks have won out, but in this case no one judged it worth the effort to maintain a fork of the final free MongoDB version. This was a very unfortunate case for existing users of MongoDB, including the FSF, who were then faced with a challenge of maintaining their freedom by either running old and unmaintained software or switching over to a different free program. Luckily, the free software world is not especially lacking in high quality database software, and there is also a wide array of free videoconferencing software. At the FSF, we decided to spend some effort to make sure MongoDB would no longer make BigBlueButton nonfree, to help other users of MongoDB and BigBlueButton. We think BigBlueButton is really useful for free software in schools, where it is incredibly important to have free software. On the tech team, especially when it comes to software running in a web browser, we are used to making modifications to better suit our needs. In the end, we didn't find a perfect solution, but we did find FerretDB to be a promising MongoDB alternative and assisted the developers of FerretDB to see what would be required for it to work in BigBlueButton. The BigBlueButton developers decided that some architectural-level changes for their 3.0 release would be the path for them to remove MongoDB. As of BigBlueButton 3.0, released in 2025, BigBlueButton is back to being entirely free software! As you can see, in the world of free software, trust can be tricky, and this is part of why organizations like the FSF are so important. Kelling notes he's part of a tech team of just two people responsible for "63 different services, platforms, and websites for the FSF staff, the GNU Project, other community projects, and the wider free software community..."
An anonymous reader quotes a report from CBC.ca: A company largely owned by the French and U.K. governments is pitching Canada on a roughly $250-million plan to provide the military with secure satellite broadband coverage in the Arctic, CBC News has learned. Eutelsat, a rival to tech billionaire Elon Musk's Starlink, already provides some services to the Canadian military, but wants to deepen the partnership as Canada looks to diversify defence contracts away from suppliers in the United States. A proposal for Canada's Department of National Defence to join a French Ministry of Defence initiative involving Eutelsat was apparently raised by French President Emmanuel Macron with Prime Minister Mark Carney on the sidelines of last year's G7 summit in Alberta. The prime minister's first question, according to Eutelsat and French defence officials, was how the proposal would affect the Telesat Corporation, a former Canadian Crown corporation that was privatized in the 1990s. Telesat is in the process of developing its Lightspeed system, a Low Earth Orbit (LEO) constellation of satellites for high-speed broadband. And in mid-December, the Liberal government announced it had established a strategic partnership with Telesat and MDA Space to develop the Canadian Armed Forces' military satellite communications (MILSATCOM) capabilities. A Eutelsat official said the company already has its own satellite network in place and running, along with Canadian partners, and has been providing support to the Canadian military deployed in Latvia. "What we can provide for Canada is what we call a sovereign capacity capability where Canada would actually own all of our capacity in the Far North or wherever they require it," said David van Dyke, the general manager for Canada at Eutelsat. "We also give them the ability to not be under the control of a singular individual who could decide to disconnect the service for political or other reasons."
Scientists are putting Einstein's claim that the speed of light is constant to the test. While researchers found no evidence that light's speed changes with energy, this null result dramatically tightens the constraints on quantum-gravity theories that predict even the tiniest violations. ScienceDaily reports: Special relativity rests on the principle that the laws of physics remain the same for all observers, regardless of how they are moving relative to one another. This idea is known as Lorentz invariance. Over time, Lorentz invariance became a foundational assumption in modern physics, especially within quantum theory. [...] One prediction shared by several Lorentz-invariance-violating quantum gravity models is that the speed of light may depend slightly on a photon's energy. Any such effect would have to be tiny to match existing experimental limits. However, it could become detectable at the highest photon energies, specifically in very-high-energy gamma rays. A research team led by former UAB student Merce Guerrero and current IEEC PhD student at the UAB Anna Campoy-Ordaz set out to test this idea using astrophysical observations. The team also included Robertus Potting from the University of Algarve and Markus Gaug, a lecturer in the Department of Physics at the UAB who is also affiliated with the IEEC. Their approach relies on the vast distances light travels across the universe. If photons of different energies are emitted at the same time from a distant source, even minuscule differences in their speeds could build up into measurable delays by the time they reach Earth. Using a new statistical technique, the researchers combined existing measurements of very-high-energy gamma rays to examine several Lorentz-invariance-violating parameters favored by theorists within the Standard Model Extension (SME). The goal was ambitious. They hoped to find evidence that Einstein's assumptions might break down under extreme conditions. Once again, Einstein's predictions held firm. The study did not detect any violation of Lorentz invariance. Even so, the results are significant. The new analysis improves previous limits by an order of magnitude, sharply narrowing where new physics could be hiding.
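The accumulation effect the researchers exploit is easy to estimate at back-of-envelope level. For a linear energy dependence, the expected delay is roughly dt = (E / E_QG) * (D / c). The sketch below uses the Planck energy as an assumed benchmark for the violation scale E_QG and ignores the cosmological redshift integration that the study's SME analysis handles properly:

```python
# Rough photon arrival delay under a linear Lorentz-invariance violation:
# dt ~ (E / E_QG) * (D / c). Benchmark values are assumptions, not study data.
E_PHOTON_GEV = 1_000.0     # a 1 TeV gamma ray
E_QG_GEV     = 1.22e19     # Planck energy, taken as the assumed LIV scale
D_METERS     = 3.086e25    # ~1 gigaparsec to a distant source (assumed)
C_M_PER_S    = 2.998e8     # speed of light

delay_s = (E_PHOTON_GEV / E_QG_GEV) * (D_METERS / C_M_PER_S)
print(f"accumulated delay: ~{delay_s:.1f} seconds")   # roughly 8 seconds
```

Even with the violation suppressed by the full Planck scale, a TeV photon crossing a gigaparsec would arrive seconds late, which is why gamma-ray timing is such a sensitive probe.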
Meta has signed long-term nuclear power deals totaling more than 6 gigawatts to fuel its data centers: "one from a startup, one from a smaller energy company, and one from a larger company that already operates several nuclear reactors in the U.S.," reports TechCrunch. From the report: Oklo and TerraPower, two companies developing small modular reactors (SMR), each signed agreements with Meta to build multiple reactors, while Vistra is selling capacity from its existing power plants. [...] The deals are the result of a request for proposals that Meta issued in December 2024, in which Meta sought partners that could add between 1 to 4 gigawatts of generating capacity by the early 2030s. Much of the new power will flow through the PJM interconnection, a grid which covers 13 Mid-Atlantic and Midwestern states and has become saturated with data centers. The 20-year agreement with Vistra will have the most immediate impact on Meta's energy needs. The tech company will buy a total of 2.1 gigawatts from two existing nuclear power plants, Perry and Davis-Besse in Ohio. As part of the deal, Vistra will also add capacity to those power plants and to its Beaver Valley power plant in Pennsylvania. Together, the upgrades will generate an additional 433 MW and are scheduled to come online in the early 2030s. Meta is also buying 1.2 gigawatts from young provider Oklo. Under its deal with Meta, Oklo is hoping to start supplying power to the grid as early as 2030. The SMR company went public via SPAC in 2023, and while Oklo has landed a large deal with data center operator Switch, it has struggled to get its reactor design approved by the Nuclear Regulatory Commission. If Oklo can deliver on its timeline, the new reactors would be built in Pike County, Ohio. The startup's Aurora Powerhouse reactors each produce 75 megawatts of electricity, and it will need to build more than a dozen to fulfill Meta's order. TerraPower is a startup co-founded by Bill Gates, and it is aiming to start sending electricity to Meta as early as 2032.
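The "more than a dozen" reactor count follows directly from the article's own numbers:

```python
# Reactor count implied by the Oklo deal: a 1.2 GW order at 75 MW per unit.
import math
order_mw, per_reactor_mw = 1200, 75
print(math.ceil(order_mw / per_reactor_mw))   # 16 Aurora Powerhouse units
```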
An anonymous reader quotes a report from Wired: [P]erhaps AI can, in fact, learn in a more human way - by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them. The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent's actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. "Once we have that it's kind of a way to reach superintelligence," [said Zilong Zheng, a researcher at BIGAI who worked on the project].
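In outline, the loop is: propose a task, attempt a solution, execute the code to obtain ground truth, then feed the outcome back as a reward for both roles. Here is a minimal sketch of that loop; `llm` is a hypothetical stand-in for the fine-tuned Qwen model, and the real AZR reward shaping and task curriculum are considerably more elaborate:

```python
# Propose-solve-verify sketch in the spirit of Absolute Zero Reasoner.
# The `llm` object is a hypothetical stand-in, not a real API.
import os, subprocess, sys, tempfile

def run_python(code: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Execute candidate code in a subprocess; return (success, stdout)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True,
                                text=True, timeout=timeout)
        return result.returncode == 0, result.stdout
    except subprocess.TimeoutExpired:
        return False, ""
    finally:
        os.unlink(path)

def training_step(llm):
    # 1. The model proposes a challenging but solvable task for itself.
    problem = llm("Propose a Python programming task with a checkable output.")
    # 2. The same model attempts a solution.
    solution = llm(f"Write a complete Python program that solves:\n{problem}")
    # 3. Running the code supplies ground truth, with no human labels.
    ok, _ = run_python(solution)
    # 4. Success or failure becomes the signal that refines both roles.
    llm.update(problem, solution, reward=1.0 if ok else -1.0)
```

Because execution is the verifier, this only works where answers can be checked mechanically, which is exactly the limitation the researchers note for domains beyond math and coding.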
Experts interviewed by NBC News warn that the rapid spread of AI-generated images and videos is accelerating an online trust breakdown, especially during fast-moving news events where context is scarce. From the report: President Donald Trump's Venezuela operation almost immediately spurred the spread of AI-generated images, old videos and altered photos across social media. On Wednesday, after an Immigration and Customs Enforcement officer fatally shot a woman in her car, many online circulated a fake, most likely AI-edited image of the scene that appears to be based on real video. Others used AI in attempts to digitally remove the mask of the ICE officer who shot her. The confusion around AI content comes as many social media platforms, which pay creators for engagement, have given users incentives to recycle old photos and videos to ramp up emotion around viral news moments. The amalgam of misinformation, experts say, is deepening the erosion of trust online -- especially when it mixes with authentic evidence. "As we start to worry about AI, it will likely, at least in the short term, undermine our trust default -- that is, that we believe communication until we have some reason to disbelieve," said Jeff Hancock, founding director of the Stanford Social Media Lab. "That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces." Though AI is the latest technology to spark concern about surging misinformation, similar trust breakdowns have cycled through history, from the mass production of propaganda after the printing press was invented in the 1400s to election misinformation in 2016. Before AI, there was Photoshop, and before Photoshop, there were analog image manipulation techniques. Fast-moving news events are where manipulated media have the biggest effect, because they fill in for the broad lack of information, Hancock said. "In terms of just looking at an image or a video, it will essentially become impossible to detect if it's fake. I think that we're getting close to that point, if we're not already there," said Hancock. "The old sort of AI literacy ideas of 'let's just look at the number of fingers' and things like that are likely to go away." Renee Hobbs, a professor of communication studies at the University of Rhode Island, added: "If constant doubt and anxiety about what to trust is the norm, then actually, disengagement is a logical response. It's a coping mechanism. And then when people stop caring about whether something's true or not, then the danger is not just deception, but actually it's worse than that. It's the whole collapse of even being motivated to seek truth."Read more of this story at Slashdot.
Intel CEO Lip-Bu Tan says the company is "going big time" into its 14A (1.4nm-class) process, signaling confidence in yields and hinting that it already has at least one external foundry customer. Tom's Hardware reports: Intel's 14A is expected to be production-ready in 2027, with early versions of the process design kit (PDK) coming to external customers early this year. To that end, it is good to hear Intel's upbeat comments about 14A. Also, Tan's phrasing 'the customer' could indicate that Intel has at least one external client for 14A, implying that Intel Foundry will produce 14A chips for Intel Products and at least one more buyer. The 14A production node will introduce Intel's 2nd-generation RibbonFET gate-all-around (GAA) transistors; a 2nd-generation backside power delivery network (BSPDN) called PowerDirect that will connect power directly to the source and drain of transistors, enabling better power delivery (e.g., reducing transient voltage droop or clock stretching) and refined power controls; and Turbo Cells that optimize critical timing paths using high-drive, double-height cells within dense standard cell libraries, boosting speed without major area or power compromises. Yet there is another aspect of Intel's 14A manufacturing process that is particularly important for the chipmaker: its usage by external customers. With 18A, the company has not managed to land a single major external client that demands decent volumes. While 18A will be used by Intel itself as well as by Microsoft and the U.S. Department of Defense, only Intel will consume significant volumes. For 14A, Intel hopes to land at least one more external customer with substantial volume requirements, as this will ensure that Intel recoups its investments in the development of such an advanced node.Read more of this story at Slashdot.
Microsoft is testing a new Windows policy that lets IT administrators uninstall Microsoft Copilot from managed devices. The change rolls out via Windows Insider builds and works through standard management tools like Intune and SCCM. BleepingComputer reports: The new policy will apply to devices where both the Microsoft 365 Copilot and Microsoft Copilot apps are installed, the Microsoft Copilot app was not installed by the user, and the Microsoft Copilot app has not been launched in the last 28 days. "Admins can now uninstall Microsoft Copilot for a user in a targeted way by enabling a new policy titled RemoveMicrosoftCopilotApp," the Windows Insider team said. "If this policy is enabled, the Microsoft Copilot app will be uninstalled, once. Users can still re-install if they choose to. This policy is available on Enterprise, Pro, and EDU SKUs. To enable this policy, open the Group policy editor and go to: User Configuration -> Administrative Templates -> Windows AI -> Remove Microsoft Copilot App."Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: Search engine optimization, or SEO, is a big business. While some SEO practices are useful, much of the day-to-day SEO wisdom you see online amounts to superstition. An increasingly popular approach geared toward LLMs called "content chunking" may fall into that category. In the latest installment of Google's Search Off the Record podcast, John Mueller and Danny Sullivan say that breaking content down into bite-sized chunks for LLMs like Gemini is a bad idea. You've probably seen websites engaging in content chunking and scratched your head, and for good reason -- this content isn't made for you. The idea is that if you split information into smaller paragraphs and sections, it is more likely to be ingested and cited by gen AI bots like Gemini. So you end up with short paragraphs, sometimes with just one or two sentences, and lots of subheads formatted like questions one might ask a chatbot. According to Google's Danny Sullivan, this is a misconception, and Google doesn't use such signals to improve ranking. "One of the things I keep seeing over and over in some of the advice and guidance and people are trying to figure out what do we do with the LLMs or whatever, is that turn your content into bite-sized chunks, because LLMs like things that are really bite size, right?" said Sullivan. "So... we don't want you to do that." The conversation, which begins around the podcast's 18-minute mark, goes on to illustrate the folly of jumping on the latest SEO trend. Sullivan notes that he has consulted engineers at Google before making this proclamation. Apparently, the best way to rank on Google continues to be creating content for humans rather than machines. That ensures long-term search exposure, because the behavior of human beings -- what they choose to click on -- is an important signal for Google.Read more of this story at Slashdot.
Longtime Slashdot reader chicksdaddy writes: CES, the Consumer Electronics Show, isn't just about shiny new gadgets. As AP reports, this year brought back the fifth annual Worst in Show anti-awards, calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards -- including Repair.org, iFixit, EFF, PIRG, Secure Repairs, and others -- put the spotlight on products that miss the point of innovation and make life worse for users. 2026 Worst in Show winners include:

Overall (and Repairability): Samsung's AI-packed Family Hub Fridge -- over-engineered, hard to fix, and trying to do everything but keep food cold.
Privacy: Amazon Ring AI -- expanding surveillance with features like facial recognition and mobile towers.
Security: Merach UltraTread treadmill -- an AI fitness coach that also hoovers up sensitive data with weak security guarantees, including a privacy policy that declares the company "cannot guarantee the security of your personal information" (!!).
Environmental Impact: Lollipop Star -- a single-use, music-playing electronic lollipop that epitomizes needless e-waste.
Enshittification: Bosch eBike Flow App -- pushing lock-in and digital restrictions that make gear worse over time.
"Who Asked For This?": Bosch Personal AI Barista -- a voice-assistant coffee maker that nobody really wanted.
People's Choice: Lepro Ami AI Companion -- an overhyped "soulmate" cam that creeps more than it comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window -- and the industry's watchdogs are calling them out.Read more of this story at Slashdot.
Valve has added the NTSYNC kernel driver to the SteamOS 3.7.20 beta, laying the groundwork for improved Windows game synchronization performance via Wine and Proton. Phoronix reports: For gearing up for that future Proton NTSYNC support, SteamOS 3.7.20 enables the NTSYNC kernel driver and loads the module by default. Most Linux distributions are at least already building the NTSYNC kernel module, though there have been different efforts on how to ensure it's loaded when needed. The presence of the NTSYNC kernel driver is the main highlight of the SteamOS 3.7.20 beta now available for testing.Read more of this story at Slashdot.
An anonymous reader quotes a report from TorrentFreak: Italy's communications regulator AGCOM imposed a record-breaking fine of 14.2 million euros on Cloudflare after the company failed to implement the required piracy blocking measures. Cloudflare argued that filtering its global 1.1.1.1 DNS resolver would be "impossible" without hurting overall performance. AGCOM disagreed, noting that Cloudflare is not necessarily a neutral intermediary either. [...] "The measure, in addition to being one of the first financial penalties imposed in the copyright sector, is particularly significant given the role played by Cloudflare," AGCOM notes, adding that Cloudflare is linked to roughly 70% of the pirate sites targeted under its regime. In its detailed analysis, the regulator further highlighted that Cloudflare's cooperation is "essential" for the enforcement of Italian anti-piracy laws, as its services allow pirate sites to evade standard blocking measures. Cloudflare has strongly contested the accusations throughout AGCOM's proceedings and previously criticized the Piracy Shield system for lacking transparency and due process. While the company did not immediately respond to our request for comment, it will almost certainly appeal the fine. This appeal may also draw the interest of other public DNS resolvers, such as Google and OpenDNS. AGCOM, meanwhile, says that it remains fully committed to enforcing the local piracy law. The regulator notes that since the Piracy Shield started in February 2024, 65,000 domain names and 14,000 IP addresses have been blocked.Read more of this story at Slashdot.
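The blocking mechanics AGCOM demands are conceptually simple, which is why the dispute turns on scale rather than feasibility in the small. A minimal sketch of resolver-side filtering follows; the domain names and the upstream_resolve() helper are hypothetical stand-ins, and Cloudflare's objection is precisely that layering per-jurisdiction checks like this onto a single global anycast resolver is what threatens performance:

```python
# Conceptual sketch of resolver-side DNS blocking. The blocklist entries and
# upstream_resolve() are invented for illustration; real resolvers operate at
# a scale and latency budget where even cheap per-query checks are contested.

BLOCKLIST = {"pirate-site.example", "stream-site.example"}  # per-country list

def upstream_resolve(domain: str) -> str:
    """Hypothetical stand-in for a real recursive DNS lookup."""
    return "203.0.113.7"

def resolve(domain: str, client_country: str) -> str | None:
    if client_country == "IT" and domain in BLOCKLIST:
        return None  # NXDOMAIN-style refusal for blocked domains
    return upstream_resolve(domain)

print(resolve("pirate-site.example", "IT"))  # None -> blocked
print(resolve("pirate-site.example", "DE"))  # resolved normally
```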
An anonymous reader shares a report: Microsoft is celebrating the resurgence of interest in physical media in the only way it knows how... by halting the Windows Media Player metadata service. Readers of a certain vintage will remember inserting a CD into their PC and watching Windows Media Player populate with track listings and album artwork. No more. Sometime before Christmas, the metadata servers stopped working and on Windows 10 or 11, the result is the same: album not found. We tried this out at Vulture Central on some sacrificial Windows devices that had media drives and can confirm that a variety of compact discs were met with stony indifference. Some 90s cheese that was successfully ripped (for personal use, of course) decades ago? No longer recognized. A reissue of something achingly hip? Also not recognized.Read more of this story at Slashdot.
The abstract of a paper on NBER: School boards have statutory authority over most elementary and secondary education policies, but receive little attention compared to other actors in education systems. A fundamental challenge to understanding the importance of boards is the absence of data on the policy goals of board members -- i.e., their ideologies -- forcing researchers to conduct tests based on demographic and professional characteristics -- i.e., identities -- with which ideology is presumed to correlate. This paper uses new data on the viewpoints and policy actions of school board members, coupled with a regression discontinuity design that generates quasi-random variation in board composition, to establish two results. The first is that the priorities of board members have large causal effects across many domains. For example, the effect of electing an equity-focused board member on test scores for low-income students is roughly equivalent to assigning every such student a teacher who is 0.3 to 0.4 SDs higher in the distribution of teacher value-added. The second is that observing policy priorities is crucial. Identity turns out to be a poor proxy for ideology, with limited governance effects that are fully explained by differences in policy priorities. Our findings challenge the belief that school boards are unimportant, showing that who serves on the board and what they prioritize can have far-reaching consequences for students.Read more of this story at Slashdot.
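For readers unfamiliar with the method, the identification idea behind the paper can be sketched quickly: very close school board elections act like coin flips, so comparing districts where a candidate barely won with districts where one barely lost isolates the causal effect of board composition. Here is a minimal simulation of that logic, using a difference in local means as a crude stand-in for the paper's actual estimator (all data below is simulated):

```python
import numpy as np

# Toy regression discontinuity: outcomes just above vs. just below the 50%
# vote-share cutoff. The simulated "true" effect of winning a seat is 0.2,
# and the local comparison recovers it up to sampling noise.

rng = np.random.default_rng(0)
vote_share = rng.uniform(0.3, 0.7, 5000)       # running variable
wins = vote_share >= 0.5                        # treatment: candidate elected
outcome = 0.2 * wins + rng.normal(0, 1, 5000)   # simulated district outcome

h = 0.05  # bandwidth: only narrowly decided races are compared
near = np.abs(vote_share - 0.5) < h
effect = outcome[near & wins].mean() - outcome[near & ~wins].mean()
print(f"RD estimate of a board seat's effect: {effect:.2f}")  # ~0.2, plus noise
```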
Microbiology had its golden age in the late nineteenth century, when researchers identified the bacterial causes of tuberculosis, cholera, typhoid, and a dozen other diseases in rapid succession. Antibiotics had theirs in the mid-twentieth century. Both booms eventually slowed. Vaccine development, by contrast, appears to be speeding up -- and the most productive era may still lie ahead, writes Works in Progress. In the first half of the 2020s alone, researchers delivered the first effective vaccines against four different diseases: Covid-19, malaria, RSV, and chikungunya. No previous decade matched that output. The acceleration rests on infrastructure that took two centuries to assemble. Edward Jenner's 1796 smallpox vaccine was a lucky accident he didn't understand. It took another ninety years for Louis Pasteur to turn that luck into systematic methods -- attenuation and inactivation -- that could be applied to other diseases. Generations of scientists then built the supporting machinery: Petri dishes for bacterial culture, techniques to keep animal cells alive outside the body, bioreactors for industrial production, sterilization and cold-chain logistics. Those tools have now compounded. Cryo-electron microscopy reveals viral proteins atom by atom, a capability that directly enabled the RSV vaccine after earlier attempts failed. Genome sequencing costs collapsed from roughly $100 million per human genome in 2001 to under $1,000 by 2014, according to data from the National Human Genome Research Institute. The mRNA platform, refined through work by Katalin Karikó, Drew Weissman, and others, allows vaccines to be redesigned in weeks rather than years. The trajectory suggests more breakthroughs are possible. Whether they arrive depends on continued investment, however.Read more of this story at Slashdot.
The restaurant industry is trying to figure out whether America has hit peak pizza. From a report: Once the second-most common U.S. restaurant type, pizzerias are now outnumbered by coffee shops and Mexican food eateries, according to industry data. Sales growth at pizza restaurants has lagged behind the broader fast-food market for years, and the outlook ahead isn't much brighter. "Pizza is disrupted right now," Ravi Thanawala, chief financial officer and North America president at Papa John's International, said in an interview. "That's what the consumer tells us." The parent of the Pieology Pizzeria chain filed for chapter 11 bankruptcy protection in December. Others, including the parent of Anthony's Coal Fired Pizza & Wings and Bertucci's Brick Oven Pizza & Pasta, earlier filed for bankruptcy. Pizza once was a novelty outside big U.S. cities, providing room for growth for independent shops and then chains such as Pizza Hut with its red roof dine-in restaurants. Purpose-made cardboard boxes and fleets of delivery drivers helped make pizza a takeout staple for those seeking low-stress meals. Today, pizza shops are engaged in price wars with one another and other kinds of fast food. Food-delivery apps have put a wider range of cuisines and options at Americans' fingertips. And $20 a pie for a family can feel expensive compared with $5 fast-food deals, frozen pizzas or eating a home-cooked meal. [...] Pizza's dominance in American restaurant fare is declining, however. Among different cuisines, it ranked sixth in terms of U.S. sales in 2024 among restaurant chains, down from second place during the 1990s, Technomic said. The number of pizza restaurants in the U.S. hit a record high in 2019 and has declined since then, figures from the market-research firm Datassential show. Further reading, at WSJ: The Feds Need to Bail Out the Pizza Industry.Read more of this story at Slashdot.
Amazon has begun equipping managers with a dashboard that tracks not just whether corporate employees show up to the office but how long they stay once they're there, according to an internal document obtained by Business Insider. The system, which started rolling out in December, flags "Low-Time Badgers" who average less than four hours daily over an eight-week period and "Zero Badgers" who don't badge into any building during that span.Read more of this story at Slashdot.
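As described, the flagging rule reduces to a simple threshold over a badge log. A hedged sketch of that logic follows; the function and field names are invented for illustration, and nothing about Amazon's actual implementation is public beyond the thresholds reported above:

```python
from statistics import mean

# Illustrative version of the reported rule: average daily badge hours over
# an eight-week window. All names and data here are invented assumptions.

def classify(daily_hours: list[float]) -> str:
    if all(h == 0 for h in daily_hours):
        return "Zero Badger"       # never badged into any building
    if mean(daily_hours) < 4.0:
        return "Low-Time Badger"   # averages under four hours a day
    return "OK"

print(classify([3.0] * 40))   # 40 workdays of 3-hour stints -> Low-Time Badger
print(classify([0.0] * 40))   # -> Zero Badger
print(classify([8.0] * 40))   # -> OK
```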
Linus Torvalds has weighed in on an ongoing debate within the Linux kernel development community about whether documentation should explicitly address AI-generated code contributions, and his position is characteristically blunt: stop making it an issue. The Linux creator was responding to Oracle-affiliated kernel developer Lorenzo Stoakes, who had argued that treating LLMs as "just another tool" ignores the threat they pose to kernel quality. "Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this," Stoakes wrote. Torvalds disagreed sharply. "There is zero point in talking about AI slop," he wrote. "Because the AI slop people aren't going to document their patches as such." He called such discussions "pointless posturing" and said that kernel documentation is "for good actors." The exchange comes as a team led by Intel's Dave Hansen works on guidelines for tool-generated contributions. Stoakes had pushed for language letting maintainers reject suspected AI slop outright, arguing the current draft "tries very hard to say 'NOP.'" Torvalds made clear he doesn't want kernel documentation to become a political statement on AI. "I strongly want this to be that 'just a tool' statement," he wrote.Read more of this story at Slashdot.
Craigslist, the 30-year-old classifieds site that looks virtually unchanged since the dial-up era, continues to draw more than 105 million monthly users and remains enormously profitable despite never spending a cent on advertising or marketing. The site ranks as the 40th most popular website in the United States, according to Internet data company Similarweb. University of Pennsylvania associate professor Jessa Lingel called it the "ungentrified" Internet. Unlike Facebook Marketplace, Etsy, or DePop, Craigslist doesn't use algorithms to track users or predict what they want to see. There are no public profiles, no rating systems, no likes or shares. The site effectively disincentivizes the clout-chasing and virality-seeking that dominates platforms like TikTok and Instagram. Craigslist began in 1995 as an email list for a few hundred San Francisco Bay Area locals sharing events and job openings. Engineer Craig Newmark even recruited CEO Jim Buckmaster through a site ad. The two spent roughly a decade battling eBay in court after the tech giant purchased a minority stake in 2004, ultimately buying back shares and regaining full control in 2015.Read more of this story at Slashdot.
Apple's iOS 26 appears to be witnessing the slowest adoption rate in recent memory, with third-party analytics from StatCounter indicating that only 15 to 16% of active iPhones worldwide are running the operating system nearly four months after its September release. The figures stand in stark contrast to iOS 18, which had reached approximately 63% adoption by January 2025, and iOS 17, which hit 54% by January 2024. iOS 16 had surpassed 60% by January 2023. StatCounter's breakdown for January 2026 shows iOS 26.1 accounting for roughly 10.6% of devices, iOS 26.2 at about 4.6%, and the original iOS 26.0 at 1.1%. More than 60% of iPhones tracked by the analytics firm remain on iOS 18. MacRumors' own visitor data tells a similar story: 89.3% of the site's readers were on iOS 18 during the first week of January 2025, but only 25.7% are running iOS 26 during the same period this year. iOS 26 introduced Liquid Glass, a sweeping visual redesign that replaces much of the traditional opaque interface with translucent layers, blurred backgrounds, and dynamic depth effects.Read more of this story at Slashdot.
Amazon is now requiring its corporate employees to submit a list of three to five accomplishments that represent their best work as part of an overhauled performance review process, according to Business Insider, which cites internal documents. The company's internal Forte review system previously asked employees softer questions like "When you're at your best, how do you contribute?" but the new standards place greater emphasis on individual productivity and specific deliverables. Amazon's roughly 350,000 corporate employees must also outline actions they plan to take to continue growing at the company.Read more of this story at Slashdot.
Microsoft is discontinuing its Send to Kindle integration in Word, ending a feature that allowed Microsoft 365 subscribers to send documents directly to their Kindle e-readers and preserve complex formatting through fixed layouts. The company updated its documentation to announce that beginning February 9th, 2026, the Send to Kindle feature will no longer work across Web, Win32, and Mac platforms. Microsoft has not disclosed why it's killing the integration but recommends users switch to Amazon's official Send to Kindle app. The feature launched in 2023 and was particularly valued by Kindle Scribe owners who could annotate the transferred documents.Read more of this story at Slashdot.
Abstract of a paper on NBER: We construct an international panel data set comprising three distinct yet plausible measures of government indebtedness: the debt-to-GDP, the interest-to-GDP, and the debt-to-equity ratios. Our analysis reveals that these measures yield differing conclusions about recent trends in government indebtedness. While the debt-to-GDP ratio has reached historically high levels, the other two indicators show either no clear trend or a declining pattern over recent decades. We argue for the development of stronger theoretical foundations for the measures employed in the literature, suggesting that, without such grounding, assertions about debt (un)sustainability may be premature.Read more of this story at Slashdot.
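A toy calculation shows why the three measures can tell different stories: if interest rates fall and public assets grow while the debt stock rises, debt-to-GDP climbs even as interest-to-GDP and debt-to-equity flatten or decline. All numbers below are invented for illustration and are not taken from the paper:

```python
# Three indebtedness measures computed on invented data. "Equity" here is
# government assets minus debt, one plausible reading of the paper's ratio.

years = {
    #      (debt, gdp, interest_rate, assets) -- illustrative values only
    1995: (3.0, 7.0, 0.07, 4.0),
    2025: (30.0, 28.0, 0.025, 45.0),
}

for year, (debt, gdp, rate, assets) in years.items():
    equity = assets - debt
    print(f"{year}: debt/GDP={debt / gdp:.2f}, "
          f"interest/GDP={debt * rate / gdp:.3f}, "
          f"debt/equity={debt / equity:.2f}")

# Output: debt/GDP rises (0.43 -> 1.07) while interest/GDP stays roughly flat
# (0.030 -> 0.027) and debt/equity falls (3.00 -> 2.00).
```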
The world's oceans absorbed yet another record-breaking amount of heat in 2025, continuing an almost unbroken streak of annual records since the start of the millennium and fueling increasingly extreme weather events around the globe. More than 90% of the heat trapped by humanity's carbon emissions ends up in the oceans, making ocean heat content one of the clearest indicators of the climate crisis's trajectory. That's according to an analysis published in the journal Advances in Atmospheric Sciences, which drew on temperature data collected across the oceans and collated by three independent research teams. The measurements cover the top 2,000 meters of ocean depth, where most heat absorption occurs. The amount of heat absorbed is equivalent to more than 200 times the total electricity used by humans worldwide. This extra thermal energy intensifies hurricanes and typhoons, produces heavier rainfall and greater flooding, and results in longer marine heatwaves that decimate ocean life. The oceans are likely at their hottest in at least 1,000 years and heating faster than at any point in the past 2,000 years.Read more of this story at Slashdot.
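The electricity comparison is a straightforward unit conversion, and a rough back-of-the-envelope check is easy to run. Both inputs below are assumed illustrative values rather than figures from the paper (recent record years have seen upper-ocean heat uptake reported on the order of 10 to 20 zettajoules; global electricity consumption is roughly 27,000 TWh), so the exact multiple shifts with the year's measured uptake:

```python
# Back-of-the-envelope check of "heat absorbed vs. electricity used".
# Both inputs are rough assumptions for illustration, not values from the paper.

ZJ = 1e21  # joules per zettajoule

ocean_heat_gain_J = 20 * ZJ      # assumed annual ocean heat uptake
electricity_TWh = 27_000         # assumed global annual electricity use
electricity_J = electricity_TWh * 3.6e15  # 1 TWh = 3.6e15 joules

print(f"electricity use: {electricity_J / ZJ:.2f} ZJ")     # ~0.10 ZJ
print(f"ratio: {ocean_heat_gain_J / electricity_J:.0f}x")  # ~206x
```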
alternative_right shares a report from ScienceAlert: At the Experimental Advanced Superconducting Tokamak (EAST), physicists successfully exceeded what is known as the Greenwald limit, a practical density boundary beyond which plasmas tend to violently destabilize, often damaging reactor components. For a long time, the Greenwald limit was accepted as a given and incorporated into fusion reactor engineering. The new work shows that precise control over how the plasma is created and interacts with the reactor walls can push it beyond this limit into what physicists call a 'density-limit-free' regime. [...] A team led by physicists Ping Zhu of Huazhong University of Science and Technology and Ning Yan of the Chinese Academy of Sciences designed an experiment to take this theory further, based on a simple premise: that the density limit is strongly influenced by the initial plasma-wall interactions as the reactor starts up. In their experiment, the researchers wanted to see if they could deliberately steer the outcome of this interaction. They carefully controlled the pressure of the fuel gas during tokamak startup and added a burst of heating called electron cyclotron resonance heating. These changes altered how the plasma interacts with the tokamak walls through a cooler plasma boundary, which dramatically reduced the degree to which wall impurities entered the plasma. Under this regime, the researchers were able to reach densities up to about 65 percent higher than the tokamak's Greenwald limit. This doesn't mean that magnetically confined plasmas can now operate with no density limits whatsoever. However, it does show that the Greenwald limit is not a fundamental barrier and that tweaking operational processes could lead to more effective fusion reactors. The findings have been published in Science Advances.Read more of this story at Slashdot.
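For context, the Greenwald limit is an empirical scaling, n_G = I_p / (πa²), where n_G is the density limit in units of 10^20 m^-3, I_p is the plasma current in megaamperes, and a is the plasma's minor radius in meters. A quick worked example with EAST-like round numbers (the current and radius below are assumptions for illustration, not parameters from the paper):

```python
import math

# Greenwald density limit: n_G = I_p / (pi * a^2), with n_G in 1e20 m^-3,
# I_p in megaamperes, and a (the minor radius) in meters.
def greenwald_density(plasma_current_MA: float, minor_radius_m: float) -> float:
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

I_p, a = 1.0, 0.45  # assumed EAST-like plasma current and minor radius

n_G = greenwald_density(I_p, a)
print(f"Greenwald limit:  {n_G:.2f} x 1e20 m^-3")        # ~1.57
print(f"65% above limit: {1.65 * n_G:.2f} x 1e20 m^-3")  # ~2.59
```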
Researchers at Stanford University have created a programmable synthetic "skin" that can independently change color and texture, "a feat previously only available within the animal kingdom," reports the Register. From the report: The technique employs electron beams to write patterns and add optical layers that create color effects. When exposed to water, the film swells to reveal texture and colors independently, depending on which side of the material is exposed, according to a paper published in the scientific journal Nature this week. In an accompanying article, University of Stuttgart's Benjamin Renz and Na Liu said the researchers' "most striking achievement was a photonic skin in which color and texture could be independently controlled, mirroring the separate regulation... in octopuses." The research team used the polymer PEDOT:PSS, which can swell in water, as the basis for their material. Its reaction to water can be controlled by irradiating it with electrons, creating textures and patterns in the film. By adding thin layers of gold, the researchers turned surface texture into tunable optical effects. A single layer could be used to scatter light, giving the shiny metal a matte, textured appearance. To control color, a polymer film was sandwiched between two layers of gold, forming an optical cavity, which selectively reflects light.Read more of this story at Slashdot.
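The color-selection mechanism described at the end is that of a Fabry-Perot cavity: to first approximation, the resonant wavelengths satisfy mλ = 2nL for cavity thickness L, refractive index n, and integer order m, ignoring the phase shifts that real metal mirrors add. A sketch with assumed illustrative values (neither the index nor the thickness comes from the paper):

```python
# Idealized Fabry-Perot resonances: m * wavelength = 2 * n * L, neglecting
# mirror phase shifts. Index and thickness are assumed illustrative values.

def resonant_wavelengths_nm(n_index: float, thickness_nm: float, orders=(1, 2, 3)):
    return {m: 2 * n_index * thickness_nm / m for m in orders}

n_polymer = 1.5    # assumed refractive index of the polymer layer
thickness = 180.0  # assumed cavity thickness in nanometers

for m, lam in resonant_wavelengths_nm(n_polymer, thickness).items():
    print(f"order {m}: {lam:.0f} nm")  # 540 nm (green), 270 nm, 180 nm

# In such a cavity, water swelling that thickens the polymer layer red-shifts
# the first-order resonance, changing the reflected color.
```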
An anonymous reader quotes a report from NPR: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight -- as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eotvos Lorand University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities." [...] "There's only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects," she says. "It's quite hard to train this and some dogs seem to just be able to do it." [...] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy's name embedded into sentences, such as "This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail." Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy's name and were later able to pick it out of a pile, at the owner's request. To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn't give them any hints. Dror says that as she watched the dogs on camera from the other room, she was "honestly surprised" because they seemed to have so much confidence. "Sometimes they just immediately went to the new toy, knowing what they're supposed to do," she says. "Their performance was really, really high." She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren't explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy's name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. "This shows us how flexible they are able to learn," says Dror. "They can use different mechanisms and learn under different conditions."Read more of this story at Slashdot.
YouTube is updating search filters so users can explicitly choose between Shorts and long-form videos. The change also replaces view-count sorting with a new "Popularity" filter and removes underperforming options like "Sort by Rating." The Verge reports: Right now, a filter-less search shows a mix of long-form and short-form videos, which can be annoying if you just want to see videos in one format or the other. But in the new search filters, among other options, you can pick to see "Videos," which in my testing has only shown a list of long-form videos, or "Shorts," which just shows Shorts. YouTube is also removing the "Upload Date - Last Hour" and "Sort by Rating" filters because they "were not working as expected and had contributed to user complaints." The company will still offer other "Upload Date" filters, like "Today," "This Week," "This Month," and "This Year," and you can also find popular videos with the new "Popularity" filter, which is replacing the "View count" sort option. (With the new "Popularity" filter, YouTube says that "our systems assess a video's view count and other relevance signals, such as watch time, to determine its popularity for that specific query.")Read more of this story at Slashdot.
Longtime Slashdot reader schwit1 shares a report from Reuters: Billionaire entrepreneur Elon Musk persuaded a judge on Wednesday to allow a jury trial on his allegations that ChatGPT maker OpenAI violated its founding mission in its high-profile restructuring to a for-profit entity. Musk was a cofounder of OpenAI in 2015 but left in 2018 and now runs an AI company that competes with it. U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, said at a hearing that there was "plenty of evidence" suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained. The judge said there were enough disputed facts to let a jury consider the claims at a trial scheduled for March, rather than decide the issues herself. She said she would issue a written order after the hearing that addresses OpenAI's bid to throw out the case. [...] Musk contends he contributed about $38 million, roughly 60% of OpenAI's early funding, along with strategic guidance and credibility, based on assurances that the organization would remain a nonprofit dedicated to the public benefit. The lawsuit accuses OpenAI co-founders Sam Altman and Greg Brockman of plotting a for-profit switch to enrich themselves, culminating in multibillion-dollar deals with Microsoft and a recent restructuring. OpenAI, Altman and Brockman have denied the claims, and they called Musk "a frustrated commercial competitor seeking to slow down a mission-driven market leader." Microsoft is also a defendant and has urged the judge to toss Musk's lawsuit. A lawyer for Microsoft said there was no evidence that the company "aided and abetted" OpenAI. OpenAI in a statement after the hearing said: "Mr Musk's lawsuit continues to be baseless and a part of his ongoing pattern of harassment, and we look forward to demonstrating this at trial."Read more of this story at Slashdot.
The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.Read more of this story at Slashdot.
An anonymous reader quotes a report from Wired: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new "AI Inbox" tab, currently in a beta testing phase, that reads every message in a user's Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google's example of what this AI Inbox could look like in Gmail, the new tab takes context from a user's messages and suggests they reschedule their dentist appointment, reply to a request from their child's sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification. [...] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company's foundational AI models. "We didn't just bolt AI onto Gmail," says Blake Barnes, who leads the project for Google. "We built a secure privacy architecture, specifically for this moment." He emphasizes that users can turn off Gmail's new AI tools if they don't want them. At the same time it announced AI Inbox, Google made multiple Gemini features that were previously available only to paying subscribers free for all Gmail users. This includes the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google's Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox. First, an AI proofreading tool that suggests more polished grammar and sentence structures. And second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.Read more of this story at Slashdot.