Earlier this year, Mac OS and Windows NT-capable ROMs were discovered for Apple's unique AIX Network Server. Cameron Kaiser has since spent more time digging into just how capable these ROMs are, and has published another of his detailed stories about his efforts.

Well, thanks to Jeff Walther who generously built a few replica ROM SIMMs for me to test, we can now try the 2.0 Mac OS ROMs on holmstock, our hard-working Apple Network Server 700 test rig (stockholm, my original ANS 500, is still officially a production unit). And there are some interesting things to report, especially when we pit the preproduction ROMs and this set head-to-head in MacBench, and even try booting Rhapsody on it. Cameron Kaiser

A great read, as always.
With Windows being as old and long-running as it is, there's a ton of old and outdated bits and pieces lurking in every nook and cranny. I have always found these old relics fascinating, especially now that, over the past few years, Microsoft has attempted to replace some of them with modern equivalents (not always to great success, but that's another story). One part of the UI that's been virtually unchanged since the release of Windows 95 is the Run dialog, but that's about to change: Microsoft has released a completely new Run dialog to early testers.

Windows Run, also known as the Run dialog, is a surface that has been around for over 30 years. It has become a heavily relied upon tool for developers and advanced users alike. Users have decades of muscle memory where they hit Win+R, navigate through their Run history, and hit Enter to quickly access various paths and tools. We all have our favorite tool we launch there as well. For us, some of our favorites are wt (Windows Terminal), mstsc (Remote Desktop) and winword (Microsoft Word). But it's more than jUsT a TeXt BoX tHaT rUnS tHiNgS. The Run dialog can handle navigating both local and network file paths as well. And everything it does, it does fast. Win+R opens the Run dialog seemingly instantly. If we wanted to modernize the Run dialog to fit the modern Windows 11 design style, we had to make sure it did everything just as well as before. We needed to maintain the same performance while also keeping the user interface minimal, just as Windows 95 intended. Clint Rutkas at the Microsoft Dev Blogs

The new Run dialog looks like it belongs in Windows 11, which is a nice improvement, but the most important part is that they actually seem to have made it a little faster.
Sure, they may have only shaved a few milliseconds off its opening time, but considering virtually everything else they've touched in Windows over the years got considerably slower, that's a good showing for Microsoft. One new feature they've added is that typing ~\ now opens your home directory. The one casualty is the browse button, which, according to Microsoft's data, literally nobody ever used. I know it's just a small thing, and in the end not even a remotely consequential one, but with an operating system as old and storied as Windows, replacing these ancient parts that millions of people rely on every day absolutely fascinates me. There must be a considerable amount of pressure on the people developing something like this new Run dialog, especially with Windows' reputation being at one of its lowest points, so it's good to see them deliver. The new Run dialog is available today for testers, and if you're on the Windows Insider Experimental Channel, you can enable it in Settings > System > Advanced. Coincidentally, on my Windows 11 machine that I use for just one stupid video game, this Advanced page displays a loading spinner for five minutes and then just dies. Also, Notepad won't start (one time it showed this dialog), and using the terminal to launch it causes the old Win32 version of Notepad to open after five minutes of waiting, which then hangs and crashes. People pay money for this.
While I'm normally a KDE user, I do keep close tabs on various other desktop environments, and install and set them up every now and then to see how they're faring, what improvements they've made, and ultimately, whether my preference for KDE is still warranted. This usually means setting up a nice OpenBSD installation for Xfce, Fedora for GNOME, and, less often, other systems for some of the more niche desktop environments. Since GNOME 50 was just released, guess whose turn it is this time around?

Since everybody made up their mind about their preferred desktop eons ago, with upsides and downsides debated far past their expiration date, I'm not particularly interested in reviewing desktop environments or Linux distributions. However, after asking around on Fedi, it seemed there was quite a bit of interest in an article detailing how I set up GNOME: what changes I make to the defaults, which extensions I use, what tweaks I apply, and so on. Of course, everything described in this article is highly personal, and I'm not arguing that this is the optimal way to tweak GNOME, that the extensions I use are the best ones, or that any visual modifications I make are better than whatever defaults GNOME uses. No, my goal with this article is twofold: one, to highlight that GNOME is a lot more configurable, extensible, and malleable than common wisdom on the internet would have you believe. It's not KDE or one of those cobbled-together tiling Wayland desktops, but it's definitely not as rigid as you might think. And two, that GNOME is good, actually.

Tools of the trade

The first thing I do is install a few crucial tools that make it easier to modify and tweak GNOME. I really dislike lists in articles, but I will begrudgingly use one here: After installing all of these tools, the actual tweaking can commence.

Visual tweaks

I didn't use to like GNOME's Adwaita visual style, but over the years, it started growing on me to the point where I don't actively dislike it anymore.
With the arrival of libadwaita, it has also become effectively impossible to theme modern GNOME applications, so even if you do change to something else, many of your applications won't follow along. If consistency is something you care about, you'll stick to Adwaita, but that leaves one problem unresolved: applications that still use GTK3. These applications follow a much older version of Adwaita, making them stand out like eyesores among all the modern GTK4 stuff. Luckily, since GTK3 applications are still properly themeable, this is easily fixed: just install the adw-gtk3 theme, either by hand or through your distribution's repositories. To enable it, first install the user themes extension through Extension Manager, and then enable the theme in GNOME Tweaks under "Legacy Applications". Any GTK3 applications you still use will now integrate nicely with modern libadwaita applications.

The one part of GNOME I really do deeply dislike is its icon theme. I can't quite explain why I dislike this icon set so much, but it runs deep, so one of the very first things I do is replace the default GNOME icon set with my personal favourite, Qogir. This is a popular icon set, so it's usually available in your distribution's repositories, but I always install it from its GitHub page. Changing GNOME's icon set is as simple as selecting it in GNOME Tweaks. You can't get much more personal taste than an icon set, and there are dozens of amazing sets to choose from in the Linux world. Changing them out and trying new ones is stupidly easy, and it's definitely worth looking at a few that might be more pleasing to you than GNOME's (or KDE's) default.

Lastly, I open Add Water and enable the amazing GNOME theme for LibreWolf. Add Water basically makes this as easy as flipping a switch, so there's no need to copy any files into your LibreWolf profile or whatever.
The application also provides a few more small tweaks to fiddle with, like enabling standard tab widths so tabs don't grow and shrink as you close and open them, moving the bookmarks bar below the tab bar, and many more.

Extensions

Since the release of GNOME 3 in 2011, extensions have been the most capable way to modify GNOME's look, behaviour, and feature set. As far as I can tell, while the extension framework is an official part of GNOME Shell, the extensions themselves are all third-party and not part of a vanilla GNOME installation. By now, there are over 2800 listed extensions, but that number includes abandoned ones, so it's hard to determine how many are currently maintained. Whatever the actual number is, there's bound to be something in there you're going to want to use. Here are the extensions I have installed. Let's just start at the top and work our way down. I guess I'm forced to do another list. There are countless more extensions to choose from, and you're definitely going to find things you never even thought could be useful.

Miscellaneous tweaks

There are a few other things I modify. In GNOME Tweaks, I make it so that double-clicking a window's titlebar minimises it while right-clicking it lowers it: two features I picked up during my years as a BeOS user that I absolutely refuse to give up. I configure the dock from Dash to Dock so that it always remains on top and never hides itself, no matter the circumstances. In Settings, I disable virtual desktops entirely (I don't like virtual desktops), and I make sure tap-to-click is disabled (if I'm on a laptop).

GNOME is good, actually

After making all of these changes, I feel quite comfortable using GNOME, at least on my laptop. It's a nice, coherent experience, and offers what is probably the most polished graphical user interface you can find on Linux, even if it isn't the most full-featured. The third-party application ecosystem, through modern
To assess how small a macOS VM could be, I ran the same macOS 26.4.1 VM on progressively smaller CPU core and memory allocations, using my virtualiser Viable. The VM's display window was set to a standard 1600 x 1000, and I ran Safari through its paces and performed some lightweight everyday tasks, including Storage analysis in Settings. Starting with 4 virtual cores and 8 GB vRAM, where the VM ran perfectly briskly with around 5 GB of memory used, I stepped down to 3 cores and 6 GB, to discover that memory usage fell to 3.9 GB and everything worked well. With just 2 cores and 4 GB of memory, only 3.1 GB of that was used, and the VM continued to handle those lightweight tasks normally. Howard Oakley

This is good news for people interested in the MacBook Neo who may also want to run a macOS virtual machine on it.
Email is like those creaking old Terminators from the '70s which continue to function without complaining. Designed for a world that doesn't exist anymore, it has optional encryption, no built-in auth, three retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day. Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension. Saurabh "Sam" Khawase

The fact that email is as complicated as it is would be bad enough, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn't helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that's it. Running your own mail server isn't only a complex endeavour, it's also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don't end up on some shitlist and your emails stop arriving. I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it's such a daunting and unpleasant effort that few people seem to have the stomach and perseverance for it.
What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by "AI" scrapers?

I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed. I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that wants to suck all the content out of my VPS ONCE MORE until it's dry. lux at VulpineCitrus

So how much traffic did the author of this piece, lux, get from "AI" scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that roughly 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane. If, at this point in time, with everything that we know about just how deeply unethical every single aspect of "AI" is, you're still using and promoting it, what is wrong with you? If you're so addicted to your "AI" girlfriend's unending stream of useless, forgettable, sycophantic slop, despite being aware of the damage you're doing to those around you, there's something seriously wrong with you, and you desperately need professional help. You don't need any of this. The world doesn't need any of this. Nobody likes the slop "AI" regurgitates, and nobody likes you for enabling it. Get help.
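The scale of that number is easier to appreciate with a quick back-of-the-envelope check. A sketch: the ~3.7 billion figure for publicly routable IPv4 addresses is my own rough assumption, since the exact count depends on how you treat reserved and private ranges.

```python
# Sanity-checking the article's numbers: ~2 million unique IPv4 scraper
# addresses out of roughly 3.7 billion publicly routable IPv4 addresses
# is indeed on the order of 1 in 2000.
unique_addresses = 2_040_670
ipv4_fraction = 0.98                               # share of scrapers on IPv4
ipv4_scrapers = unique_addresses * ipv4_fraction   # ~2.0 million

total_ipv4 = 2 ** 32                   # 4,294,967,296 addresses exist in total
routable_ipv4 = 3_700_000_000          # rough count after removing reserved/private space

one_in_n = routable_ipv4 / ipv4_scrapers
print(f"about 1 in {round(one_in_n)} public IPv4 addresses")
```

Whichever rough total you pick, the result lands in the 1-in-1800-to-2000 range, so the article's "1 out of every 2000" claim checks out.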
Microsoft is continuing its efforts to release early versions of DOS as open source, and today we've got a special one.

We're stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS. The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed. Stacey Haffner and Scott Hanselman

It's wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao's website and Cini's website.
When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded:

If there's one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it's Apple, but I still have a hard time believing this is what people want. Thom Holwerda at OSNews (quoting myself is weird)

MacRumors' Juli Clover, today:

Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren't interested. Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025. Juli Clover at MacRumors

VR - what the Vision Pro is, whether Apple's marketing likes to say it or not - has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse. I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB as its default network file-sharing technology. This change shouldn't impact most people, as it's highly unlikely you're using AFP for anything in 2026. Still, there is one small group of people on whom this change has an actual impact: owners of Apple's Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 having been removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable. It's important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line's availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move, though, as those fifth-generation models came with up to 3 TB of storage, which can still serve as a solid NAS solution. Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it's trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.

If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (showing up automatically in the "Network" folder on macOS), and accept authenticated SMB3 connections from macOS.
You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple's legacy stack. You should also be able to use the disk for Time Machine backups. TimeCapsuleSMB

It's compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you'll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot, since those are exactly the technologies that don't and won't work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4. This whole saga is such an excellent example of why open source software protects users' rights, by design.
Dillo is an amazing web browser for those of us who want their web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that.

A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable, or for a unique Dillo process if it's not set. Dillo 3.3.0 release notes

You can use this program to control your Dillo instance with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page's contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I'm sure some of you who live and die in the terminal are already thinking of all the possibilities here. You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implements a fix specifically to make OAuth work properly.
Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the "AI" bandwagon, and Jon Seager, Canonical's VP Engineering, published a blog post with more details.

Throughout 2026 we'll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it. Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration. Jon Seager at Ubuntu Discourse

The problem with this entire post is that, much like all other corporate communications about "AI", it's all deceptively vague, open-ended, and weaselly. Adjectives like "focused", "principled", "thoughtful", and "tasteful" don't really mean anything, and leave everything open for basically every type of slop "AI" feature under the sun. Their claims about open weights and open source models are also weakened by words like "favour" and "where possible", again leaving the door wide open for basically any shady "AI" company's models and features to find their way into your default Ubuntu installation. There's also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that's about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical.
I don't really feel like I know a lot more about Canonical's "AI" intentions for Ubuntu after reading this post than I did before, other than that Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?
Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (Windows' graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32-bit version of Explorer on 64-bit systems, and - hold on a minute. The how many bits on the what now?

The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it's because some other program is using it to do some dirty work. Raymond Chen at The Old New Thing

So I had no idea that 64-bit Windows included a copy of the 32-bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though, if you could go nuts and do something really dumb: could you somehow trick 64-bit Windows into running this 32-bit copy of Explorer as its shell? You'd be running the 32-bit Explorer on 64-bit Windows using the 32-bit WoW64 binaries you just pulled the 32-bit Explorer binary from, which seems like a really nonsensical thing to do. Since there are no longer any 32-bit builds of Windows 11, you also can't just copy over the 32-bit Explorer from a 32-bit Windows 11 build and achieve the same goal that way, so you'd really have to go digging around in WoW64 to get 32-bit versions. I guess the answer to this question depends on just how complete this copy of the 32-bit Explorer really is, and on whether Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there's no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project.
Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I'm on, but I was always told that sometimes, the dumbest questions can lead to the most interesting answers, so here we are.
Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines. Michal Necasek

Look, when a Michal Necasek article starts out like this, you know you're in for a good ol' learnin' time. The 8087 was a floating-point coprocessor for the 8086 and 8088 processors, since back in those early days, processors did not include an integrated floating-point unit. It wouldn't be until the release of the 486DX, in 1989, that Intel integrated an FPU into the processor itself, negating the need for a separate chip and socket. Interestingly enough, Intel also released a cut-down version of the 486 with the FPU removed, the 486SX, for which an optional external FPU did exist.
Sebastian Wick has a great explanation of why opening files - programmatically - is a lot more complex and fraught with danger than you might think. The issue is relevant for Wick as one of the lead developers of Flatpak, for which a number of security issues have recently been discovered, many of which dealt with this very topic. The biggest security issue found was a complete sandbox escape, originating from the fact that flatpak run, the command-line tool to start a Flatpak application, accepted path strings, since flatpak run is assumed to be run by a trusted user. The problem lay in a D-Bus service that sandboxed applications could use to create subsandboxes, and this service was built around, you guessed it, flatpak run. The issues in question, including the complete sandbox escape, have been addressed and fixed, but they highlight exactly the dangers that can come from opening files. This subsandboxing approach in Flatpak is built on assumptions from fifteen years ago, and times have changed since then. If you're a programmer who deals with opening files, you might want to take a look at your own code to see if similar issues exist.
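To make the danger concrete: the classic trap when opening a user-supplied path relative to some base directory is that any component of that path may be a symlink pointing outside it. A minimal defensive sketch in Python follows; `open_beneath` is a hypothetical helper of my own, not Flatpak's actual code, and on modern Linux the `openat2()` syscall with `RESOLVE_BENEATH` does this check in the kernel instead.

```python
import os

def open_beneath(base, relpath):
    """Open relpath strictly beneath base, refusing absolute paths,
    '..' components, and symlinks at every step of the walk."""
    parts = [p for p in relpath.split("/") if p]
    if relpath.startswith("/") or ".." in parts or not parts:
        raise ValueError("path escapes the base directory")
    fd = os.open(base, os.O_RDONLY | os.O_DIRECTORY)
    try:
        # Walk one component at a time; O_NOFOLLOW makes the kernel
        # fail with ELOOP if the component turns out to be a symlink.
        for part in parts[:-1]:
            nfd = os.open(part, os.O_RDONLY | os.O_DIRECTORY | os.O_NOFOLLOW,
                          dir_fd=fd)
            os.close(fd)
            fd = nfd
        return os.open(parts[-1], os.O_RDONLY | os.O_NOFOLLOW, dir_fd=fd)
    finally:
        os.close(fd)
```

The naive alternative, `open(os.path.join(base, relpath))`, resolves the whole path in one go and will happily follow a symlink planted anywhere along the way, which is exactly the class of bug the Flatpak fixes address.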
In that reading, AI is a machine for the creation of epistemic injustice and the replacement of truth with what a tech elite wants it to be in order to control the population. This is a Fascist project that not so subtly aligns with Fascism's totalitarian will to power and control, as well as its reliance on replacing reasoning and debate with belief in power and the leader. Jürgen Geuter

The purpose of a system is what it does, and what "AI" does is stunt users' own abilities and development and concentrate power and wealth even further in the hands of a very small privileged few - a privileged few who consistently espouse fascist ideology and promote and implement fascist ideas. Jürgen Geuter lays it all out in much more detail, backed by solid references and concrete examples, but the conclusion is clear. And uncomfortable to many, as such conclusions always are.
I'm not sure many OSNews readers still use Ubuntu as their operating system of choice, and from the release announcement of today's Ubuntu 26.04, it's clear why that's the case.

Resolute Raccoon builds on the resilience-focused improvements introduced in interim releases, with TPM-backed full-disk encryption, improved support for application permission prompting, Livepatch updates for Arm-based servers, and Rust-based utilities for enhanced memory safety. This release brings native support for industry-leading AI/ML toolkits like NVIDIA CUDA and AMD ROCm, making Ubuntu 26.04 LTS the ideal platform for AI development and production workloads. Canonical press release

It's obvious where Canonical's focus lies with Ubuntu, and we desktop people who don't like "AI" aren't it. On top of all the "AI" nonsense, this new version comes with the latest versions of the various open source components that make up a Linux distribution, as well as a slew of Rust-based replacements for core CLI tools, like sudo-rs, uutils coreutils, and more. All the derivative releases of Ubuntu, like Kubuntu, Xubuntu, and others, will also be updated over the coming days. If you're already running any of these, the update won't hold many surprises for you.
You can find beauty in the oddest of places.

WSL9x runs a modern Linux kernel (6.19 at the time of writing) cooperatively inside the Windows 9x kernel, enabling users to take advantage of the full suite of capabilities of both operating systems at the same time, including paging, memory protection, and pre-emptive scheduling. Run all your favourite applications side by side - no rebooting required! Hailey Somerville

Yes, this is exactly what it sounds like. Hailey Somerville basically recreated the first version of WSL - or coLinux, for the old people among us - but instead of running on Windows NT, it runs on Windows 9x. A VxD driver loads a patched Linux kernel using DOS interrupts, and this Linux kernel calls Windows 9x kernel APIs instead of POSIX APIs. A small DOS client application then allows the Linux kernel to use MS-DOS prompts as TTYs. This is a gross oversimplification, but it does get the general gist across. Anyway, the end result is that you can use a modern Linux kernel and Windows 9x at the same time, without virtualising or dual-booting. This might be one of the greatest hacks of recent times, and I find it oddly beautiful in its user-facing simplicity.
Despite years of apparent stagnation and reported mass layoffs, it seems the Solaris team at Oracle has found somewhat of a renewed stride recently. Both branches of Solaris - the one for paying customers (SRU) and the free one for enthusiasts (CBE) - are receiving regular updates again, and there seems to be a more concerted effort to let the outside world know, too. We've got another update to the SRU branch this week, which brings updates to a few important open source packages, like Django, Firefox, Thunderbird, Golang, and others, to address security issues. In addition, this update marks a change in the release cadence for the commercial branch of Solaris. From here on out, there will be two "Critical Patch Updates" per quarter to address security issues, followed by a Support Repository Update containing new features and larger changes.
I need to post about this because if I don't, people will get mad.

Cook will continue on as Apple CEO through the summer, with Ternus set to join Apple's Board of Directors and take over as CEO on September 1, 2026. Cook is going to transition to chairman of the board at Apple, and he will assist with certain aspects of the company, including "engaging with policymakers around the world". Juli Clover at MacRumors

This concludes OSNews' coverage of Keeping Up With the Yacht Class, but rest assured, every other tech site will be milking this for weeks to come. You will still be worrying about how to pay for your next tank of gas.
Have you ever tried clicking the back button in your browser, only to realise the website you're on somehow doesn't allow that? Out of all the millions of annoyances on the web, Google has decided to finally address this one: it's going to punish the search rankings of websites that use this back button hijacking.

Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026. Google Search Central

It's always uncomfortable when Google unilaterally takes actions like these, since Google's interests rarely align with our own as users. This is one such rare case, though, and I can't wait to see this insipid practice relegated to the dustbin of history.
LXQt, the desktop environment which is effectively to KDE what Xfce is to GNOME, has released version 2.4.0. Quite a few changes in this release are further refinements and fixes related to LXQt's adoption of Wayland, but there are also a ton of small fixes, improvements, and small new features that have nothing to do with Wayland at all. There are also a few layout cleanups to make some dialogs and panels look a bit tidier and nicer. Note that LXQt supports both X11 and Wayland equally, and the choice of which to use is up to you. If you're using LXQt, you've already seen a few of these changes in point releases of its components, so not everything listed in the release notes might be news to you.
The title of my article on age verification in Linux and other operating systems had a "for now" added for a reason, and here we are, with two members of the US Congress introducing a bill to add age verification to operating systems. The text of the proposed bill was only published today, and it's incredibly vague and wishy-washy, without any clear definitions and with a ton of open-ended questions. Still, if passed, the bill would require actual age verification, instead of the mere voluntary age reporting that current state-level bills cover. It also seems to eschew the concept of age brackets, giving application developers access to specific ages of users instead. It's a vague mess of a bill that no sane person would ever want passed, but alas, sanity is a rare commodity these days, especially in US Congress. It's introduced by Democrat Josh Gottheimer and Republican Elise M. Stefanik, so it has that bipartisan sheen to it, which could increase its odds of going anywhere. At the same time, though, US Congress is about as useful as a box of matches during a house fire, so for all we know, this will end up going nowhere as its members focus on doing absolutely nothing to rein in the flock of coked-up headless chickens passing for an executive branch over there. If something like this gets passed, every US-based operating system - which includes most open source operating systems and Linux distributions - will probably fall in line when faced with massive fines and legal pressure. This isn't going to be pretty.
Tribblix, the Illumos distribution focused on giving you a classic UNIX-style experience, doesn't only support x86. It also has a branch for SPARC, which tends to run behind its x86 counterpart a little bit and has a few other limitations related to the fact SPARC is effectively no longer being developed. The Tribblix SPARC branch has been updated, and now roughly matches the latest x86 release from a few weeks ago. The graphical libraries libtiff and OpenEXR have been updated, retaining the old shared library versions for now. OpenSSL is now from the 3.5 series with the 3.0 API by default. Bind is now from the 9.20 series. OpenSSH is now 10.2, and you may get a Post-Quantum Cryptography warning if connecting to older SSH servers. 'zap install' now installs dependencies by default. 'zap create-user' will now restrict new home directories to mode 0700 by default; use the -M flag to choose different permissions. Support for UFS quotas has been removed. Tribblix release notes There's no new ISO yet, so to get to this new m34 release for SPARC you're going to have to install from an older ISO and update from there.
Another Haiku monthly activity report, but this time around, there's actually a big-ticket item. Haiku has been in a pretty solid and stable state for a while now, so the activity reports have been dominated by fairly small, obscure changes, but during March a major milestone was reached for the ARM64 port. smrobtzz contributed the bulk of the work, including fixes for building on macOS on ARM64, drivers for the Apple S5L UART, fixes to the kernel base address, clearing the frame pointer before entering the kernel, mapping physical memory correctly, the basics for userland, and more. SED4906 contributed some fixes to the bootloader page mapping, and runtime_loader's page-size checks. Combined, these changes allow the ARM64 port to get to the desktop in QEMU. There's a forum thread, complete with screenshots, for anyone interested in following along. waddlesplash While it's only in QEMU, this is still a major achievement and paves the way for more people to work on the ARM64 port, possibly improving its health. There are tons of smaller changes and fixes all over the place, too, as usual, and the team mentions beta 6 still isn't quite ready yet. Don't let that stop you from just downloading the latest nightly, though - Haiku is mature enough to use it.
The editor in chief of this blog was born in 2004. She uses the 1997 window manager, Enlightenment E16, daily. In this article, I describe the process of fixing a show-stopping, rare bug that dates back to 2006 in the codebase. Surprisingly, the issue has roots in a faulty implementation of Newton's algorithm. Kamila Szewczyk I'm not going to pretend to understand any of this, but I know you people do. Enjoy.
Modern laptops promise a kind of magic. Shut the lid or press the sleep button, toss it in a backpack, and hours, days, or weeks later, it should wake up as if nothing happened with little to no battery drain. This sounds like a fairly trivial operation - y'know, you're literally just asking for the computer to do nothing - but in that quiet moment when the fans whir down, the screen turns dark, and your reflection stares back at you, your computer and all its little components are actually hard at work doing their bedtime routine. Aymeric Wibo at the FreeBSD Foundation A look at how suspend and resume works in practice, from the perspective of FreeBSD. Considering FreeBSD's laptop focus in recent times, not an unimportant subject.
A few weeks ago, Microsoft made some concrete promises about fixing and improving Windows, and among them was removing useless "AI" integrations. Applications like Notepad, Snipping Tool, and others would see their "AI" features removed. Well, it turns out Microsoft employs a very fringe definition of the concept. Microsoft seems to have stripped away mentions of the "Copilot" brand in the Windows Insider version of the Notepad app. The Copilot button in the toolbar is gone, and instead, you'll find a writing icon which will present you with AI-powered writing assistance, such as rewrite, summarize, tone modification, format configuration, and more. Additionally, "AI features" in Notepad settings has been renamed to "Advanced features" and it allows users to toggle off AI capabilities within the app. Usama Jawad at Neowin If the recent changes to Notepad are any indication, it seems Microsoft is, actually, not at all going to reduce "unnecessary Copilot entry points", as they worded it, but is merely going to rename these features so they aren't so ostentatiously present. At least, that seems to be the plan for Notepad, and we'll have to see if they have the same plans for the other applications. I mean, they have to push "AI" or look like fools. I just don't understand how a company like Microsoft can be so utterly terrible at communication. While I personally would want all "AI" features yeeted straight from Windows, I'm sure a ton of people are just fine with the features being less in-your-face and stuffed inside a normal menu alongside all the other normal features. They could've just been honest about their intentions, and it would've been so much better. Like virtually every other technology company, Microsoft just seems incapable of not lying.
Ever heard of a condition called bixonimania? Did you search the internet or ask your "AI" girlfriend about some symptoms you were experiencing, and this was its answer? Well... The condition doesn't appear in the standard medical literature - because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunstrom, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunstrom carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says. Chris Stokel-Walker at Nature And "AI" ate it up like quality chocolate. It started appearing in the answers from all the popular "AI" tools within weeks, and later even started showing up as references in published literature, indicating that scientists copy/paste references without actually reading them. This is clearly a deeply concerning experiment, and highlights there may be many, many more nonsensical, fake studies being picked up by "AI" tools. Of course, I hear you say, it's not like propagating fake or terrible studies is the sole domain of "AI", as there are countless cases of this happening among actual real researchers and scientists, too. The issue, though, is that the fake studies concerning "bixonimania" were intentionally made to be as silly and obviously ridiculous as possible. It references Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake references instantly recognisable as such by real humans. In fact, the studies even specifically mention that "this entire paper is made up" and that "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group". 
It would take any human only a few seconds after opening one of these papers to realise they're entirely fake - yet, the world's most advanced "AI" tools gobbled them up and spit them back out as pure fact within mere weeks of their publication. This shouldn't come as a surprise. After all, "AI" tools have no understanding, no intelligence, no context, and they can't actually make sense of anything. They are glorified pachinko machines with the output - the ball - tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. "AI" output understands the world about as much as the pachinko ball does, and as such, can't pick up on even the most obvious of cues that something is a fake or a forgery. It won't be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have "AI" generate intentional misinformation which will then be spread and pushed by even more "AI"? Remember, it took one malicious asshole just one long since retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything "AI" says as gospel.
Version 7.0 of the Linux kernel has been released, marking the arbitrary end of the 6.x series. Significant changes in this release include the removal of the "experimental" status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details. corbet at LWN.net You can compile the kernel yourself, or just wait until it hits your distribution's repositories.
It shouldn't be a surprise that companies - and for our field, technology companies specifically - working with the defense industry tend to raise eyebrows. With things like the genocide in Gaza, the threats of genocide and war crimes against Iran, the mass murder in Lebanon, it's no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash. With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled "Compress the kill cycle with Red Hat Device Edge", the 2024 white paper details how Red Hat's products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places. It's got some disturbingly euphemistic content. The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems. Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle. Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality. The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities. If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters. 
Red Hat white paper titled "Compress the kill cycle with Red Hat Device Edge" I don't think there's anything inherently wrong with working together with your nation's military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies' products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion). There's always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. It clearly seems someone at Red Hat feels the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it's easy to see why. Of course, the internet never forgets, and I certainly don't intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it's an open source company doing the profiting. It feels like Red Hat is trying to have its cake and eat it too, by, as an IBM subsidiary, trying to both profit from the vast sums of money sloshing around in the US military industrial complex as well as maintain its image as a scrappy open source business success story shitting bunnies and rainbows. It's been a long time since Red Hat felt like a genuine part of the open source community. Most of us - both outside and inside of Red Hat, I'm sure - have been well aware for a long time now that those days are well behind us, and I guess Red Hat doesn't like seeing its kill cycle this compressed.
If you want to run FreeBSD on a laptop, you're often yanked back to the Linux world of 20 years ago, with many components and parts not working and other issues such as sleep and wake problems. FreeBSD has been hard at work improving the experience of using FreeBSD on laptops, and now this has resulted in a list of laptops which work effortlessly with the venerable operating system. There are only about ten laptops on the list so far, but they do span a range of affordability and age, with some of them surely being quite decent bargains on eBay or whatever other used stuff marketplace you use. If you want to use FreeBSD on a laptop, but don't want to face any surprises or do any difficult setup, get one of the laptops on this list - a list which will surely expand over time.
It may sound unbelievable to some, but not everyone has a datacenter beast with 128GB of VRAM shoved in their desktop PCs. Around the world people tell the tale of a particularly fierce group of Linux gamers: Those who dare attempt to play games with only 8 gigabytes of VRAM, or even less. Truly, it takes exceedingly strong resilience and determination to face the stutters and slowdowns bound to occur when the system starts running low on free VRAM. Carnage erupts inside the kernel driver as every application fights for as much GPU memory as it can hold on to. Any game caught up in this battle for resources will surely not leave unscathed. That is, until now. Because I fixed it. Natalie Vock The solution is to use cgroups to control the kernel's memory eviction policies, so that applications that should get priority when it comes to VRAM allocation - like games - don't get their memory evicted from VRAM to system RAM. Basically, evict everything else from VRAM before touching the protected application. This way, something like a game will have much more consistent access to more VRAM, thereby reducing needless memory evictions that harm performance. It's a clever solution that makes use of a ton of existing Linux tools, meaning it's also much easier to upstream, implement, and support. Excellent work.
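The mechanism is easy to picture in miniature. Below is a purely illustrative Python sketch of the idea - the cgroup path and the `dmem.min` file name are my assumptions, modeled on how cgroup controllers generally expose minimum-guarantee knobs, and not necessarily the actual interface from this patch set (the real write is left commented out):

```python
from pathlib import Path

# Hypothetical sketch: give a game's cgroup a guaranteed VRAM floor so the
# kernel evicts everyone else's buffers before touching the game's memory.
# The "dmem.min" file name and cgroup path are illustrative assumptions.
def protect_vram(cgroup: Path, min_bytes: int) -> str:
    target = cgroup / "dmem.min"
    setting = f"{target}: {min_bytes}"
    # On a real system with such a controller, you would write the value:
    # target.write_text(str(min_bytes))
    return setting

# Reserve 4 GiB of VRAM for whatever runs in the hypothetical "game" cgroup.
print(protect_vram(Path("/sys/fs/cgroup/game"), 4 * 1024**3))
```

The appeal of doing this through cgroups is exactly what the post describes: the policy lives in existing, well-understood kernel plumbing rather than in a new bespoke mechanism.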
You might have seen this, one of the strangest and most primitive experiences in macOS, where you're asked to press keys next to left Shift and right Shift, whatever they might be. Perhaps I can explain. Marcin Wichary It seems pretty obvious to me that's what it was for, but I guess many normal, regular people have never seen anything but one particular keyboard configuration (ANSI for Americans, ISO for some Europeans, etc.). Perhaps they don't realise that not only are there ANSI keyboards with other layouts, but also entirely different keyboard configurations (mainly ISO and JIS). Interestingly, my home country of The Netherlands uses a US English layout on an ANSI configuration, but of course, it's the US International variant, either with deadkeys or using AltGr for the various accented/special characters we use. In my current country of residence, Sweden, they use this utterly wild and incomprehensible ISO layout where Shift unlocks characters on the bottom of keys, while AltGr unlocks characters at the top, the exact opposite of literally every other keyboard I've ever used (US Int'l, classic Dutch (no longer used), German, French, etc.). It's utterly bizarre, but entirely normal to my Swedish wife. We cannot use each other's keyboards.
This post aims to be a high level introduction to using USB for people who may not have worked with Hardware too much yet and just want to use the technology. There are amazing resources out there such as USB in a NutShell that go into a lot of detail about how USB precisely works (check them out if you want more information), they are however not really approachable for somebody who has never worked with USB before and doesn't have a certain background in Hardware. You don't need to be an Embedded Systems Engineer to use USB the same way you don't need to be a Network Specialist to use Sockets and the Internet. Nik "WerWolv" A bit of a generic title, but the article details how to write a USB driver.
The months keep coming, and thus, the monthly progress reports keep coming, too, for Redox, the new general purpose operating system written in Rust. This past month, there's been considerable graphics improvements, better deadlock detection in the kernel, improved Unicode support thanks to switching over to an ncurses library variant with Unicode support, and much more. Alongside these, you'll find the usual long list of kernel, driver, and relibc changes, bugfixes, and improvements. This month also covered three topics we've already discussed individually: Redox's new "no-AI" code policy, capability-based security in Redox, and the brand-new CPU scheduler.
Since its launch in 2007, the Wii has seen several operating systems ported to it: Linux, NetBSD, and most recently, Windows NT. Today, Mac OS X joins that list. In this post, I'll share how I ported the first version of Mac OS X, 10.0 Cheetah, to the Nintendo Wii. If you're not an operating systems expert or low-level engineer, you're in good company; this project was all about learning and navigating countless "unknown unknowns". Join me as we explore the Wii's hardware, bootloader development, kernel patching, and writing drivers - and give the PowerPC versions of Mac OS X a new life on the Nintendo Wii. Bryan Keller And all of this, because someone on Reddit said it couldn't be done. It won't surprise you to learn that the work required was extensive, from writing a custom bootloader to digging through the XNU source code, applying binary patches to the kernel during the boot process, building a device tree, writing the necessary drivers, and so much more. Even just setting up a development environment was a pretty serious undertaking. Especially writing the drivers posed an interesting and unique challenge, as the Wii doesn't use PCI to connect and expose its hardware components. Instead, components are connected to a dedicated SoC with its own ARM processor that talks to the main Wii PowerPC processor, exposing hardware that way. This meant that Keller had to write a driver for this chip first, before moving on to the device drivers for devices connected to this ARM SoC - graphics drivers, input drivers, and so on. After a ton more work and overcoming several complex roadblocks, we now have Mac OS X 10.0 Cheetah on the Nintendo Wii. Amazing.
From 2024, but still accurate and interesting: Plan 9 is unique in this sense that everything the system needs is covered by the base install. This includes the compilers, graphical environment, window manager, text editors, ssh client, torrent client, web server, and the list goes on. Nearly everything a user can do with the system is available right from the get go. moody This is definitely something that sets Plan 9 apart from everything else, but as moody - 9front developer - notes, this also has a downside in that development isn't as fast, and Plan 9 variants of tools lack features upstream has had for a long time. He further adds that he thinks this is why Plan 9 has remained mostly a hobbyist curiosity, but I'm not entirely sure that's the main reason. The cold and harsh truth is that Plan 9 is really weird, and while that weirdness is a huge part of its appeal and I hope it never loses it, it also means learning Plan 9 is really hard. I firmly believe Plan 9 has the potential to attract more users, but to get there, it's going to need an onboarding process that's more approachable than reading 9front's frequently questioned answers, excellent though they are. After installing 9front and loading it up for the first time, you basically hit a brick wall that's going to be rough to climb. It would be amazing if 9front could somehow add some climbing tools for first-time users, without actually giving up on its uniqueness. Sometimes, Plan 9 feels more like an experimental art project instead of the capable operating system that it is, and I feel like that chases people away. Which is a real shame.
Anos is a modern, opinionated, non-POSIX operating system (just a hobby, won't be big and professional like GNU-Linux) for x86_64 PCs and RISC-V machines. Anos currently comprises the STAGE3 microkernel, SYSTEM user-mode supervisor, and a base set of servers implementing the base of the operating system. There is a (WIP) toolchain for Anos based on Binutils, GCC (16-experimental) and Newlib (with a custom libgloss). Anos GitHub page It's written in C, runs on both x86-64 and RISC-V, and can run on real hardware too (but this hasn't been tested on RISC-V just yet). For the x86 side of things, it's strictly 64 bit, and requires a Haswell (4th Gen) chip or higher.
This year sees 35 years since 2.11BSD was announced on March 14, 1991 - itself a slightly late celebration of 20 years of the PDP-11 - and January 2026 brought what looks to be the venerable 16-bit OS's biggest ever patch! Much of the 1.3 MB size is due to Anders Magnusson, well-known for his work on NetBSD and the Portable C Compiler. Since 2.11BSD's stdio was not ANSI compliant, he's ported from 4.4BSD. BigSneakyDuck at Reddit There's an incredible amount of work in here on this old variant of BSD, including fixes for old bugs and tons of other changes. This, the 499th patch for 2.11BSD, is so big, in fact, that vi on 2.11BSD can't handle the size of the files, so you're going to need to cut them up with sed, for which instructions are included. It's quite unique to see such a big update on the 35th anniversary of an operating system.
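The patch's included instructions use sed for the actual splitting; purely as an illustration of the idea - cutting a file into fixed-size line chunks so a small editor can cope - here is the same operation sketched in Python (the real patch ships its own sed commands, so treat this as a generic sketch, not the documented procedure):

```python
def split_into_chunks(text: str, lines_per_chunk: int) -> list[str]:
    # Keep line endings so the chunks concatenate back into the original file.
    lines = text.splitlines(keepends=True)
    return ["".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

# A stand-in for a patch file too large for a 16-bit vi to open.
big_patch = "".join(f"line {i}\n" for i in range(100))
chunks = split_into_chunks(big_patch, 30)
print(len(chunks))  # four chunks: three of 30 lines, one of 10
```

Joining the chunks back together reproduces the original byte-for-byte, which is the property you care about when re-assembling an edited patch.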
Anyone remember the KDE 4.0 themes Oxygen and Air? Well, several KDE developers have been working tirelessly to bring them back, which means they're patching them up, fixing bugs, and generally making these classic themes work well in the current releases of KDE Plasma 6. The last post regarding work on fixing Oxygen was a month and a half ago. With all that's happened in between, it feels like so much more time has actually passed. With this post, I'd like to do a sort of mid-term update summing up all of the improvements done so far. These improvements are not just my work, but also, as you'll see, the work of the lead Oxygen designer Nuno Pinheiro, of several seasoned KDE developers, and of new contributors to Oxygen as well. Filip Fila The effort to bring these themes back goes well beyond just making them nominally work; the developers and designers are also making sure the themes work properly with all the new features that have come to KDE since the 4.x and 5.x days, like adaptive and floating panels, various forms of blur, and a ton more - which includes making sure the themes are fully compatible with Wayland, which introduced a slew of new visual glitches and issues to these old themes in recent years. They are also working on improving, updating, and expanding the Oxygen icon set, which should surely bring back a ton of memories. This work involves not just designing new icons for applications and other things that didn't exist back when Oxygen was current, but also fixing old icons that look blurry on modern setups, addressing cases where monochrome and colourful icons mismatch, and so on. They're clearly taking this very seriously. It seems to be an organic effort more and more people got involved with as time passed, and they're aiming to have these themes ready for Plasma 6.7, to be released in June of this year. You can already try the current versions today, but they do require the absolute latest version of KDE Plasma to work properly. 
More improvements are planned for the coming weeks. This whole thing brings a massive smile to my face, and is such a perfect illustration of why I love the KDE project and its approach and spirit. At this point in time, I personally can't imagine using any other desktop environment.
This is a great post, but obviously it hasn't convinced me: The folks waving their arms and yelling about recent models' capabilities have a point: the thing works. This project finished in three weeks. Compare that to Ringspace, a similarly-sized project that took me about six months of nights and early mornings to complete, while not doing my day job or being Dad to an amazing, but demanding toddler. I simply could not have built this project as well or as quickly without help. And as other developers have noted, this is the help that's showing up. I'm not entirely onboard with Mike Masnick's optimistic view of this technology's democratizing power. I don't think it's as easy to separate the tech from its provenance or corporate control. But CertGen, my certificate application, exists now. It didn't and couldn't without the help of a tool like Claude Code. Open source in particular needs to reckon with this, because the current situation of demanding developers starve and bleed themselves dry without support isn't tenable. We need to grapple with this. I'm not yet sure how it all breaks down, and anyone who says they do is lying, foolish, or fanatical. Michael Taggart If you disregard that "AI" models are trained on stolen data, that such data was prepared by exploited workers, that "AI" data centres have a hugely negative impact on the environment, that "AI" data centres are distorting the entire computing market, that "AI" models feed the endless firehose of intentional misinformation, that they are wreaking havoc in education, that they increase your reliance on American big tech companies, that you pay "AI" companies for taking your work, that "AI" models are a vital component in the technofascist wet dreams of their creators, that they are the cornerstone of politicians' dream of ending anonymity, and that they contribute to racist and abusive policing, then yes, sometimes, they produce code that works and isn't total horseshit. 
It's a deeply depressing reversed "what have the Romans ever done for us?" that makes me sad, more than anything. I've seen so many otherwise smart, caring, and genuine people just shove all of these massive downsides aside for the mere novelty, the peer pressure, the occasional sense that their "lines of code" metric is going up. It's the digital equivalent of rolling coal.
If you're using Windows or macOS and have Adobe Creative Cloud installed, you may want to take a peek at your hosts file. It turns out Adobe adds a bunch of entries into the hosts file, for a very stupid reason. They're using this to detect if you have Creative Cloud already installed when you visit their website. When you visit https://www.adobe.com/home, they load this image using JavaScript: https://detect-ccd.creativecloud.adobe.com/cc.png If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed, otherwise the load fails, which they detect. They used to just hit http://localhost:<various ports>/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead. thenickdude at Reddit At what point does a commercial software suite become malware?
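If you'd rather not eyeball the file by hand, a few lines of Python can flag such entries. The hostname comes straight from the post above; the sample hosts content and its comment are made up for illustration:

```python
def find_adobe_entries(hosts_text: str) -> list[str]:
    # A hosts line is "<ip> <name> [<name>...]"; '#' starts a comment.
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        ip, *names = line.split()
        if any("adobe" in name.lower() for name in names):
            hits.append(line)
    return hits

# Illustrative sample; only the detect-ccd hostname is from the actual report.
sample = """\
127.0.0.1 localhost
127.0.0.1 detect-ccd.creativecloud.adobe.com  # illustrative entry
"""
print(find_adobe_entries(sample))
```

Point it at `/etc/hosts` on Linux or macOS, or `C:\Windows\System32\drivers\etc\hosts` on Windows, to check your own machine.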
An ultra-lightweight real-time operating system for resource-constrained IoT and embedded devices. Kernel footprint under 10 KB, 2 KB minimum RAM, preemptive priority-based scheduling. TinyOS GitHub page Written in C, open source, and supports ARM and RISC-V.
Another major improvement in Redox: a brand new scheduler which improves performance under load considerably. We have replaced the legacy Round Robin scheduler with a Deficit Weighted Round Robin scheduler. Due to this, we finally have a way of assigning different priorities to our Process contexts. When running under light load, you may not notice any difference, but under heavy load the new scheduler outperforms the old one (eg. ~150 FPS gain in the pixelcannon 3D Redox demo, and ~1.5x gain in operations/sec for CPU bound tasks and a similar improvement in responsiveness too (measured through schedrs)). Akshit Gaur Work is far from over in this area, as they're now moving on to replacing the static queue logic with the dynamic lag-calculations of full EEVDF.
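For those unfamiliar with the algorithm, here's a generic textbook-style sketch of deficit weighted round robin in Python - this is not Redox's actual scheduler code, just an illustration of how weights translate into CPU share: each context earns budget (its "deficit") proportional to its weight every round, and may only run work it can pay for.

```python
from collections import deque

# Generic deficit weighted round robin sketch (not Redox's real scheduler).
# Each task is (name, weight, list_of_job_costs); a higher weight earns a
# bigger budget each round, so that task gets through its work faster.
def dwrr(tasks, quantum=10, rounds=4):
    queue = deque((name, w, deque(jobs), 0) for name, w, jobs in tasks)
    order = []
    for _ in range(rounds):
        for _ in range(len(queue)):
            name, w, jobs, deficit = queue.popleft()
            deficit += quantum * w          # earn budget proportional to weight
            while jobs and jobs[0] <= deficit:
                deficit -= jobs.popleft()   # pay for the job we just ran
                order.append(name)
            if jobs:
                queue.append((name, w, jobs, deficit))
            # a task with no jobs left simply drops out of the queue
    return order

# "hi" has twice the weight of "lo", so it finishes its four jobs first.
print(dwrr([("hi", 2, [10] * 4), ("lo", 1, [10] * 4)]))
```

With equal weights the two tasks would strictly alternate; doubling one weight lets that task complete two jobs per round to the other's one, which is the priority behaviour the Redox team was after.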
You'd think if there was one corner of the open source world where you wouldn't find drama it'd be open source office suites, but it turns out we could not have been more wrong. First, there's The Document Foundation, stewards of LibreOffice, ejecting a ton of LibreOffice contributors. In the ongoing saga of The Document Foundation (TDF), their Membership Committee has decided to eject from membership all Collabora staff and partners. That includes over thirty people who have contributed faithfully to LibreOffice for many years. It is interesting to see a formal meritocracy eject so many, based on unproven legal concerns and guilt by association. This includes seven of the top ten core committers of all time (excluding release engineers) currently working for Collabora Productivity. The move is the culmination of TDF losing a large number of founders from membership over the last few years with: Thorsten Behrens, Jan 'Kendy' Holesovsky, Rene Engelhard, Caolan McNamara, Michael Meeks, Cor Nouws and Italo Vignoli no longer members. Of the remaining active founders, three of the last four are paid TDF staff (of whom none are programming on the core code). Michael Meeks The end result seems to be that Collabora is effectively forking LibreOffice, which feels like we're back where we were 15 years ago when LibreOffice forked from OpenOffice. There seems to be a ton of drama and infighting here that I'm not particularly interested in, but it's sad to see such drama and infighting result in needless complications for developers, end users, and distributors alike. As if this wasn't enough, there's also forking drama in OnlyOffice land, the other open source office suite, licensed under the AGPL. This open source office suite has been forked by Nextcloud and IONOS into Euro-Office, in pursuit of digital sovereignty in the EU. It's also not an entirely unimportant detail that OnlyOffice is Russian, with most of its developers residing in Russia. 
Anyway, the OnlyOffice team has not taken this in stride, claiming there's a violation of the AGPL license going on here, specifically because OnlyOffice adds contradictory attribution terms to the AGPL. It's a complicated story, but most experts in this area seem to disagree with OnlyOffice's interpretation. We're in for another messy time.
This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government. Axel Rietschin It won't take long into this series of articles before you start wondering how anyone manages to ship anything at Microsoft. If even half of this is accurate, this company should be placed under some sort of external oversight.
I assume I don't have to explain the difference between big-endian and little-endian systems to the average OSNews reader, and while most systems are either dual-endian or (most likely) little-endian, it's still good practice to make sure your code works on both. If you don't have a big-endian system, though, how do you do that? When programming, it is still important to write code that runs correctly on systems with either byte order (see for example The byte order fallacy). But without access to a big-endian machine, how does one test it? QEMU provides a convenient solution. With its user mode emulation we can easily run a binary on an emulated big-endian system, and we can use GCC to cross-compile to that system. Hans Wennborg If you want to make sure your code isn't arbitrarily restricted to little-endian, running a few tests this way is worth it.
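The "byte order fallacy" essay the quote links to makes a complementary point: if you assemble multi-byte values from individual bytes with shifts, rather than casting a byte buffer to an integer pointer, the code is correct on any host, big- or little-endian alike. A minimal sketch of that technique (my own illustration, not from the linked article's code):

```python
def read_u32_le(buf: bytes) -> int:
    """Read a little-endian 32-bit value byte by byte.
    The shifts make the result independent of the host's byte order."""
    return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24)

def read_u32_be(buf: bytes) -> int:
    """Read a big-endian 32-bit value the same way."""
    return (buf[0] << 24) | (buf[1] << 16) | (buf[2] << 8) | buf[3]

data = bytes([0x78, 0x56, 0x34, 0x12])
assert read_u32_le(data) == 0x12345678
assert read_u32_be(data) == 0x78563412
```

The QEMU user-mode approach is still valuable for catching the places where code accidentally depends on host order, which is exactly what code written in this style avoids.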
I don't like to cover "current events" very much, but the American government just revealed a truly bewildering policy effectively banning the import of new consumer router models. This is ridiculous for many reasons, but if this does indeed come to pass, it may be beneficial to learn how to "homebrew" a router. Fortunately, you can make a router out of basically anything resembling a computer. Noah Bailey I genuinely can't believe making your own router with Linux or BSD might become a much more widespread thing in the US. I'm not saying it's a bad thing - it'll teach some people something new - but it just feels so absurd.
Why do so many people keep falling for the same trick over and over again? With an over $400 billion gap between the money invested in AI data centers and the actual revenue these products generate, Silicon Valley slowly returned to the tested and trusted playbook: advertising. Now, ads are starting to appear in pull requests generated by Copilot. According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." David Uzondu at Neowin It turns out that Microsoft has added ads to over 1.5 million Copilot pull requests on GitHub, and they're even appearing on GitLab, one of the GitHub alternatives. The reasoning is clear, too, of course: "AI" companies and investors have poured ungodly amounts of money into "AI" that is impossible to recover, even with paying customers. As such, the logical next step is ads, and many "AI" companies are already starting to add advertising to their pachinko machines. It was only a matter of time before Copilot would start inserting ads into the pull requests it ejaculates over all kinds of projects. This isn't the first time a once-free service turns on its users, but it's definitely one of the quickest turnarounds I've ever seen. Usually it takes much longer before companies reach the stage of putting ads in their products to plug any financial bleeding, but with the amount of money poured into this useless black hole, it really shouldn't be surprising we're already there. I'm sure Copilot's competitors, like Claude, will soon follow suit. They're enshittifying Git, and developers are just letting it happen. No wonder worker exploitation is so rampant in Silicon Valley.
By reimplementing these features using capabilities, we made the kernel simpler by moving complex scheme and namespace management out of it which improved security and stability by reducing the attack surface and possible bugs. At the same time, we gained a means to support more sandboxing features using the CWD file descriptor. This project leads the way for future sandboxing support in Redox OS. As the OS continues to move toward capability-based security, it will be able to provide more modern security features. Ibuki Omatsu Redox seems to be making the right decisions at, crucially, the right time.
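For readers unfamiliar with the "CWD as a file descriptor" idea, the POSIX openat() family illustrates the same principle (this is an analogy, not Redox's actual API): once a process holds a directory file descriptor, opens are resolved relative to it, so the descriptor itself acts as the capability, and a sandbox only needs to control which descriptors it hands out. A small Python sketch on Linux:

```python
import os
import tempfile

# Set up a "sandbox" directory containing one file.
sandbox = tempfile.mkdtemp()
with open(os.path.join(sandbox, "data.txt"), "w") as f:
    f.write("hello")

# The directory file descriptor is the capability: a process holding
# only dir_fd can open files relative to that directory, without
# needing (or being able to use) an absolute path of its own.
dir_fd = os.open(sandbox, os.O_RDONLY)
fd = os.open("data.txt", os.O_RDONLY, dir_fd=dir_fd)  # openat semantics
content = os.read(fd, 16)
os.close(fd)
os.close(dir_fd)
```

Moving this kind of path resolution out of the kernel and into capability handles is what shrinks the attack surface the quote describes.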