Welcome news coming out of Jolla, the company that develops Sailfish OS. Up until now, if you bought their Jolla C2 smartphone, you had to pay a yearly subscription fee in order to get updates (with the first year included in the purchase price). Today they've announced they're dropping this model, and they now guarantee five years of free updates. We're happy to announce that from now onwards long-term Sailfish OS updates are included free-of-charge to all Jolla C2 devices for a minimum of 5 years. This applies also to everybody who have already purchased the Jolla C2. Announcement at the Jolla forums I wouldn't be surprised if Jolla was simply running into a lot of resistance to this subscription model from potential customers. Nobody likes subscriptions, and I think that counts doubly so for the kinds of people interested in buying a phone like the C2 with Sailfish OS.
With the possibility that Google is going to make some big changes to the open source status of Android, the importance of smartphones that don't run either iOS or (some form of) Android is definitely increasing. Linux on smartphones is not as complete as iOS or Android, and I personally think one of the primary reasons for that is a lack of easy access to devices that don't require manual installation or other forms of hackery, only to then end up with a partially supported device because the device in question was never originally designed to run regular Linux. A few companies are trying to change this, developing Linux-first smartphones instead. One of the newcomers here is Liberux, a Spanish company that just unveiled the crowdfunding campaign for their Liberux Nexx, a Debian-powered smartphone with excellent specifications and some unique additions you won't find on any other smartphone. It's powered by an octa-core Rockchip RK3588S (four Cortex-A76 cores and four Cortex-A55 cores up to 2.4GHz), 32GB LPDDR4x RAM, tons of expandable storage, and a 6.34'' 2400×1080 OLED display. At the top of the device sits something you won't find on many other smartphones: dedicated hardware switches to physically cut power to the modem, Wi-Fi/Bluetooth chip, and the microphone/camera array. When all three switches are disabled, a number of other features, like GPS and sensors, are also turned off. On top of all this, various internal components are designed to be replaceable and possibly even upgradeable, with manufacturing of the device taking place in Europe - which probably refers to assembly, but still. The device is supposed to become open source, too. It will run Debian 13 with a customised version of the mobile GNOME Shell using a standard Linux kernel. Android applications will also be supported using Waydroid, which you'll most likely have to rely on for things like banking and other application categories exclusive to iOS and Android. Liberux promises that any development done on both the Linux distribution and other related applications will be done openly, which is something we can hold them to quite easily. I'm always wary of crowdfunding campaigns, and all the usual caveats, warnings, and concerns still apply here. I'm highlighting this campaign because I feel like many of the kinds of people who read OSNews are longing for a modern, capable smartphone that runs not iOS or Android, but proper Linux, even if Linux on smartphones isn't quite there yet to go toe-to-toe with the two duopolists. For more information on the device and the people involved, be sure to read LINMOB.net's excellent interview with Liberux. Liberux has told me they want to send over a review device once development has reached a point where that's possible. So, assuming the crowdfunding campaign is successful, you can look forward to a review of the Liberux Nexx on OSNews somewhere between now and mid-2026.
Kian Bradley was downloading something using BitTorrent, and noticed that quite a few trackers were dead. Most of the trackers were totally dead. Either the hosts were down or the domains weren't being used. That got me thinking. What if I picked up one of these dead domains? How many clients would try to connect? Kian Bradley It turns out the answer is a lot.
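Bradley's post has the actual numbers, but to give a rough idea of what such an experiment involves, here's a minimal, hypothetical Python sketch of the tracker side: an HTTP endpoint that answers /announce requests from still-configured clients with a polite bencoded refusal while counting them. The port, the refusal message, and the counting scheme are my own assumptions, not details from his write-up.

```python
# Hypothetical sketch: a do-nothing BitTorrent tracker that only counts the
# clients still announcing to a revived dead domain. Not Bradley's actual
# setup; the port and the bencoded "failure reason" reply are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

seen_peers = set()      # (client IP, info_hash) pairs observed so far
announce_count = 0      # total announce requests received


def bencode_failure(msg: str) -> bytes:
    # A tracker can refuse politely with a bencoded {"failure reason": msg}.
    m = msg.encode()
    return b"d14:failure reason" + str(len(m)).encode() + b":" + m + b"e"


class Announce(BaseHTTPRequestHandler):
    def do_GET(self):
        global announce_count
        url = urlparse(self.path)
        if url.path != "/announce":
            self.send_error(404)
            return
        params = parse_qs(url.query)
        info_hash = params.get("info_hash", [""])[0]
        announce_count += 1
        seen_peers.add((self.client_address[0], info_hash))
        print(f"announces={announce_count} "
              f"unique peer/torrent pairs={len(seen_peers)}")
        body = bencode_failure("tracker retired; please remove me")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep stdout for our own counters only


if __name__ == "__main__":
    # 6969 is the port conventionally used by BitTorrent trackers.
    HTTPServer(("0.0.0.0", 6969), Announce).serve_forever()
```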
Accessibility is something that doesn't get nearly enough attention, especially considering that not only will we all need accessibility features eventually as we grow older, but a lot of accessibility features are also just helpful even if you don't technically need them. Given these facts, it's a shame that accessibility is usually an afterthought, doubly so on open source desktops, a problem we recently talked about. But what if you don't just need to use a few applications as, say, a blind person, but also actually program as a blind person? Acidic Light, accessibility engineer at KDE e.V., has published a blog post about how screen readers actually work, and what it's like to program while blind, and the conclusions are not exactly great. I truly feel that, based on my experience with KDE and my experience actually delving into the weeds with AccessKit in a custom UI system, that accessibility programming just isn't accessible. Unless you happen to already understand the way each platform works, trying to find resources on how to actually let a screen reader know your UI exists is just painful. It's going to involve reading code other people have already written. It's going to involve hours, if not days, if not weeks of research and painful debugging. You likely won't be able to ask many people for help, because they'll know as much as you do. Acidic Light If the people who best know what is needed to make a program accessible have so many problems actually making programs accessible, because the tooling, documentation, and institutional knowledge just isn't there, what hope do other programmers have of making their code accessible? If a blind programmer can't scratch their own itch, so to speak, we're never going to reach a point where accessibility becomes a given. I'm very happy awareness of accessibility is growing, but I feel like this isn't the first time we've seen an increase in accessibility awareness only for it to eventually fizzle out without meaningful improvements for those who need it the most. I really hope it sticks this time.
A new version of Plasma is here, and it feels even more like /home, as it becomes smoother, friendlier and more helpful. Plasma 6.4 improves on nearly every front, with progress being made in accessibility, color rendering, tablet support, window management, and more. KDE Plasma 6.4 release announcement KDE Plasma 6.4 comes with a big improvement in window and virtual desktop management, allowing you to create entirely custom tiled configurations per virtual desktop. Accessibility was another focus of this release, as we talked about a few weeks ago, bringing number pad mouse pointer navigation, improved desktop zoom, screen reader improvements, better contrast in the dark theme, tons of little legibility improvements across the desktop environment and its applications, and more. Furthermore, there's now finally a dedicated page in Settings for animations, so you no longer have to dig your way through the oddly placed and obtuse Desktop Effects page. Notifications have been improved as well, with new additions like a speed graph in file transfer notifications or Plasma notifying you when you're trying to use a muted microphone input. KRunner can now visualise colours when searching for a hex code, Spectacle has received some love, various widgets have been touched up, and much more. There's a brand new HDR wizard, support for Extended Dynamic Range, and the addition of the P010 video color format. System Monitor will now show usage information for Intel and AMD GPUs, and Info Center will show raw sensor data from the sensors in your device. There's a ton more, as this is a fairly major release. You can download and compile KDE Plasma 6.4 now, or just wait a few days until it lands in your distribution's repository.
I hate how these months keep going down like vodka-martinis on an Italian beach, but at least we get another progress report for Haiku every time. Aside from the usual small changes and bug fixes, the most important of which is probably allowing the EXT4 driver to read and write again, there's this little paragraph at the end which definitely stands out. This month was a bit lighter than usual, it seems most of the developers (myself included) were busy with other things... However, HaikuPorts remained quite active: most months, at this point, there are more commits to HaikuPorts than Haiku, and sometimes by a significant margin, too (for May, it was 52 in Haiku vs. 258 in HaikuPorts!). I think overall this is a sign of Haiku's growing maturity: the system seems stable enough that the porters can do their work without uncovering too many bugs in Haiku that interrupt or halt their progress. Haiku activity report for May I definitely hope that this positive read is correct, as it would be a shame for the project to run into declining activity and contributions just as it seems to be serving as a solid base for quite a wide variety of applications. I've definitely been seeing more and more people giving Haiku a try lately and coming away impressed, but of course, that's just anecdotal and I have no idea if that means Haiku has reached a certain point of maturity. One thing that definitely does indicate Haiku is a lot more stable and generally usable than most people think is the massive amount of solid ports the platform can handle, from Firefox to LibreOffice, and everything in between. I think a lot of people would be surprised by just how far they can get with their day-to-day computing needs with Haiku, assuming their hardware can boot Haiku and is properly supported, of course. My opinion on Haiku has not changed, but I'm a random idiot you shouldn't be listening to. The cold and harsh truth is that old people like me, who want their BeOS boomerware but in 2025, are a small minority who are impossible to please. The Haiku team's focus on getting modern software ported to Haiku, instead of trying to convince people to code brand new native Haiku applications, is objectively the correct choice to ensure the viability of the platform going forward. If Haiku wishes to fully outgrow its hobby status, looking towards the future is a far better approach than clinging to the past, and unsurprisingly, Haiku's developers are more than smart enough to realise that.
Wayland this, Liquid Glass that - but what if you just want a nice, comforting text-based environment? Sure, you can just boot straight into a terminal, or perhaps get fancy about it with Screen or whatever, but what if you want a text-based environment, but don't want to give up windows, menus, your mouse? How about a graphical user interface made up entirely of text? It looks exactly like what you'd think this would look like, and I find it absolutely fascinating. I'm not entirely sure how usable it is or who or what use case it's optimised for, but I adore the dedication to the cause. It works on both Linux and FreeBSD, and most likely other systems as well.
A new image loading machinery, called glycin, has been in the works for a while. It is already used by GNOME's default Image Viewer (Loupe), as well as by a bunch of other apps. Glycin provides many security benefits over existing solutions due to the use of the Rust programming language and sandboxing. Distributions will now be able to use the security benefits and broader format support of glycin for other GNOME apps, thumbnailers, and GNOME Shell, without changing any existing software. This is made possible by a new option for GNOME's legacy image-loading library, GdkPixbuf, to use glycin internally. Sophie Herold Clearly, this is an improvement over the previous image loading library, but there's a bit of a catch that is in line with GNOME's increased reliance on systemd features: glycin only works on Linux due to its sandbox mechanisms and how it communicates with its loaders. However, there's no need to fret this time, and that's why I'm posting this item - you just know this tiny little tidbit will find its way into internet discussion forums and social media as another example of GNOME not caring about non-Linux users. While some of the sandboxing and communication features in glycin can be made to work on the BSDs and perhaps macOS, it won't be perfect. As such, great care has been taken to ensure non-Linux platforms can continue to use GdkPixbuf just fine, since support for other platforms is part of the goals of this change before traditional loaders are removed. As a general solution for other platforms, I am planning a mechanism to compile the loaders into the library. This will not provide sandboxing and format extendability without recompilation. But since most loaders are written in Rust, this is still a huge step-up security wise. Contributions for support on other platforms are welcome. For GdkPixbuf users, this will not pose an immediate issue since traditional gdk-pixbuf loaders are not going away until all platforms it supported are supported by glycin. Sophie Herold There are a few other issues, as any change like this tends to cause, like the list of supported image formats. Glycin supports AVIF, BMP, DDS, Farbfeld, QOI, GIF, HEIC, ICO, JPEG, JPEG XL, OpenEXR, PNG, PNM, SVG, TGA, TIFF, and WEBP out of the box, but some of the subformats within each of these might potentially not work entirely correctly due to incorrect implementations by, say, camera manufacturers. A special case here is the TIFF format, which apparently still has a number of issues and might end up relying on a fallback. Glycin brings a ton of benefits, such as improved colour support for things like HDR and wider colour gamut displays, better metadata support, basic image editing functions out of the box, increased performance, as well as the benefits inherent in using a memory-safe language like Rust.
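To make the "without changing any existing software" point concrete, here's a minimal PyGObject sketch of ordinary GdkPixbuf consumer code (the file names are placeholders of my choosing). The idea behind the new option is that calls like these keep working unchanged; whether the pixels come from a traditional gdk-pixbuf loader or from glycin's sandboxed loaders is an internal detail.

```python
# Minimal PyGObject sketch of typical GdkPixbuf consumer code. The glycin
# change is about what happens underneath calls like these; the code itself
# stays the same. File names are placeholders.
import gi
gi.require_version("GdkPixbuf", "2.0")
from gi.repository import GdkPixbuf

# Load an image through GdkPixbuf's public API.
pixbuf = GdkPixbuf.Pixbuf.new_from_file("photo.jpg")
print(f"{pixbuf.get_width()}x{pixbuf.get_height()}, "
      f"{pixbuf.get_n_channels()} channels, alpha={pixbuf.get_has_alpha()}")

# Scaling and saving go through the same library surface as well.
thumb = pixbuf.scale_simple(128, 128, GdkPixbuf.InterpType.BILINEAR)
thumb.savev("thumb.png", "png", [], [])
```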
Recently, a Reddit user discovered a rare RCA Spectra 70/35 computer control panel from 1966 in their family's old collapsed garage, posting photos of the pre-moon landing mainframe component to the "retrobattlestations" subreddit that celebrates vintage computers. After cleaning the panel and fixing most keyswitches, the original poster noted that actually running it would require "1,500lbs of mainframe" - the rest of the computer system that's missing. Benj Edwards at Ars Technica Apparently, no photos of this panel existed online, and it may be one of the few - if not the only - surviving examples of such a panel. Of course, it's effectively useless without all of the other chunks that make up the entire Spectra mainframe, but it's still an interesting find. The person who found it intends to turn it into what is essentially a piece of home decor, but maybe we'll get lucky and someone else out there who has been collecting parts and pieces to assemble a working RCA Spectra 70/35 can do something more productive with it.
Are you still using Windows 7, 8, 8.1, or a 32-bit version of Windows, relying on LibreOffice for your sexy office tasks of writing TPS reports and calculating and tabulating juicy, plump numbers? Bad news: the next version of LibreOffice drops support for Windows 7 and 8/8.1, and deprecates the 32-bit Windows builds. Buried deep in the release notes of the second beta for LibreOffice 25.8, it reads: Support for Windows 7 and 8/8.1 was removed. Support for x86 (32-bit) Windows builds is deprecated. LibreOffice 25.8 beta 2 release notes I honestly doubt many people actually still rely on LibreOffice on these platforms, and even if for some unfathomable reason you do, you are probably also okay with sticking with an older version of LibreOffice to keep your weird setup going a few years longer. You do you.
Since Void Linux uses a rolling release model, there's not much to report on in the form of new releases and major new features, so I'm taking the release of version 0.60 of XBPS, Void Linux's package manager, to cheat my way into talking about this excellent Linux distribution. I always think of Void as "the BSD of Linux distributions", which should give you some vague hint as to what it's going for. XBPS 0.60 doesn't come packed with major new features either, and mostly fixes a ton of bugs, addresses a few memory leaks, and changes the way held dependencies and directory removal/creation work when reinstalling a package, just to name a few. There are also some performance improvements, as there were apparently some problems in that department due to the increasing number of virtual packages in the Void repository. If you're looking for a more traditional, hands-on Linux distribution, Void is an excellent choice. It's my back-up for if (let's face it: when) Fedora messes something up.
Disk images have been valuable tools marred by poor performance. In the wrong circumstances, an encrypted sparse image (UDSP) stored on the blazingly fast internal SSD of an Apple silicon Mac may write files no faster than 100 MB/s, typical for a cheap hard drive. One of the important new features introduced in macOS 26 Tahoe is a new disk image format that can achieve near-native speeds: ASIF, documented here. This has been detailed as a major improvement in lightweight virtualisation, where it promises to overcome the most significant performance limitation of VMs running on Apple silicon Macs. However, ASIF disk images are available for general use, and even work in macOS Sequoia. This article shows what they can do. Howard Oakley Exactly what it says on the tin.
With the release of Android 16, Google changed how it develops Android. Development is now taking place behind closed doors, with the code dropped after the corresponding version has been released to Pixel devices. Well, it turns out this wasn't the only thing Google has changed about Android development. As the developers of CalyxOS, a popular de-Googled Android ROM, dove into the Android 16 AOSP source code, they realised something very important was missing: the device-specific source code for modern Pixel devices. Android 16 was released to AOSP yesterday but with a one big difference than typical releases: Google did not publish any device-specific source code for supported, modern Pixel devices. In previous years, Google released full device trees alongside new Android versions. This allowed developers to build and boot AOSP on Pixel hardware relatively easily. With Android 16, only the platform/framework code has been released. The device trees are missing, at least for now. This means AOSP 16 cannot currently be built or run on any recent Pixel device easily just using official source. It's unclear whether this is a delay or a policy change. Either way, it seriously disrupts custom ROM development and our porting efforts. CalyxOS on Reddit If this is truly a policy change, it's a big one that affects custom ROM developers considerably. Pixel devices were "special" among custom ROM developers because support for them was part of AOSP releases, so they were well-supported by projects like CalyxOS, GrapheneOS, and LineageOS, including all the hardware components, and with quick updates. Without access to the Pixel-specific source code for the Pixel 6 to Pixel 9a, these devices will now have to be treated like any other Android phone as far as ROM developers go, meaning it'll take a lot more work and time to get them to work properly with new major Android releases. Google did not announce this potential policy change, and this has some in the custom Android ROM community on edge. I've been talking to people in the custom ROM community, and the story goes that a few months ago, at least one of these communities was approached by a journalist who wanted to talk to them. This journalist claimed that Google intends to discontinue the Android Open Source Project, with the first step Google would take being no longer releasing the device-specific Pixel source code (something nobody knew would happen until yesterday). The fact that this first step has now become a reality lends some credence to the journalist's claim that Google is discontinuing AOSP. However, since such tips are not uncommon, and since there was no way to verify the claim, the custom ROM developers in question didn't really know what to do with it. During the writing of this article over the past 12 hours, Google itself has also responded to what is apparently a growing, now public concern in the wider Android community. Seang Chau, Google VP and GM of Android Platform, published a tweet denying that Google has any intention of closing up shop on AOSP. We're seeing some speculation that AOSP is being discontinued. To be clear, AOSP is NOT going away. AOSP was built on the foundation of being an open platform for device implementations, SoC vendors, and instruction set architectures. AOSP needs a reference target that is flexible, configurable, and affordable - independent of any particular hardware, including those from Google. 
For years, developers have been building Cuttlefish (available on GitHub as the reference device for AOSP) and GSI targets from source. We continue to make those available for testing and development purposes. Seang Chau This seems like a solid denial from Google, but it leaves a lot of room for Google to make a wide variety of changes to Android's development and open source status without actually killing off AOSP entirely. Since Android is licensed under the Apache 2.0 license, Google is free to make "Pixel Android" - its own Android variant - closed source, leaving AOSP up until that point available under the Apache 2.0 license. This is reminiscent of what Oracle did with Solaris. Of course, any modifications to the Linux kernel upon which Android is built will remain open source, since the Linux kernel is licensed under the GPLv2. If Google were indeed intending to do this, what could happen is that Google takes Android closed source from here on out, spinning off whatever remains of AOSP up until that point into a separate company or project, as potentially ordered during the antitrust case against Google in the United States. This would leave Google free to continue developing its own "Pixel Android" entirely as proprietary software - save for the Linux kernel - while leaving AOSP in the state it's in right now outside of Google. This technically means AOSP is "not going away", as Chau claims. Of course, other parties would then be free to continue working on and contributing to AOSP, but AOSP itself would no longer benefit from the work done by Google. Again, this feels very similar to how illumos and OpenIndiana are built atop the last open source release of Solaris from 2010, without any of the additional work Oracle has done on Solaris since then. As you can tell, there's a lot of speculation here, because even if all of this is true, it seems the ongoing court case and any rulings that come of it will play a major role in Google's decision-making process. The Android Open Source Project has been gutted over the years, with Google leaving more and more parts of it to languish, while moving a lot of code and functionality into proprietary components like Google Mobile Services and Google Play Services. Taking "Pixel Android" closed source almost feels like the natural next step in the process of gutting AOSP that's been ongoing for well over a decade. As it stands today, a default AOSP installation requires a lot of additional components and applications before it can be considered a complete mobile operating system.
GNOME has announced it'll be increasing its dependency on systemd, the popular init system used by most (popular) Linux distributions. While GNOME already had a few relatively inconsequential dependencies on systemd, it was effectively not a huge problem to run GNOME on operating systems that don't have systemd, which most notably includes the various flavours of BSD. That's going to change. There are going to be two changes, one of which is relatively minor, and one of which will pose much bigger problems. The minor change involves GDM becoming dependent on systemd's userdb infrastructure in order to clean up a lot of GDM's code involved in multi-seat setups and remote login. Currently, this works through a series of hacks that the GDM developers are going to clean up, switching to systemd-userdb to dynamically allocate user accounts and then run each login screen as a "unique user". To aid non-systemd environments during this transition, GDM will get a temporary alternate code path that enables you to run GDM without systemd-userdb. So if you compile GDM against elogind, GDM will use an alternative trick to enable multiple graphical sessions under the same user. This trick will remain in place at least until GNOME 50, but its future after that is uncertain. The second change is much more involved. Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there's no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn't completely bit rotted is the fact that GDM's aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen. Well, that has now changed. The hacks in GDM are gone, and the login screen's session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we'd like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed. Adrian Vovk Mitigating this change will be a lot more involved for operating systems that don't use systemd, and the blog post goes into detail about what, exactly, needs to be done in systemd-less environments. There are quite a few systemd components and other little tidbits that you will need to find or create alternatives for, and considering you'll need to have all of it in place by GNOME 50, roughly a year from now, I can imagine this causing quite a few headaches for platforms like the BSDs and Linux distributions using init systems other than systemd. With these changes, GNOME further solidifies itself as a Linux desktop only - and lest anyone forget, that's entirely within their right to do. Systemd haters can jump up and down all they want, but in the end, they have no right to demand that GNOME developers spend precious time and resources testing GNOME on and developing it for platforms that they themselves do not use. 
They're clearly targeting the trifecta of Linux, systemd, and Wayland, and that's their choice to make, not anyone else's. Still, if operating systems like OpenBSD and FreeBSD, or Linux distributions without systemd, intend to continue offering a fully functional GNOME desktop, they're going to have some work to do.
And I've got another custom hobby operating system for you today: Munal OS. An experimental operating system fully written in Rust, with a unikernel design, cooperative scheduling and a security model based on WASM sandboxing. Munal OS GitHub page Munal OS has no bootloader, but is instead compiled into a single EFI binary that contains all it needs to function, including a few applications. Since Munal OS relies on a PCI driver that communicates with QEMU via the VirtIO 1.1 specification for things like input and graphics, it can't yet run on real hardware. It has its own UI toolkit, and comes with applications like a basic web browser, a text editor, and a Python terminal.
Xeneva is an operating system for both x86_64 and ARM64 architectures, built from the ground up. The Kernel is known as 'Aurora' with hybrid kernel design and the entire operating system is known as 'Xeneva'. XenevaOS GitHub page It's remarkably complete, with driver loading and linking, up to SSE 3 support, USB3 and Intel HD audio support, networking, and a whole lot more of the basics that make up a modern complete operating system. On top of all this, it also has a compositing window manager, a desktop environment, a terminal with VT100 support, Freetype2 font rendering, and much more. It also comes with a few basic applications like a file manager, calculator, audio player, and so on. It's written in C (and some C++), and uniquely, can only be built in a Windows environment, something you don't see very often. It definitely looks quite impressive.
Today, we're bringing you Android 16, rolling out first to supported Pixel devices with more phone brands to come later this year. This is the earliest Android has launched a major release in the last few years, which ensures you get the latest updates as soon as possible on your devices. Android 16 lays the foundation for our new Material 3 Expressive design, with features that make Android more accessible and easy to use. Seang Chau at the Google blog Android 16 doesn't seem like a very big release, and that's because for most users, it really isn't. There are some neat features in here, like improved notification grouping, live notifications, a slew of protection features for people who are at increased risk (think journalists or victims of abuse), and proper desktop-style windowing on tablets, which seems like the tentpole feature for now. The Material 3 Expressive design is not really here yet, though, as that will come in subsequent Android 16 updates. The release for devices coincides with the release of the source code, which is no longer released as part of the development process, but dumped across the fence at release time. This means that those of us using a de-Googled Android ROM - I use GrapheneOS - will have to wait a bit longer than we're used to before getting the new version.
As part of its WWDC announcements, Apple has unveiled Containerization, which uses macOS' virtualisation framework to run Linux containers on Apple Silicon Macs. Containerization executes each Linux container inside of its own lightweight virtual machine. Clients can create dedicated IP addresses for every container to remove the need for individual port forwarding. Containers achieve sub-second start times using an optimized Linux kernel configuration and a minimal root filesystem with a lightweight init system. vminitd is a small init system, which is a subproject within Containerization. vminitd is spawned as the initial process inside of the virtual machine and provides a GRPC API over vsock. The API allows the runtime environment to be configured and containerized processes to be launched. vminitd provides I/O, signals, and events to the calling process when a process is ran. Containerization GitHub page Alongside this new tool, Apple also released container, which creates and runs OCI-compliant container images. Yes, both of these names are horribly generic and are definitely going to lead to confusion in online discussions and writing, but the tools themselves seem quite nice. People stuck on macOS who need to do Linux work can now easily get that work done - if you're okay with using Electron for developers, of course, which is what containers really are. Clearly, nobody can ignore Linux, not even Apple or Microsoft.
macOS Tahoe is the final software update that Intel-based Macs will get, as Apple works to phase them out following its transition to Apple silicon. During its Platforms State of the Union event, Apple said that Intel Macs won't get macOS 27, coming next year, though there could still be updates that add security fixes. Juli Clover at MacRumors Not particularly surprising, but definitely not great for someone who bought one of those ungodly expensive Intel Mac Pros only a few years ago - it wasn't taken off the shelves until 2023. That's a hard pill to swallow, and definitely something I do not think should be legal.
For years now - it feels more like decades, honestly - Apple has been trying a variety of approaches to make the iPad more friendly to power users, most notably by introducing, and subsequently abandoning, various multitasking models. After its most recent attempt - Stage Manager - fell on deaf ears, the company has thrown its hands up in the air and just implemented what we all wanted on the iPad anyway: a normal windowing environment. Apple today revealed an overhaul of iPad multitasking, introducing a completely new windowing system, a macOS-style Menu Bar, a pointer, and more. The centerpiece of the multitasking improvements is a new macOS-style windowing system. Apps still launch in full-screen by default, preserving the familiar iPad experience, but users can now resize apps into windows using a new grab handle. If an app was previously used in a windowed state, it will remember that layout and reopen the same way next time. Hartley Charlton at MacRumors The new window manager includes tiling features, Expose, support for multiple displays, and swiping twice on the home button will minimise all open windows. It's literally the macOS way of managing windows transplanted onto the iPad, with some small affordances for touch input. This is excellent news, and should make the multitasking features of the iPad, which, at this point, is as powerful as a MacBook, much more accessible and effortless than all those hidden gesture-based features from before. The amount of RAM in your iPad seems to determine how many active windows you can have open before the older ones get put to sleep, from four on the oldest iPad Pro models, to many more on the most recent models. Any windows above that limit will still be visible, but will just be a screenshot of their most recent state until you interact with them again. Any windows above a limit of twelve will be pushed to the recents screen instead. In addition, and almost just as important, iPadOS 26 also introduces proper background processes, allowing applications to actually keep running in the background instead of being put to sleep. Anyone who has ever done any serious work on an iPad that involves long processes like exporting a video will consider this a godsend. Now all we need is a proper terminal and Xcode and the iPad can be a real computer.
FreeBSD 14.3 has been released, an important point release for those of us using the FreeBSD 14.x branch. This release brings 802.11ac (Wi-Fi 5) support to many modern laptop wireless chips, OCI container images are now available in Docker and GitHub repositories, and a number of cornerstone packages have been updated to their latest versions.
If you ever wanted to know what it was like to be an engineer at Google during the early to late 2000s, here you go. Now even though Google is fundamentally a spyware advertising company (some 80% of its revenue is advertising; the proportion was even higher back then), we Engineers were kept carefully away from that reality, as much as meat eaters are kept away from videos of the meat industry: don't think about it, just enjoy your steak. If you think about it it will stop being enjoyable, so we just churned along, pretending to work for an engineering company rather than for a giant machine with the sole goal of manipulating people into buying cruft. The ads and business teams were on different floors, and we never talked to them. Elilla Even back then, Google knew full well that what they were doing and working towards was deeply problematic and ethically dubious, at best, and reading about how young, impressionable Google engineers at the time figured that out by themselves is kind of heartbreaking. In those days, Google tried really hard to cultivate an image of being different than Apple or Microsoft, a place where employees were treated better and had more freedom, working for a company trying to make the web a better place. Of course, none of that was actually true, but for a short while back then, a lot of people fell for it - yes, including you, even if you now say you didn't - and the experiences of people on the inside at the time only confirm it never was.
Apple at WWDC announced iOS 26, introducing a comprehensive visual redesign built around its new "Liquid Glass" concept, alongside expanded Apple Intelligence capabilities, updates to core communication apps, and more. Liquid Glass is a translucent material that reflects and refracts surroundings to create dynamic, responsive interface elements, according to Apple. The new design language transforms the Lock Screen, where the time fluidly adapts to available space in wallpapers, and spatial scenes add 3D effects when users move their iPhone. Meanwhile, app icons and widgets gain new customization options, including a striking clear appearance. Tim Hardwick at MacRumors Apple also posted a video on YouTube where you can see the new design language in motion, which gives a bit of a better idea of what it's actually like. Of course, before you believe anyone who's writing about this new Liquid Glass design language, the only true way to form a coherent opinion of a user interface is through usage, so keep that in mind. Looking at the video, the good part that immediately jumps out at me about this Liquid Glass stuff is the animations informing you where stuff is coming from and where it's going. These are the sort of affordances I was writing about almost 20 years ago, when Compiz' animations and effects made windows and virtual desktops feel like "real" objects that had a physical presence in a space. Apple's Liquid Glass seems to have the same effect, and I'm here for it. The transparency, though, I'm not a huge fan of. Depending on the content shown beneath the glass user interface elements, contrast can suffer, making things incredibly hard to read. While the glassy refraction effects look neat, I would've much rather seen a focus on blurred glass, which makes a lack of contrast much less likely to occur. I think we're going to be seeing a lot of screenshots, videos, and thinkpieces about how this much transparency is going to hurt readability. I love it when an operating system gets a design language overhaul, and in this case, Apple is applying it across the board, to all of its operating systems. This may be the perfect moment for me to grit my teeth, hold my nose, and get my hands on a Mac just so I can write about Liquid Glass once it lands.
Quite often, I wonder how much nostalgia plays a part in our perception of past events. Luckily, with software, you can "go back" and retest it, and so there's no need for any illusions and misconceptions. To wit, I decided to reinstall and try Windows 7 again (as a virtual machine, but still), to see whether my impressions of the dross we call "modern" software today are justified. Igor Ljubuncic The conclusion is that, yes, you can still get quite far today with Windows 7, and I honestly don't fault anyone for longing for those days. Windows 7 sits dead smack in the middle between the dreadfulness of Windows XP and pre-patches Vista on one extreme, and the ad-infested, "AI"-slop that are Windows 10 and 11. Its Aero look also happens to be experiencing somewhat of a revival, with both Apple and Google borrowing heavily from it for their latest software releases. Transparent blurred glass is making a comeback, but I doubt the current crop of designers at Apple and Google will be able to top just how nice Aero Glass looked in Windows 7. Still, I don't think you should be using an out-of-support version of Windows for anything more than retrocomputing and as a curiosity, for obvious reasons we're all aware of. With the end of support for Windows 10 - still used by two-thirds of Windows users - approaching quickly, a lot of people are going to have to make the same choice that fans of Windows 7 made years ago: keep using what I like, risks and all, or move on to what I don't like, but is at least maintained and supported? That is, assuming you can even make that choice in the first place, since with the current economic uncertainty, many most definitely cannot. Maybe the Windows world will dodge a bullet, and the circumstances force Microsoft to extend support for Windows 10, like they did with Office applications. Let's see if they blink, again.
NetBSD is an OS that I installed only a couple of times over the years, so I'm not very familiar with its installer, sysinst. This fact was actually what led to this article (or the whole series rather): Talking to a NetBSD developer at EuroBSDcon 2023, I mentioned my impression that NetBSD was harder to install than it needed to be. He was interested in my perspective as a relative newcomer, and so I promised to take a closer look and write about it. While it certainly took me long enough, I finally get to do this. So let's take a look at NetBSD's installer, shall we? The version explored here is NetBSD 10.1 on amd64. Eerie Linux An excellent deep, deep dive into the NetBSD installer. The two earlier installments cover FreeBSD's and OpenBSD's installers.
We've cleared another month by the skin of our teeth, so it's time for another month of progress in Redox, the Rust-based operating system. They've got a big one for us this month, as Redox can now run X11 applications in its Orbital display server, working in much the same way as XWayland. This X11 support includes DRI, but it doesn't yet fully support graphics acceleration. Related to the X11 effort is the brand new port of GTK3 and the arrival of Mesa3D EGL. Moving on, there's the usual massive list of bugfixes and low-level changes, such as the introduction of the /var directory and subdirectories for compliance with the FHS, a fix to make the live image work when there's no other working storage driver, and a ton more. Of course, there's the usual list of relibc fixes, as well as a ton of updated and improved ports.
Starting 20 June 2025, new rules and regulations in the European Union covering, among other things, smartphones and tablets, will have some far-reaching consequences for device makers - consequences that, coincidentally, will work out pretty great for consumers within the European Union. A new set of "ecodesign requirements" will come into force on 20 June. Especially the requirements around repairability and the long-term availability of operating system updates will affect us consumers quite positively. While Android OEMs have improved their update policies somewhat, they're still lagging behind Apple considerably, especially if you opt for lower-end devices or devices from smaller manufacturers. These new requirements will make getting Android updates a consumer right, not an optional service if the OEM happens to feel like it. Which they usually don't. I'm sure countless OEMs will try to weasel their way through supposed cracks and gaps in the exact wording of the rules, but the EU has shown it doesn't take too kindly to corporations, big and small, trying to comply maliciously.
London-based Builder.ai, once valued at $1.5 billion and backed by Microsoft and Qatar's sovereign wealth fund, has filed for bankruptcy after reports that its "AI-powered" app development platform was actually operated by Indian engineers, said to be around 700 of them, pretending to be artificial intelligence. The startup, which raised over $445 million from investors including Microsoft and the Qatar Investment Authority, promised to make software development "as easy as ordering pizza" through its AI assistant "Natasha". However, as per the reports, the company's technology was largely smoke and mirrors: human developers in India manually wrote code based on customer requests while the company marketed their work as AI-generated output. The Times of India I hope those 700 engineers manage to get something out of this, but I doubt it. I wouldn't be surprised if they were unaware they were part of the "AI" scam.
As part of Microsoft's ongoing commitment to compliance with the Digital Markets Act, we are making the following changes to Windows 10, Windows 11, and Microsoft apps in the European Economic Area (EEA). We'll update this post as these changes are shipped, first in Windows Insider builds and then in retail builds. Windows Insider Program Team It's time for more changes to make Windows suck just a little bit less, but only for those of us who live in the European Economic Area (the EU plus Iceland, Liechtenstein, and Norway), courtesy of basic consumer protection laws like the Digital Markets Act. Windows users in other parts of the world will not get these changes, so if you don't live in the EU/EEA, feel free to look away to remain blissfully ignorant. In the EU/EEA, Edge will no longer bug you to be set as the default browser, unless you actually open Edge. In addition, other Microsoft applications won't bug you to install Edge if you've removed it from your system. Setting a browser as default will now also register more filetypes. Whereas in other parts of the world setting, say, Firefox as your default browser in Windows will only register it as the default for http, https, .htm, and .html, in the EU/EEA it will also register the following additional defaults: ftp, read, .mht, .mhtml, .shtml, .svg, .xht, .xhtml, and .xml. Users in the EU/EEA can now also remove the Microsoft Store, without affecting updates or the ability for developers to use the Microsoft Store Web Installer for their applications. You can now also have multiple online search providers in Windows Search, and countless Microsoft applications and Windows components will no longer default to opening Edge for web content, opting to use your default browser instead. These are all very welcome improvements for European Windows users. It's almost like consumer protection laws work.
Ice-T is a terminal emulator, allowing Atari computers with extended memory (128KB or more) to connect to remote dialup and Telnet hosts, such as Unix shells and BBSs. A limited version for machines without extended memory is also available. Ice-T 2.8.0 release announcement Version 2.8.0 was released a few days ago, the first new release in almost twelve years. It comes with a ton of improvements, such as VT-102 support, limited ANSI coloured text support, macros, and a lot more.
Fvwm3, the venerable, solid, configurable, no-nonsense window manager for X, has been updated: fvwm3 1.1.3 has been released. While the version number indicates that this is a minor release, there's one reason why 1.1.3 is actually a much bigger deal than the version number suggests: it switches the build system from autotools to meson. Fvwm is very old, and has been using autotools since 1996 (before then it was using handcrafted makefiles), but with the release of autotools 2.70, which came eight years after the previous release, the amount of changes in autotools proved to be a major headache for fvwm. Since the amount of work would be considerable, the project decided to look at alternatives to autotools, and after considering CMake and meson, the latter was chosen. This was chosen primary because X11 itself is transitioning its projects from autotools to meson. Additionally, there has been good help from the wider community around meson's adoption. In terms of "speed", the parallelised nature of not using make does mean compilation speeds are improved, even on lower-end systems. Thomas Adam To ensure you don't need Python 3 just to build fvwm3, you can use muon starting with muon version 0.13. Muon is written in C, and only requires a C compiler to be built. Fvwm3's transition from autotools to Meson started with version 1.1.1, and with 1.1.3 autotools has been completely deprecated. As for actual changes to fvwm3 itself, this point release is exactly what you'd expect - a few bug fixes, as well as some minor changes to FvwmRearrange.
The first prototype was ready in just six months. By October 1986, the project was announced, and in January 1987, the first NEWS workstation, the NWS 800 series, officially launched. It ran 4.2BSD UNIX and featured a Motorola 68020 CPU. Its performance rivaled that of traditional super minicomputers, but with a dramatically lower price point ranging from ¥950,000 to ¥2.75 million (approximately $6,555 to $18,975 USD in 1987). Competing UNIX workstations typically cost closer to ¥10 million (around $69,000 USD). NEWS caught on quickly in universities and R&D labs, where cost sensitive researchers needed real performance. The venture team had invested ¥400 million into development (about $2.76 million USD), and remarkably, they recouped those costs within just two months of launch. That same year, Sony introduced a lower cost version called POP NEWS (PWS 1550). With a GUI shell named NEWS Desk, a document sharing format called CDFF (Common Document File Format), and a focus on Japanese language desktop publishing, PopNEWS aimed to make UNIX more accessible to general business users. Targeted at the Desktop Publishing market, it showed Sony's desire to bridge consumer and professional segments in ways no other UNIX vendor was trying at the time. Obsolete Sony's Newsletter I've been fascinated by Sony's NEWS workstations, and especially the NEWS-OS operating system, for a long time now. Real hardware is hard to find and prohibitively expensive, but some of these Sony NEWS workstations can be emulated through MAME. Sadly, as far as I can tell, you can only emulate NEWS-OS up to version 4.x, as I haven't been able to find any information about emulating version 5.x and the final version, 6.x. If anyone knows anything about how to emulate these, if at all possible, please do share with the rest of us. What's interesting about Sony's UNIX workstation efforts from the '80s and '90s is that they played an important role in the early development of the PlayStation. The early development kits for the PlayStation were modified NEWS workstations, with added PlayStation hardware. To further add to the importance of the NEWS line for gaming, Nintendo used them to develop several influential and popular first-party SNES titles, which isn't surprising considering Nintendo and Sony originally worked together on bringing a CD-ROM drive to the SNES, which would later morph into the PlayStation as Nintendo cancelled the agreement at the last second.
What if you want to find out more about the PS/2 Model 280? You head out to Google, type it in as a query, and realise the little "AI" summary that's above the fold is clearly wrong. Then you run the same query again, multiple times, and notice that each time, the "AI" overview gives a different wrong answer, with made-up details it's pulling out of its metaphorical ass. Eventually, after endless tries, Google does stumble upon the right answer: there never was a PS/2 Model 280, and every time the "AI" pretended that there was, it made up the whole thing. Google's "AI" is making up a different type of computer out of thin air every time you ask it about the PS/2 Model 280, including entirely bonkers claims that it had a 286 with memory expandable up to 128MB of RAM (the 286 can't have more than 16MB). Only about 1 in 10 times does the query yield the correct answer that there is no Model 280 at all. An expert will immediately notice discrepancies in the hallucinated answers, and will follow for example the List of IBM PS/2 Models article on Wikipedia. Which will very quickly establish that there is no Model 280. The (non-expert) users who would most benefit from an AI search summary will be the ones most likely misled by it. How much would you value a research assistant who gives you a different answer every time you ask, and although sometimes the answer may be correct, the incorrect answers look, if anything, more "real" than the correct ones? Michal Necasek at the OS/2 Museum This is only about a non-existent model of PS/2, which doesn't matter much in the grand scheme of things. However, what if someone is trying to find information about how to use a dangerous power tool? What if someone asks the Google "AI" about how to perform a certain home improvement procedure involving electricity? What if you try to repair your car following the instructions provided by "AI"? What if your mother follows the instructions listed in the leaflet that came with her new medication, which was "translated" using "AI", and contains dangerous errors? My father is currently undertaking a long diagnostic process to figure out what kind of age-related condition he has, which happens to involve a ton of tests and interviews by specialists. Since my parents are Dutch and moved to Sweden a few years ago, language is an issue, and as such, they rely on interpreters and my Swedish wife's presence to overcome that barrier. A few months ago, though, they received the Swedish readout of an interview with a specialist, and pasted it into Google Translate to translate it to Dutch, since my wife and I were not available to translate it properly. Reading through the translation, it all seemed perfectly fine; exactly the kind of fact-based, point-by-point readout doctors and medical specialists make to be shared with the patient, other involved specialists, and for future reference. However, somewhere halfway through, the translation suddenly said, completely out of nowhere: "The patient was combative and non-cooperative" (translated into English). My parents, who can't read Swedish and couldn't double-check this, were obviously taken aback and very upset, since this weird interjection had absolutely no basis in reality. This readout covered a basic question-and-answer interview about symptoms, and at no point during the conversation with the friendly and kind doctor was there any strife or even a modicum of disagreement. 
Still, being in their 70s and going through a complex and stressful diagnostic process in a foreign healthcare system, it's not surprising my parents got upset. When they shared this with the rest of our family, I immediately thought there must've been some sort of translation error introduced by Google Translate, because not only does the sentence in question not match my parents and the doctor in question at all, it would also be incredibly unprofessional. Even if the sentence were an accurate description of the patient-doctor interaction, it would never be shared with the patient in such a manner. So, trying to calm everyone down by suggesting it was most likely a Google Translate error, I asked my parents to send me the source text so my wife and I could pore over it to discover where Google Translate went wrong, and if, perhaps, there was a spelling error in the source, or maybe some Swedish turn of phrase that could easily be misinterpreted even by a human translator. After poring over the documents for a while, we came to a startling conclusion that was so, so much worse. Google Translate made up the sentence out of thin air. This wasn't Google Translate taking a sentence and mangling it into something that didn't make any sense. This wasn't a spelling error that tripped up the numbskull "AI". This wasn't a case of a weird Swedish expression that requires a human translator to properly interpret and localise into Dutch. None of the usual Google Translate limitations were at play here. It just made up a very confrontational sentence out of thin air, and dumped it in between two other sentences that were properly present in the source text. Now, I can only guess at what happened here, but my guess is that the preceding sentence in the source readout was very similar to a ton of other sentences in medical texts ingested by Google's "AI", and in some of the training material, that sentence was followed by some variation of "patient was combative and non-cooperative". Since "AI" here is really just glorified autocomplete, it did exactly what autocomplete does: it made shit up that wasn't there, thereby almost causing a major disagreement between a licensed medical professional and a patient. Luckily for the medical professional and the patient in question, we caught it in time, and my family had a good laugh about it, but the next person this happens to might not be so lucky. Someone visiting a
While it's still early days and it's not recommended for non-technical audiences, GNOME OS is now ready for developers and early adopters who know how to deal with occasional bugs (and importantly, file those bugs when they occur). Tobias Bernard This is great news, and means GNOME OS is progressing nicely. I'm a proponent of this and KDE's equivalent project, because it allows the people working on GNOME and KDE to really showcase their work in optimal, controlled conditions. While I don't see myself switching to a Flatpak-based, immutable distribution because they tend to not align with what I want out of an operating system, they'll serve as great showcases. There is a risk associated with these projects, though, as I highlighted the last time we talked about them. Once such "official" GNOME and KDE Linux distributions exist, the projects run a real risk of only really caring about how well GNOME and KDE work there, while not caring as much, or even at all, about how well they run everywhere else. I'm not sure how they intend to prevent this from happening, but from here, I can already see the drama erupting. I hope this is something they take into consideration. We'll have to wait and see if my worries are well-founded or not.
Of course you can run Doom on a $10,000+ Apple server running IBM AIX. Of course you can. Well, you can now. Now, let's go ahead and get the grumbling out of the way. No, the ANS is not running Linux or NetBSD. No, this is not a backport of NCommander's AIX Doom, because that runs on AIX 4.3. The Apple Network Server could run no version of AIX later than 4.1.5 and there are substantial technical differences. (As it happens, the very fact it won't run on an ANS was what prompted me to embark on this port in the first place.) And no, this is not merely an exercise in flogging a geriatric compiler into building Doom Generic, though we'll necessarily do that as part of the conversion. There's no AIX sound driver for ANS audio, so this port is mute, but at the end we'll have a Doom executable that runs well on the ANS console under CDE and has no other system prerequisites. We'll even test it on one of IBM's PowerPC AIX laptops as well. Because we should. Cameron Kaiser Excellent reading, as always, from Cameron Kaiser.
A short while ago, we talked about the hellish hiring process at a Silicon Valley startup, and today we've got another one. Apparently, it's an open secret that the hiring process at Canonical is a complete dumpster fire. I left Google in April 2024, and have thus been casually looking for a new job during 2024. A good friend of mine is currently working at Canonical, and he told me that it's quite a nice company with a great working environment. Unfortunately, the internet is full of people who had a poor experience: Glassdoor shows that only 15% had a positive interview experience, famous internet denizens like sara rambled on the topic, reddit, hackernews, indeed and blind all say it's terrible, ... but the idea of being decently paid to do security work on a popular Linux distribution was really appealing to me. Julien Voisin What follows is Byzantine and ridiculous, and all ultimately unnecessary, since it turns out Mark Shuttleworth interviews applicants at the end of this horrid process and yays or nays people on vibes alone. You have to read it to believe it. One interesting note that I do appreciate is that Voisin used his rights under the GDPR to force Canonical to hand over the feedback about his application, since the GDPR considers it personal information. Delicious.
At the Linux Application Summit (LAS) in April, Sebastian Wick said that, by many metrics, Flatpak is doing great. The Flatpak application-packaging format is popular with upstream developers, and with many users. More and more applications are being published in the Flathub application store, and the format is even being adopted by Linux distributions like Fedora. However, he worried that work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance. Joe Brockmeier at LWN After reading this article and the long list of problems the Flatpak project is facing, I can't really agree that Flatpak is "doing great". Apparently, Flatpak is in maintenance mode, while major problems remain untouched, because nobody is working on the big-ticket items anymore. This seems like a big problem for a project that's still facing a myriad of major issues. For instance, Flatpak still uses PulseAudio instead of PipeWire, which means that if a Flatpak application needs permission to play audio, it also automatically gets permission to use the microphone. NVIDIA drivers also pose a big problem, network namespacing in Flatpak is "kind of ugly", you can't specify backwards-compatible permissions, and there are tons more problems. There are a lot of ideas and proposed solutions, but nobody to implement them, leaving Flatpak stagnant. Now that Flatpak has been adopted by quite a few popular desktop Linux distributions, it doesn't seem particularly great that it's having such issues finding enough manpower to keep improving it. There's a clear push, especially among developers of end-user focused applications, for everyone to use Flatpak, but is that push really a wise idea if the project has stagnated? Go into any thread where people discuss the use of Flatpaks, and there are bound to be people experiencing problems, inevitably followed by suggested fixes that use third-party tools to break the already rather porous sandbox. Flatpak feels like a project that's far from done or feature-complete, causing normal, everyday users to experience countless problems and issues. Reading straight from the horse's mouth that the project has stagnated and isn't being actively developed anymore is incredibly worrying.
And the "copilot" branding. A real copilot? That's a peer. That's a certified operator who can fly the bird if you pass out from bad taco bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he's shouting over your shoulder, "Let me code that bit real quick, I saw it in a Slashdot comment!" At that point, you're not working with a copilot. You're playing Russian roulette with a loaded dependency graph. You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit. Jj at Blogmobly The world has no clue yet that we're about to enter a period of incredible decline in software quality. "AI" is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.
Opera Mini was first released in 2005 as a web browser for mobile phones, with the ability to load full websites by sending most of the work to an external server. It was a massive hit, but it started to fade out of relevance once smartphones entered mainstream use. Opera Mini still exists today as a web browser for iPhone and Android - it's now just a tweaked version of the regular Opera mobile browser, and you shouldn't use Opera browsers. However, the original Java ME-based version is still functional, and you can even use it on modern computers. Corbin Davenport I remember using Opera Mini back in the day on my PocketPC and Palm devices. It wasn't my main browser on those devices, but if some site I really needed was acting up, Opera Mini could be a lifesaver. As we all remember, though, the mobile web before the arrival of the iPhone was a trashfire. Interestingly enough, we've circled back to the mobile web being a trashfire, but at least we can block ads now to make it bearable. Since Opera Mini is just a Java application, the client part of the equation will probably remain executable for a long time, but once Opera decides to close the server side of things, it will stop being useful. Perhaps one day someone will reverse-engineer the protocol and APIs, paving the way for a custom server we can all run as part of the retrocomputing hobby. There's always someone crazy and dedicated enough.
The next Apple operating systems will be identified by year, rather than with a version number, according to people with knowledge of the matter. That means the current iOS 18 will give way to "iOS 26," said the people, who asked not to be identified because the plan is still private. Other updates will be known as iPadOS 26, macOS 26, watchOS 26, tvOS 26 and visionOS 26. Apple is making the change to bring consistency to its branding and move away from an approach that can be confusing to customers and developers. Today's operating systems - including iOS 18, watchOS 12, macOS 15 and visionOS 2 - use different numbers because their initial versions didn't debut at the same time. Mark Gurman at Bloomberg OK.
If you use Unix today, you can enjoy relatively long file names on more or less any filesystem that you care to name. But it wasn't always this way. Research V7 had 14-byte filenames, and the System III/System V lineage continued this restriction until it merged with BSD Unix, which had significantly increased this limit as part of moving to a new filesystem (initially called the 'Fast File System', for good reasons). You might wonder where this unusual number came from, and for that matter, what the file name limit was on very early Unixes (it was 8 bytes, which surprised me; I vaguely assumed that it had been 14 from the start). Chris Siebenmann I love these historical explanations for seemingly arbitrary limitations.
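For those who don't want to click through: the classic explanation, recalled here from memory rather than quoted from the linked post, is that the old on-disk directory entry was a fixed 16 bytes, two of which held the inode number, leaving exactly 14 for the name. Roughly, in C-flavoured C++:

/* Sketch of the V7-era directory entry; the field names follow the traditional
   sys/dir.h, but treat this as illustrative rather than a verbatim copy. */
struct v7_direct {
    unsigned short d_ino;      /* inode number; 0 marks an unused slot      */
    char           d_name[14]; /* file name, not NUL-terminated at 14 chars */
};
static_assert(sizeof(v7_direct) == 16, "directories are arrays of 16-byte records");

Raising the limit meant changing the on-disk directory format itself, which is exactly what BSD's new filesystem ended up doing.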
One of the ways in which Windows (and macOS) trails behind the Linux and BSD world is the complete lack of centralised, standardised application management. Windows users still have to scour the web to download sketchy installers straight from the Windows 95 days, amassing a veritable collection of updaters in the process, which either continuously run in the background, or annoy you with update pop-ups when you launch an application. It's an archaic nightmare users of supposedly modern computers should not have to be dealing with. Microsoft has tried to remedy this, but in true Microsoft fashion, it did so halfheartedly, for instance with the Windows Package Manager, better known as winget. Instead of building an actual package manager, Microsoft basically just created a glorified script that downloads the same installers you download manually, and runs them in unattended mode in the background - it's a download manager masquerading as a proper application management framework. To complicate matters, winget is only available as a command-line tool, meaning 99% of Windows users won't be using it. There's no graphical frontend in Windows, and it's not integrated into Windows Update, so even if you strictly use winget to install your applications - which will be hard, as there are only about 1400 applications that use it - you still don't have a centralised place to upgrade your entire operating system and all of its applications. It's a mess, and Microsoft intends to address it. Again. This time, they're finally doing what should have been the goal from the start: allowing applications to be updated through Windows Update. Built on the Windows Update stack, the orchestration platform aims to provide developers and product teams building apps and management tools with an API for onboarding their update(s) that supports the needs of their installers. The orchestrator will coordinate across all onboarded products that are updated on Windows 11, in addition to Windows Update, to provide IT admins and users with a consistent management plane and experience, respectively. Angie Chen on the Windows IT Pro Blog Sounds good, but hold on a minute - "orchestration platform"? So this isn't the existing winget, but integrated into Windows Update, where it should've been all along? No, what we're looking at here is Microsoft's competitor to Microsoft's winget inside Microsoft's Windows Update, oh and there's also the Windows Store. In other words, once this rolls out, it'll be yet another way to manage applications, existing inside Windows Update, and alongside winget (and the Windows Store). The way it works is surprisingly similar to winget: application developers can register an update executable with the orchestrator, and the orchestrator will periodically run this update executable to check for updates. In other words, this looks a hell of a lot like a mere download manager for existing updaters. What it's definitely not, however, is winget - so if you're a Windows application developer, you now not only have to register your application to work with winget, but also register it with this new orchestrator to work with Windows Update. This thing is so incredibly Microsoft.
It's been 9 years since we disrupted Genode's API. Back then, we changed the execution model of components, consistently applied the dependency-injection pattern to shun global side effects, and largely removed C-isms like format strings and pointers. These changes ultimately paved the ground for sophisticated systems like Sculpt OS. Since then, we identified several potential areas for further safety improvements, unlocked by the evolution of the C++ core language and inspired by the popularization of sum types for error propagation by the Rust community. With the current release, we uplift the framework API to foster a programming style that leaves no possible error condition unconsidered, reaching for a new level of rock-solidness of the framework. The section "The Great API hardening" explains how we achieved that. The revisited framework API comes in tandem with a new tool chain based on GCC 14 and binutils 2.44. Genode OS Framework 25.05 release notes This new release also brings a lot of progress on the integration of the TCP/IP stacks ported from Linux and lwIP, improvements to the Intel and VESA drivers, better power management of their Intel GPU multiplexer, and more. They've also added support for touchscreen gestures, file modification times now support milliseconds, and support for the seL4 kernel has been improved. Many of these changes will find their way into the next Sculpt OS release, or, in some cases, were already added.
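To make the "no error condition left unconsidered" idea from the release notes a bit more concrete, here's a minimal, generic C++ sketch of the sum-type style. To be clear, this is not Genode's actual API; the names Attempt, with_result, and Open_error are invented for illustration. The point is that the caller cannot touch the result without also saying what should happen in the error case:

// Illustrative only: a Result-like sum type that forces both cases to be handled.
#include <cstdio>
#include <variant>

template <typename T, typename E>
struct Attempt {
    std::variant<T, E> value;

    template <typename Ok, typename Err>
    auto with_result(Ok&& ok, Err&& err) const
    {
        if (auto const *v = std::get_if<T>(&value))
            return ok(*v);
        return err(std::get<E>(value));
    }
};

enum class Open_error { NOT_FOUND, DENIED };

static Attempt<int, Open_error> open_session(bool succeed)
{
    if (succeed) return { 42 };          // success: a session id
    return { Open_error::NOT_FOUND };    // failure: a typed error
}

int main()
{
    open_session(false).with_result(
        [](int id)       { std::printf("session %d\n", id); },
        [](Open_error e) { std::printf("open failed (%d)\n", static_cast<int>(e)); });
}

Compared to plain error codes, forgetting the unhappy path stops being an option: the call simply doesn't compile without both handlers.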
An incredibly primitive operating system, with just two instructions: compile (1) and execute (0). It is heavily inspired by Frank Sergeant 3-Instruction Forth and is a strip down exercise following up SectorForth, SectorLisp, SectorC (the C compiler used here) and milliForth. Here is the full OS code in 46 bytes of 8086 assembly opcodes. 10biForthOS sourcehut page Yes, the entire operating system easily fits right here, inside an OSNews quote block: 50b8 8e00 31d8 e8ff 0017 003c 0575 00ea 5000 3c00 7401 eb02 e8ee 0005 0588 eb47 b8e6 0200 d231 14cd e480 7580 c3f4 10biForthOS sourcehut page How do you actually use this operating system? Once the operating system is loaded at boot, it listens on the serial port for instructions. You can then send the instruction 1 followed by a byte of an assembly opcode, which will be compiled into a fixed location in memory. The instruction 0 will then execute the program. There's also a version with keyboard support, as well as a much bigger version compiled for x86-64. Something like this inevitably raises the question of what an operating system really is, and whether this extremely limited and minimalist thing can be considered one. I'm not going to go deep into this existential discussion, mostly because I land firmly on the side that this is indeed just as much an operating system as, say, Windows or MorphOS. This bit of code, when booted, allows you to operate the system. It's an operating system.
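Just for fun, here's what driving that compile-and-execute protocol could look like from the host side, as a hedged C++ sketch. The device path, the baud rate, and the assumption that the two instructions go over the wire as raw bytes 1 and 0 (rather than ASCII characters) are all guesses on my part, so check the sourcehut page before pointing this at real hardware:

// Sketch only: feed 10biForthOS one opcode over a serial line, then execute it.
// Assumptions: /dev/ttyUSB0, 9600 baud, instructions sent as raw bytes 1 and 0.
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              // raw 8N1: no echo, no line editing
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char compile[] = { 1, 0xF4 };  // "compile" plus one opcode byte (0xF4 = x86 HLT, purely as an example)
    unsigned char execute[] = { 0 };        // "execute" whatever was compiled
    write(fd, compile, sizeof compile);
    write(fd, execute, sizeof execute);

    close(fd);
    return 0;
}

The opcode byte is just a stand-in; anything the tiny compiler accepts would do, and a real session would presumably stream a whole series of compile instructions before the final execute.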
Microsoft's Recall feature, which takes screenshots of the contents of your screen every few seconds, saves them, and then runs text and image recognition to extract information from them, has had a rocky start. Even now that it's out there and Microsoft deems it ready for everyone to use, it has huge security and privacy gaps, and one of them is that applications that contain sensitive information, such as the Windows Signal application, cannot 'opt out' of having their contents scraped. Signal was rather unhappy with this massive privacy risk, and decided to do something about it. It's called screen security, and is Windows-only because it's specifically designed to counter Windows Recall. If you attempt to take a screenshot of Signal Desktop when screen security is enabled, nothing will appear. This limitation can be frustrating, but it might look familiar to you if you've ever had the audacity to try and take a screenshot of a movie or TV show on Windows. According to Microsoft's official developer documentation, setting the correct Digital Rights Management (DRM) flag on the application window will ensure that content "won't show up in Recall or any other screenshot application." So that's exactly what Signal Desktop is now doing on Windows 11 by default. Joshua Lund on the Signal blog Microsoft cares more about enforcing the rights of massive corporations than it does about respecting the privacy of its users. As such, everything is in place in Windows to ensure neither you nor Recall can take screenshots of, I don't know, the Bee Movie, but nothing has been put in place to protect your private and sensitive messages in a service like Signal. This really tells you all you need to know about who Microsoft truly cares about, and it sure as hell isn't you, the user. What Signal is doing is absolutely brilliant. By turning Windows' digital rights management features against Recall to protect the privacy of Signal users, Signal has made it impossible - or at least very hard - for Microsoft to address this. Of course, this also means that taking screenshots of the Signal application on Windows for legitimate purposes is more cumbersome now, but since you can temporarily turn screen security off to take a screenshot, it's not impossible. I almost want other Windows developers to employ this same trick, just to make Recall less valuable, but that's probably not a great idea considering how much it would annoy users just trying to take legitimate screenshots. My uneducated guess is that this is exactly why Microsoft isn't providing developers with the kind of fine-grained controls to let Recall know what it can and cannot take screenshots of: Microsoft must know Recall is a feature for shareholders, not for users, and that users will ask developers to opt out of any Recall snooping if such APIs were officially available. Microsoft wants to make it as hard as possible for applications to opt out of being sucked into the privacy black hole that is Recall, but in doing so, might be pushing developers to use DRM to achieve the same goal. Just delicious. (For the curious, there's a small sketch of the Windows call involved after Signal's closing quote below.) Signal also signed off with a scathing indictment of "AI" as a whole. "Take a screenshot every few seconds" legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like "How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?" - but more sophisticated threats are on the horizon.
The integration of AI agents with pervasive permissions, questionable security hygiene, and an insatiable hunger for data has the potential to break the blood-brain barrier between applications and operating systems. This poses a significant threat to Signal, and to every privacy-preserving application in general. Joshua Lund on the Signal blog Heed this warning.
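As a technical footnote to Signal's screen security trick: the Windows facility underneath it is, as far as I can tell, the SetWindowDisplayAffinity call with the WDA_EXCLUDEFROMCAPTURE flag (Signal Desktop being an Electron application, it presumably gets there through Electron's content-protection setting). Here's a minimal C++ sketch, with the caveat that the call only succeeds on windows created by the calling process, so an application applies it to its own top-level window:

// Sketch: mark one of your own windows as excluded from screenshots, recordings,
// and capture-based features such as Recall. WDA_EXCLUDEFROMCAPTURE needs
// Windows 10 2004 or later; WDA_MONITOR is the older, DRM-flavoured variant.
#include <windows.h>
#include <cstdio>

bool hide_from_capture(HWND hwnd)
{
    // hwnd must belong to the calling process; the call fails otherwise.
    if (!SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)) {
        std::printf("SetWindowDisplayAffinity failed: %lu\n", GetLastError());
        return false;
    }
    return true;
}

Passing WDA_NONE turns the protection back off again, which is presumably all that Signal's temporary "disable screen security to take a screenshot" toggle amounts to under the hood.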
Windows NT 4 doesn't virtualise well. This guide shows how to do it with Proxmox with a minimal amount of pain. Chris Jones Nothing to add, other than I love the linked website's design.
plwm is a highly customizable X11 dynamic tiling window manager written in Prolog. Main goals of the project are: high code & documentation quality; powerful yet easy customization; covering most common needs of tiling WM users; and to stay small, easy to use and hack on. plwm GitHub page Tiling window managers are a dime a dozen, but the ones using a unique or uncommon programming language do tend to stand out.
Highlights of Linux 6.15 include Rust support for hrtimer and ARMv7, a new setcpuid= boot parameter for x86 CPUs, support for sched_ext to count and report internal events, x86 Intel and AMD PMU enhancements, nested virtualization support for VGICv3 on ARM, and support for emulating FEAT_PMUv3 on Apple Silicon. Marius Nestor at 9To5Linux On top of these highlights, there's also a ton of other changes, from the usual additions of new drivers, to better support for RISC-V, and so much more.
A Linux kernel driver that turns a rotary phone dial into an evdev input device. Stefan Wiehler The year of Linux on the desktop is finally here. Thanks to Oleksandr Natalenko for pointing this gem out.
The Amiga, a once-dominant force in the personal computer world, continues to hold a special place in the hearts of many. But with limited next-gen hardware available and dwindling AmigaOS4 support, the future of this beloved platform seemed uncertain. That is, until four Dutch passionate individuals, Dave, Harald, Paul, and Marco, decided to take matters into their own hands. Driven by a shared love for the Amiga and a desire to see it thrive, they embarked on an ambitious project: to create a new, low-cost next-gen Amiga mainboard. Mirari's Our Story page Experience has taught me to be... careful with news of new hardware from the Amiga world, but for once I have strong reasons to believe this one is actually the real deal. The development story - from the initial KiCad renders to the first five, fully functional prototype boards - seems to be on track, software support for Amiga OS is in development, Linux is working great already, and since today, MorphOS also boots on the board. It's called the Mirari, and it's very Dutch. So, what are we looking at here? The Mirari is a micro-ATX board, sporting either a PowerPC T10x2 processor (2-4 e5500 cores) up to 1.5GHz or a PowerPC T2081 processor (4 dual-threaded e6500 cores with AltiVec 2.0) up to 1.8GHz, both designed by NXP in the Netherlands. It supports DDR3 memory, PCIe 2.0 (3.0 for the 4x slot when using the T2081), SATA and NVMe, the usual array of USB 2.0 and 3.2 ports, audio jacks, Ethernet, and so on. No, this is not a massive powerhouse that can take on the latest x86 or ARM machines, but it's more than enough to power Amiga OS 4 or MorphOS, and it aims to be actually affordable. Being at the prototype stage means they're not for sale quite yet, but the fact that they have a 100% yield so far and are comfortable enough to send one of the prototypes to a MorphOS developer, who then got MorphOS booting rather quickly, is a good sign. I also like the focus on affordability, which is often a problem in the Amiga world. I hope they make it to production, because I want one real bad.