Within the last release cycle we worked on adding and extending support for the i.MX8MP SoC, as also found in one of the SoM options for the MNT Pocket Reform, and are happy to showcase a first preview version of Sculpt running on this handy computing device. Josef Sontgen If you have a Pocket Reform - I reviewed its bigger sibling earlier this year - you can now run Genode on it. Not everything is working flawlessly yet - most notably audio and NVMe need work - but networking is operational, so you can actually browse the web. I'm not sure how much overlap there is between Genode users and Pocket Reform owners, but at least both groups now know it's an option.
Today is "Black Friday", which is the day when a lot of retailers, both online and offline, pretend to have massive discounts on things they either raised the prices for a few weeks ago, or for useless garbage they bought in bulk that'll end up in a landfill within a year. Technology media happily partakes in this event, going full mask-off, posting an endless stream of "stories" promoting these discounts. They're writing ads for fake discounts, often for products from the very companies they're supposed to report on, and dress them up as normal articles. It's sad and revealing, highlighting just how much of the technology media landscape is owned by giant media conglomerates. OSNews does not partake. We're independent, answer to nobody, and are mostly funded directly by you, our readers. If you want to keep it this way, and keep OSNews free from the tripe you see on every other technology site around this time, consider supporting us through Patreon, making a one-time donation through Ko-Fi, or buying some merch. That's it. That's our extra special discount bonanza extravaganza Black Friday super coverage.
The Cinnamon Desktop, the GTK desktop environment developed by the Linux Mint project, has just released version 6.4. The focus of this release is on nips and tucks in the default theme, dialogs, menus, and other user interface elements. They seem to have taken a few pages out of GNOME's book, especially when it comes to dialogs and the OSD, which honestly makes sense considering Cinnamon is also GTK and most Cinnamon users will be running a ton of GNOME/Libadwaita applications. There's also a new night light feature to reduce eyestrain, vastly improved options for power profiles and management, and more. Cinnamon 6.4 will be part of Linux Mint's next major release, coming in late December, but is most likely already making its way to various other distributions' repositories.
Recently, I've been moving away from macOS to Linux, and have settled on using KDE Plasma as my desktop environment. For the most part I've been comfortable with the change, but it's always the small things that get me. For example, the Mail app built into macOS provides an "Unsubscribe" button for emails. Apparently this is also supported in some webmail clients, but I'm not interested in accessing my email that way. Unfortunately, I haven't found an X11 or Wayland email client that supports this sort of functionality, so I decided to implement it myself. And anyway, I'm trying out Kontact for my mail at the moment, which supports plugins. So why not use this as an opportunity to build one? datagirl.xyz Writing a Kmail plugin like this feels a bit like an arcane art, because the process is not documented as well as it could be, and I suspect that, other than KDE developers themselves, very few people are interested in writing these kinds of plugins. In fact, I can't find a single one listed on the KDE Store, and searching around I can't find anything else either, other than the ones that come with KDE. It seems like this particular plugin interface is designed more to make it easy for KDE developers to extend and alter Kmail than it is for third parties to do so - and that's fine. Still, this means that if some third party does want to write such a plugin, there's some sleuthing and hacking to be done, and that's exactly the process this article details. In the end, we end up with a working unsubscribe plugin, with the code on git so others can learn from it. While this may not interest a large number of people, it's vital to have information like this out on the web for those precious few to find - so excellent work.
A three-year fight to help support game preservation has come to a sad end today. The US copyright office has denied a request for a DMCA exemption that would allow libraries to remotely share digital access to preserved video games. Dustin Bailey at GamesRadar This was always going to end in favour of the massive gaming industry with effectively bottomless bank accounts and more lawyers than god. The gist is that Section 1201 of the DMCA prevents libraries from circumventing the copy protection to make games available remotely. Much like books, libraries would loan out these games not just for research purposes, but also for entertainment purposes, and that's where the issue lies, according to the Copyright Office, which wrote there would be "a significant risk that preserved video games would be used for recreational purposes". The games industry doesn't care about old titles nobody wants to buy anymore and no consumer is interested in. There's a long tail of games that have no monetary value whatsoever, and there's a relatively small number of very popular older games that the industry wants to keep repackaging and reselling forever - I mean, we can't have a new Nintendo console without the opportunity to buy Mario Bros. for the 67th time. That'd be ludicrous. In order to protect the continued free profits from those few popular retro titles, the endless list of other games only a few nerds are interested in is sacrificed.
There have been some past rumblings on the internet about a capacitor being installed backwards in Apple's Macintosh LC III. The LC III was a "pizza box" Mac model produced from early 1993 to early 1994, mainly targeted at the education market. It also manifested as various consumer Performa models: the 450, 460, 466, and 467. Clearly, Apple never initiated a huge recall of the LC III, so I think there is some skepticism in the community about this whole issue. Let's look at the situation in more detail and understand the circuit. Did Apple actually make a mistake? Doug Brown Even I had heard of these claims, and I'm not particularly interested in Apple retrocomputing, other than whatever comes by on Adrian Black or whatever. As such, it surprises me that there hasn't been any definitive answer to this question - with the amount of interest in classic Macs you'd think this would simply be a settled issue and everyone would know about it. This vintage of Macs pretty much requires recaps by now, so I assumed if Apple indeed soldered on a capacitor backwards, it'd just be something listed in the various recapping guides. It took some very minor digging with the multimeter, but yes, one of the capacitors on this family of boards is soldered on the wrong way, with the positive terminal where the negative terminal should be. It seems the error does not lie with whoever soldered the capacitors on the boards - or whoever set up the machine that did so - because the silkscreen is labeled incorrectly, too. The reason it doesn't seem to be a noticeable problem during the expected lifespan of the computer is because the capacitor was rated at 16V, but was only taking in -5V. So, if you plan on recapping one of these classic Macs - you might as well fix the error.
The moment a lot of us have been fearing may soon be upon us. Among the various remedies proposed by the United States Department of Justice to address Google's monopoly abuse, there's also banning Google from spending money to become the default search engine on other devices, platforms, or applications. "We strongly urge the Court to consider remedies that improve search competition without harming independent browsers and browser engines," a Mozilla spokesperson tells PCMag. Mozilla points to a key but less eye-catching proposal from the DOJ to regulate Google's search business, which a judge ruled as a monopoly in August. In their recommendations, federal prosecutors urged the court to ban Google from offering "something of value" to third-party companies to make Google the default search engine over their software or devices. Michael Kan at PC Mag Obviously Mozilla is urging the courts to reconsider this remedy, because it would instantly cut more than 80% of Mozilla's revenue. As I've been saying for years now, the reason Firefox seems to be getting worse is because Mozilla is desperately trying to find other sources of revenue, and they seem to think advertising is their best bet - even going so far as working together with Facebook. Imagine how much more invasive and user-hostile these attempts are going to get if Mozilla suddenly loses 80% of its revenue? For so, so many years now I've been warning everyone about just how fragile the future of Firefox was, and every one of my worries and predictions has become reality. If Mozilla now loses 80% of its funding, which platform Firefox officially supports do you think will feel the sting of inevitable budget cuts, scope reductions, and even more layoffs first?
The future of Firefox, especially on Linux, is hanging by a thread, and with everyone lulled into a false sense of complacency by Chrome and its many shady skins, nobody in the Linux community seems to have done anything to prepare for this near inevitability. With no proper, fully-featured replacements in the works, Linux distributions, especially ones with strict open source requirements, will most likely be forced to ship with de-Googled Chromium variants by default once Firefox becomes incompatible with such requirements. And no matter how much you take Google out of Chromium, it's still effectively a Google product, leaving most Linux users entirely at the whim of big tech for the most important application they have. We're about to enter a very, very messy time for browsing on Linux.
There are so many ecological, environmental, and climate problems and disasters taking place all over the world that it's sometimes hard to see the burning forests through the charred tree stumps. As at best middle-income individuals living in this corporate line-must-go-up hellscape, there's only so much we can do to turn the rising tides of fascism and leave at least a semblance of a livable world for our children and grandchildren. Of course, the most elementary thing we can do is not vote for science-denying death cults who believe everything is some non-existent entity's grand plan, but other than that, what's really our impact if we drive a little less or use paper straws, when some wealthy robber baron flying his private jet to Florida to kiss the gaudy gold ring to signal his obedience does more damage to our world in one flight than we do in a year of driving to our underpaid, expendable job? Income, financial, health, and other circumstances allowing, all we can do are the little things to make ourselves feel better, usually in areas in which we are knowledgeable. In technology, it might seem like there's not a whole lot we can do, but actually there are quite a few steps we can take. One of the biggest things you, as an individual knowledgeable about and interested in tech, can do to give the elite and ruling class the finger is to move away from big tech, their products, and their services - no more Apple, Amazon, Microsoft, or Google. This is often a long, tedious, and difficult process, as most of us will discover that we rely on a lot more big tech products than we initially thought. It's like an onion that looks shiny and tasty on the outside, but is rotting from the inside - the more layers you peel away, the dirtier and nastier it gets. Also you start crying. I've been in the process of eradicating as much of big tech out of my life for a long time now.
For four or five years now, all my desktop and laptop PCs have run Linux, from my dual-Xeon workstation to my high-end gaming PC (ignore that spare parts PC that runs Windows just for League of Legends. That stupid game is my guilty pleasure and I will not give it up), from my XPS 13 laptop to my little Home Assistant thin client. I've never ordered a single thing from Amazon and have no Prime subscription or whatever it is, so that one was a freebie. Apple I banished from my life long ago, so that's another freebie. Sadly, that other device most of us carry with us remained solidly in the big tech camp, as I've been using an Android phone for a long time, filled to the brim with Google products, applications, and services. There really isn't a viable alternative to the Android and iOS duopoly. Or is there? Well, in a roundabout way, there is an alternative to iOS and Google's Android. You can't do much to take the Apple out of an iPhone, but there's a lot you can do to take the Google out of an Android phone. Unless or until an independent third platform ever manages to take serious hold - godspeed, our saviour - de-Googled Android, as it's called, is your best bet at having a fully functional, modern smartphone that's as free from big tech as you want it to be, without leaving you with a barely usable, barebones experience. While you can install a de-Googled ROM yourself, as there are countless to choose from, this is not an option for everyone, since not everyone has the skills, time, and/or supported devices to do so.
Murena, Fairphone, and sustainable mining
This is where Murena comes in. Murena is a French company - founded by Gael Duval, of Mandrake Linux fame - that develops /e/OS, a de-Googled Android using microG (which Murena also supports financially), which it makes available for anyone to install on supported devices, while also selling various devices with /e/OS preinstalled.
Murena goes one step further, however, by also offering something called Murena Workspace - a branded Nextcloud offering that works seamlessly with /e/OS. In other words, if you buy an /e/OS smartphone from Murena, you get the complete package of smartphone, mobile operating system, and cloud services that's very similar to buying a regular Android phone or an iPhone. To help me test this complete package of smartphone, de-Googled Android, and cloud services, Murena loaned me a Fairphone 5 with /e/OS preinstalled, and while this article mostly focuses on the /e/OS experience, we should first talk a little bit about the relationship between Murena and Fairphone. Murena and Fairphone are partners, and Murena has been selling /e/OS Fairphones for a while now. Most of us will be familiar with Fairphone - it's a Dutch company focused on designing and selling smartphones and related accessories that are user-repairable and long-lasting, while also trying everything within their power to give full insight into their supply chain. This is important, because every smartphone contains quite a few materials that are unsustainably mined. Many mines are destructive to the environment, have horrible working conditions, or even sink as low as employing children. Even companies priding themselves on being environmentally responsible and sustainable, like Apple, are guilty of partaking in and propping up such mining endeavours. As consumers, there isn't much we can do - the network of supply chains involved in making a smartphone is incredibly complex and opaque, and there's basically nothing normal people can do to really fully know on whose underpaid or even underage shoulders their smartphone is built. This holiday season, Murena and Fairphone are collaborating on exactly this issue of the conditions in mines used to acquire the metals and minerals in our phones.
Instead of offering big discounts (that barely eat into margins and often follow sharp price increases right before the holidays), Murena and Fairphone will donate
Every now and then, news from the club I'm too cool to join, the plan9/9front community, pierces the veil of coolness and enters our normal world. This time, someone accidentally made a package manager for 9front. I've been growing tired of manually handling random software, so I decided to find a simple way to automate the process and ended up making a sort of "package manager" for 9front^1. It's really just a set of shell scripts that act as a frontend for git and keep a simple database of package names and URLs. Running the pkg/init script will ask for a location to store the source files for installed packages (/sys/pkg by default) which will then be created if non-existent. And that's it! No, really. Now you can provide a URL for a git repository to pkg/add. Kelly "bubstance" Glenn As I somehow expected from 9front, it's quite a simple and elegant system. I'm not sure how well it would handle more complex package operations, but I doubt many 9front systems are complex to begin with, so this may just be enough to take some of the tedium out of managing software on 9front, as the author originally intended. One day I will be cool enough to use 9front. I just have to stay cool.
The author of this article, Dr. Casey Lawrence, mentions the opt-out checkbox is hard to find, and they aren't kidding. On Windows, here's the full snaking path you have to take through Word's settings to get to the checkbox: File > Options > Trust Center > Trust Center Settings > Privacy Options > Privacy Settings > Optional Connected Experiences > Uncheck box: "Turn on optional connected experiences". That is absolutely bananas. No normal person is ever going to find this checkbox. Anyway, remember how the "AI" believers kept saying "hey, it's on the internet so scraping your stuff and violating your copyright is totally legal you guys!"? Well, what about when you're using Word, installed on your own PC, to write private documents, containing, say, sensitive health information? Or detailed plans about your company's competitor to Azure or Microsoft Office? Or correspondence with lawyers about an antitrust lawsuit against Microsoft? Or a report on Microsoft's illegal activity you're trying to report as a whistleblower? Is that stuff fair game for the gobbledygook generators too? This "AI" nonsense has to stop. How is any of this even remotely legal?
A month and a bit ago, I wondered if I could cope with a terminal-only computer. The only way to really find out was to give it a go. My goal was to see what it was like to use a terminal-only computer for my personal computing for two weeks, and more if I fancied it. Neil's blog I tried to do this too, once. Once. Doing everything from the terminal just isn't viable for me, mostly because I didn't grow up with it. Our family's first computer ran MS-DOS (with a Windows 3.1 installation we never used), and I'm pretty sure the experience of using MS-DOS as my first CLI ruined me for life. My mental model for computing didn't start forming properly until Windows 95 came out, and as such, computing is inherently graphical for me, and no matter how many amazing CLI and TUI applications are out there - and there are many, many amazing ones - my brain just isn't compatible with it. There are a few tasks I prefer doing with the command line, like updating my computers or editing system files using Nano, but for everything else I'm just faster and more comfortable with a graphical user interface. This comes down to not knowing most commands by heart, and often not even knowing the options and flags for the most basic of commands, meaning even very basic operations that people comfortable using the command line do without even thinking, take me ages. I'm glad any modern Linux distribution - I use Fedora KDE on all my computers - offers both paths for almost anything you could do on your computer, and unless I specifically opt to do so, I literally - literally literally - never have to touch the command line.
I had to dive into our archive all the way back to 2017 to find the last reference to the MaXX Interactive Desktop, and it seems this wasn't exactly my fault - the project has been on hiatus since 2020, and is only now coming back to life, as MaXXdesktop v2.2.0 (nickname Octane) Alpha-1 has been released, alongside a promising and ambitious roadmap for the future of the project. For the uninitiated - MaXX is a Linux reimplementation of the IRIX Interactive Desktop with some modernisations and other niceties to make it work properly on modern Linux (and FreeBSD) machines. MaXX has a unique history in that its creator and lead developer, Eric Masson, managed to secure a special license agreement with SGI way back in 2005, under which he was allowed to recreate, from scratch, the IRIX Interactive Desktop on Linux, including the use of SGI's trademarks and IRIX' unique look and feel. It's important to note that he did not get access to any code - he was only allowed to reverse-engineer and recreate it, and because some of the code falls under this license agreement and some doesn't, MaXX is not entirely open source; parts of it are, but not all of it. Any new code written that doesn't fall under the license agreement is released as open source though, and the goal is to, over time, make everything open source. And as you can tell from this v2.2.0 screenshot, MaXX looks stunning even at 4K. This new alpha version contains the first changes to adopt the freedesktop.org application specifications, a new Expose-like window overview, tweaks to the modernised version of the IRIX look and feel (the classic one is also included as an option), desktop notifications, performance improvements, various modernisations to the window manager, and so, so much more. For the final release of 2.2.0 and later releases, more changes are planned, like brand new configuration and system management panels, a quick search tool, a new file manager, and a ton more. 
MaXX runs on RHEL/Rocky and Ubuntu, and probably more Linux distributions, and FreeBSD, and is entirely free.
This is a Silicon Graphics workstation from 1995. Specifically, it is a 'Teal' Indigo 2 (as opposed to a 'Purple' Indigo 2, which came later). Ordinarily that's rare enough - these things were about $30,000 brand new. A close look at the case badge though, marks this out as a 'Teal' POWER Indigo 2 - where instead of the usual MIPS R4600 or R4400SC CPU modules, we have the rare, unusual, expensive and short-lived MIPS R8000 module. Jonathan Pallant It's rare these days to find an article about exotic hardware that has this many detailed photographs - most people just default to making videos now. Even if the actual contents of the article aren't interesting, this is some real good hardware pornography, and I salute the author for taking the time to both take and publish these photos in a traditional way. That being said, what makes this particular SGI Indigo 2 so special? The R8000 is not a CPU in the traditional sense. It is a processor, but that processor is comprised of many individual chips, some of which you can see and some of which are hidden under the heatsink. The MIPS R8000 was apparently an attempt to wrestle back the Floating-Point crown from rivals. Some accounts report that at 75 MHz, it has around ten times the double-precision floating point throughput of an equivalent Pentium. However, code had to be specially optimised to take best advantage of it and most code wasn't. It lasted on the market for around 18 months, before being replaced by the MIPS R10K in the 'Purple' Indigo 2. Jonathan Pallant And here we see the first little bits of writing on the wall for the future of all the architectures trying to combat the rising tide of x86. SGI's MIPS, Sun's SPARC, HP's PA-RISC, and other processors would stumble along for a few more years after this R8000 module came on the market, but looking back, all of these companies knew which way the wind was blowing, and many of them would sign onto Intel's Itanium effort.
Itanium would fail spectacularly, but the cat was out of the bag, and SGI, Sun, and HP would all be making standard Xeon and Opteron workstations within a few years. Absolutely amazing to see this rare of a machine and module lovingly looked after.
This is the first post in what will hopefully become a series of posts about a virtual machine I'm developing as a hobby project called Bismuth. This post will touch on some of the design fundamentals and goals, with future posts going into more detail on each. But to explain how I got here I first have to tell you about Bismuth, the kernel. Eniko Fox It's not every day that a developer of an awesome video game details a project they're working on that also happens to be excellent material for OSNews. Eniko Fox, one of the developers of the recently released Kitsune Tails, has also been working on an operating system and virtual machine in her spare time, and has recently been detailing the experience in, well, more detail. This one here is the first article in the series, and a few days ago she published the second part about memory safety in the VM. The first article goes into the origins of the project, as well as the design goals for the virtual machine. It started out as an operating systems development side project, but once it was time to develop things like the MMU and virtual memory mapping, Fox started wondering if programs couldn't simply run inside a virtual machine atop the kernel instead. This is how the actual Bismuth virtual machine was conceived. Fox wants the virtual machine to care about memory safety, and that's what the second article goes into. Since the VM is written in C, which is anything but memory-safe, she's opting for implementing a form of sandboxing - which also happens to be the point in the development story where my limited knowledge starts to fail me and things get a little too complicated for me. I can't even internalise how links work in Markdown, after all (square or regular brackets first? Also Markdown sucks as a writing tool but that's a story for another time). For those of you more capable than me - so basically most of you - Fox's series is a great series to follow along as she further develops the Bismuth VM.
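To give a rough feel for what sandboxing guest memory in a C-based VM can mean, here is a minimal sketch of the bounds-checking idea: the VM owns a fixed block of host memory, and every guest load and store is validated against it before any host pointer is touched. To be clear, this is my own illustrative code, not Bismuth's actual design or API - the names (Vm, vm_load8, vm_store8) and the fixed memory size are invented for the example.

```c
#include <stdint.h>
#include <stdbool.h>

#define GUEST_MEM_SIZE 4096  /* arbitrary size for the sketch */

typedef struct {
    uint8_t mem[GUEST_MEM_SIZE];  /* the only memory the guest may touch */
} Vm;

/* Store a byte at a guest address; refuse anything outside the sandbox. */
static bool vm_store8(Vm *vm, uint32_t addr, uint8_t value) {
    if (addr >= GUEST_MEM_SIZE)
        return false;  /* would be a trap/fault in a real VM */
    vm->mem[addr] = value;
    return true;
}

/* Load a byte from a guest address, with the same bounds check. */
static bool vm_load8(const Vm *vm, uint32_t addr, uint8_t *out) {
    if (addr >= GUEST_MEM_SIZE)
        return false;
    *out = vm->mem[addr];
    return true;
}
```

Because guest addresses are plain integers that only become host pointers after validation, a buggy or hostile guest program can at worst corrupt its own 4 KiB arena, never the VM's host process - which is the essence of the sandboxing approach the article describes.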
Valve, entirely going against the popular definition of Vendor, is still actively working on improving and maintaining the kernel for their Steam Deck hardware. Let's see what they're up to in this 6.8 cycle. Samuel Dionne-Riel Just a quick look at what, exactly, Valve does with the Steam Deck Linux kernel - nothing more, nothing less. It's nice to have simple, straightforward posts sometimes.
Ah, the Common Hardware Reference Platform, IBM's and Apple's ill-fated attempt at taking on the PC market with a reference PowerPC platform anybody could build and expand upon while remaining (mostly) compatible with one another. Sadly, like so many other things Apple was trying to do before Steve Jobs returned, it never took off, and even Apple itself never implemented CHRP in any meaningful way. Only a few random IBM and Motorola computers ever fully implemented it, and Apple didn't get any further than basic CHRP support in Mac OS 8, and some PowerPC Macs were based on CHRP, without actually being compatible with it. We're roughly three decades down the line now, and pretty much everyone except weird nerds like us have forgotten CHRP was ever even a thing, but Linux has continued to support CHRP all this time. This support, too, though, is coming to an end, as Michael Ellerman has informed the Linux kernel community that they're thinking of getting rid of it. Only a very small number of machines are supported by CHRP in Linux: the IBM B50, bplan/Genesi's Pegasos/Pegasos2 boards, the Total Impact briQ, and maybe some Motorola machines, and that's it. Ellerman notes that these machines seem to have zero active users, and anyone wanting to bring CHRP support back can always go back in the git history. CHRP is one of the many, many footnotes in computing history, and with so few machines out there that supported it, and so few machines Linux' CHRP support could even be used for, it makes perfect sense to remove this from the kernel, while obviously keeping it in git's history in case anyone wants to work with it on their hardware in the future. Still, it's always fun to see references to such old, obscure hardware and platforms in 2024, even if it's technically sad news.
Windows 10's free, guaranteed security updates stop in October 2025, less than a year from now. Windows 10 users with supported PCs have been offered the Windows 11 upgrade plenty of times before. But now Microsoft is apparently making a fresh push to get users to upgrade, sending them full-screen reminders recommending they buy new computers. Andrew Cunningham at Ars Technica That deadline sure feels like it's breathing down Microsoft's neck. Most Windows users are still using Windows 10, and all of those hundreds of millions (billions?) of computers will become unsupported less than a year from now, which is going to be a major headache for Microsoft once the unaddressed security issues start piling up. CrowdStrike is fresh in Microsoft's mind, and the company made a ton of promises about changing its security culture and implementing new features and best practices to stop it from ever happening again. Those are going to be some very tough promises to keep when the majority of Windows users are no longer getting any support. The obvious solution here is to accept the fact that if people haven't upgraded to Windows 11 by now, they're not going to until forced to do so because their computer breaks or becomes too slow and Windows 11 comes preinstalled on their new computer. No amount of annoying fullscreen ads interrupting people's work or pleasure is going to get people to buy a new PC just for some half-baked "AI" nonsense or whatever - in fact, it might just put even more people off from upgrading in the first place. Microsoft needs to face the music and simply extend the end-of-support deadline for Windows 10. Not doing so is massively irresponsible to a level rarely seen from big tech, and if they refuse to do so I strongly believe authorities should get involved and force the company to extend the deadline. You simply cannot leave this many users with insecure, non-maintained operating systems that they rely on every day to get their work done.
VMS Software, the company migrating OpenVMS to x86 (well, virtualised x86, at least) has announced the release of OpenVMS 9.2-3, which brings with it a number of new features and changes. It won't surprise you to hear that many of the changes are about virtualisation and enterprise networking stuff, like adding passthrough support for fibre channel when running OpenVMS in VMware, a new VGA/keyboard-based guest console, automatic configuration of TCP/IP and OpenSSH during installation, improved performance for virtualised network interfaces on VMware and KVM, and much more. Gaining access to OpenVMS requires requesting a community license, after which OpenVMS will be delivered in the form of a preinstalled virtual disk image, complete with a number of development tools.
I've linked to quite a few posts by OpenBSD developer Solene Rapenne on OSNews, mostly about her work for and knowledge of OpenBSD. However, she recently posted about her decision to leave the OpenBSD team, and it mostly comes down to the fact she hasn't been using OpenBSD for a while now due to a myriad of problems she's encountering. Posts like these are generally not that fun to link to, and I've been debating about this for a few days now, but I think highlighting such problems, especially when detailed by a now-former OpenBSD developer, is an important thing to do. Hardware compatibility is an issue because OpenBSD has no Bluetooth support, its gamepad support is fractured and limited, and most of all, battery life and heat are a major issue, as Solene notes that OpenBSD "draws more power than alternatives, by a good margin". For her devops work, she also needs to run a lot of software in virtual machines, and this seems to be a big problem on OpenBSD, as performance in this area seems limited. Lastly, OpenBSD seems to be having stability issues and crashes a lot for her, and while this in and of itself is a big problem already, it's compounded by the fact that OpenBSD's file system is quite outdated, and most crashes will lead to corrupted or lost files, since the file system doesn't have any features to mitigate this. I went through a similar, but obviously much shorter and far less well-informed experience with OpenBSD myself. It's such a neat, understandable, and well-thought-out operating system, but its limitations are obvious, and they will start to bother you sooner or later if you're trying to use it as a general purpose operating system. While it's entirely understandable because OpenBSD's main goal is not the desktop, it still sucks because everything else about the operating system is so damn nice and welcoming.
Solene found her alternative in Linux and Qubes OS: I moved from OpenBSD to Qubes OS for almost everything (except playing video games) on which I run Fedora virtual machines (approximately 20 VM simultaneously in average). This provides me better security than OpenBSD could provide me as I am able to separate every context into different spaces, this is absolutely hardcore for most users, but I just can't go back to a traditional system after this. Solene Rapenne She lists quite a few Linux features she particularly likes and why, such as cgroups, systemd, modern file systems like Btrfs and ZFS, SELinux, and more. It's quite rare to see someone of her calibre so openly list the shortcomings of the system she clearly otherwise loves and has put a lot of effort into, and move to what is generally regarded with some disdain within the community she came from. It also highlights that the issues with running OpenBSD as a general purpose operating system are not confined to less experienced users such as myself, but extend to extremely experienced and knowledgeable people like actual OpenBSD developers. I'm definitely not advocating for OpenBSD to change course or make a hard pivot to becoming a desktop operating system, but I do think that even within the confines of a server operating system there's room for things like a much-improved, faster file system that provides the modern features server users expect.
One of my favourite devices that never took off in the home is the thin client. Whenever I look at a fully functional Sun Microsystems thin client setup, with Sun Rays, a Solaris server, and the smartcards instantly loading up your desktop the moment you slide one into the Ray's slot, my mind wanders to the future we could've had in our homes - a powerful, expandable, capable server in the basement, running every family member's software, and thin clients all throughout the house into which family members can plug their smartcard to load up their stuff. This is the future they took from us. Well, not entirely. They took this future, made it infinitely worse by replacing that big server in our basement with massive datacentres far away from us in "the cloud", and threw it back in our faces as a shittier inevitability we all have to deal with. The fact this model relies on subscriptions is, of course, entirely coincidental, and not at all the main driving force behind taking our software away from us and hiding it in stronghold datacentres. So anyway, Microsoft is launching a thin client that connects to a Windows VM running in the cloud. They took the perfection Sun gave us, shoved it down their throats, regurgitated it like a cow, and are now presenting it to us as the new shiny. It's called the Windows 365 Link, and it connects to, as the name implies, Windows 365. Here's part of the enterprise marketing speak: Today, as users take advantage of virtualization offerings delivered on an array of devices, they can face complex sign-in processes, peripheral incompatibility, and latency issues. Windows 365 Link helps address these issues, particularly in shared workspace scenarios. It's compact, lightweight, and designed to maximize productivity with its highly responsive performance. It takes seconds to boot and instantly wakes from sleep, allowing users to quickly get started or pick up where they left off on their Cloud PC.
With dual 4K monitor support, four USB ports, an Ethernet port, Wi-Fi 6E, and Bluetooth 5.3, Windows 365 Link offers seamless connectivity with both wired and wireless peripherals. Anthony Smith at the Windows IT Pro Blog This is just a thin client, but worse, since it seemingly can only connect to Microsoft's "cloud", without the ability to connect to a server on-premises, which is a very common use case. In fact, you can't even use another vendor's tooling, so if you want to switch from Windows 365 to some other provider later down the line, you seemingly can't - unless there are some BIOS switches or whatever you can flip. At the very least, Microsoft intends for other vendors to also make Link devices, so perhaps competition will bring the price down to a more manageable level than $349. Unless an enterprise environment is already so deep into the Microsoft ecosystem that it doesn't even rely on things like Citrix or any of the other countless providers of similar services, why would you buy thousands of these for your employees, only to lock your entire company into Windows 365? I'm no IT manager, obviously, so perhaps I'm way off base here, but this thing seems like a hard sell when there are so, so many alternative services, and so many thin client devices to choose from that can use any of those services.
FLTK 1.4.0 has been released. This new version of the Fast Light Toolkit contains some major improvements, such as Wayland support on both Linux and FreeBSD. X11 and Wayland are both supported by default, and applications using FLTK will launch using Wayland if available, and otherwise fall back to starting with X11. This new release also brings HiDPI support on Linux and Windows, and improves said support on macOS. Those are the headline features, but there are more changes here, of course, as well as the usual round of bugfixes. Right after the release of 1.4.0, a quick bugfix release, version 1.4.0-1, was published to address an issue in 1.4.0 - a build error in a single test program on Windows, when using Visual Studio. Not exactly a major bug, but great to see the team fix it so rapidly.
Way back in April of this year, I linked to a question and answer about why some parts of the Windows 98 installer looked older than the other parts. It turns out that in between the MS-DOS (the blue part) and Windows 98 parts of the installation process, the installer boots into a small version of Windows 3.1. Raymond Chen posted an article detailing this process for Windows 95, and why, exactly, Microsoft had to resort to splitting the installer between MS-DOS, Windows 3.1, and Windows 95. The answer is, as always, backwards compatibility. Since Windows 95 could be installed from MS-DOS, Windows 3.1, and Windows 95 (to fix an existing installation), the installer needed to be able to work on all three. The easiest solution would be to write the installer as an MS-DOS program, since that works on all three of these starting points, but that would mean an ugly installer, even though Windows 95 was supposed to be most people's first experience with a graphical user interface. This is why Microsoft ended up with the tiered installation process - to support all possible starting points in the most graphical way possible. Chen also mentions another fun fact that is somewhat related to this: the first version of Excel for Windows was shipped with a version of the Windows 2.1 runtime, so that even people without Windows could still run Excel. Even back then, Microsoft took backwards compatibility seriously, and made sure people who hadn't upgraded from MS-DOS to Windows 2.x yet - meaning, everyone - could still enjoy the spreadsheet lifestyle. I say we pass some EU law forcing Microsoft to bring this back. The next version of Excel should contain whatever is needed to run it on MS-DOS. Make it happen, Brussels.
Speaking of Google, the United States Department of Justice is pushing for Google to sell off Chrome. Top Justice Department antitrust officials have decided to ask a judge to force Alphabet Inc.'s Google to sell off its Chrome browser in what would be a historic crackdown on one of the biggest tech companies in the world. The department will ask the judge, who ruled in August that Google illegally monopolized the search market, to require measures related to artificial intelligence and its Android smartphone operating system, according to people familiar with the plans. Leah Nylen and Josh Sisco Let's take a look at the history and current state of independent browsers, shall we? Netscape is obviously dead, Firefox is hanging on by a thread (which is inconspicuously shaped like a giant sack of money from Google), Opera is dead (its shady Chrome skin doesn't count), Brave is cryptotrash run by a homophobe, and Vivaldi, while an actually good and capable Chrome skin with a ton of fun features, still isn't profitable, so who knows how long they'll last. As an independent company, Chrome wouldn't survive. It seems the DoJ understands this, too, because they're clearly using the words "sell off", which would indicate selling Chrome to someone else instead of just spinning it off into a separate company. But who has both the cash and the interest in buying Chrome, without also being a terrible tech company with terrible business incentives that might make Chrome even more terrible than it already is? Through Chrome, Google has sucked all the air out of whatever was left of the browser market back when they first announced the browser. An independent Chrome won't survive, and Chrome in anyone else's hands might have the potential to be even worse.
A final option out of left field would be turning Chrome and Chromium into a truly independent foundation or something, without a profit motive, focused solely on developing the Chromium engine, but that, too, would be easily abused by financial interests. I think the most likely outcome is one none of us want: absolutely nothing will happen. There's a new administration coming to Washington, and if the recent proposed picks for government positions are anything to go by, America will be incredibly lucky if they get someone smarter than a disemboweled frog on a stick to run the DoJ. More likely than not, Google's lawyers will walk all over whatever's left of the DoJ after 20 January, or Pichai will simply kiss some more gaudy gold rings to make the case go away.
Mishaal Rahman, who has a history of being right about Google and Android-related matters, is reporting that Google is intending to standardise its consumer operating system efforts onto a single platform: Android. To better compete with the iPad as well as manage engineering resources more effectively, Google wants to unify its operating system efforts. Instead of merging Android and Chrome OS into a new operating system like rumors suggested in the past, however, a source told me that Google is instead working on fully migrating Chrome OS over to Android. While we don't know what this means for the Chrome OS or Chromebook brands, we did hear that Google wants future "Chromebooks" to ship with the Android OS in the future. That's why I believe that Google's rumored new Pixel Laptop will run a new version of desktop Android as opposed to the Chrome OS that you're likely familiar with. Mishaal Rahman at Android Authority The fact both Chrome OS and Android exist, and are competing with each other in some segments - most notably tablets - hasn't done either operating system any favours. I doubt many people even know Chrome OS tablets are a thing, and I doubt many people would say Android tablets are an objectively better choice than an iPad. I personally definitely prefer Android on tablets over iOS on tablets, but I fully recognise that for 95% of tablet buyers, the iPad is the better, and often also more affordable, choice. Google has been struggling with Android on tablets for about as long as they've existed, and now it seems that the company is going to focus all of its efforts on just Android, leaving Chrome OS to slowly be consumed and replaced by it.
In June, Google already announced it was going to replace both the kernel and several subsystems in Chrome OS with their Android counterparts, and now they're also building a new version of Chrome for Android with extension support - to match Chrome on Chrome OS - as well as a terminal application for Android that gives access to a local Linux virtual machine, much like what is available on Chrome OS. As mentioned, laptops running Android will also be making an entrance, including a Pixel laptop straight from Google. The next big update for Android 15 contains a ton of new proper windowing features, and there's more coming: improved keyboard and mouse support, as well as external monitors, virtual desktops, and a lot more. As anyone who has ever attempted to run Android on a desktop or laptop knows, there's definitely a ton of work Google needs to do to make Android palatable to consumers on that front. Of course, this being Google, any of these rumours or plans could change at any time without any sense of logic behind it, as managers fulfill their quotas, get promoted, or leave the company.
In recent weeks, law enforcement in the United States discovered, to their dismay, that iPhones were automatically rebooting themselves after a few days of inactivity, thereby denying them access to the contents of these phones. After a lot of speculation online, Jiska Classen dove into this story to find out what was going on, and through reverse-engineering they discovered that this was a new security feature built by Apple as part of iOS 18.1, to further make stolen iPhones useless for both thieves and law enforcement officers. It's a rather clever feature. The Secure Enclave Processor inside the iPhone keeps track of when the phone was last unlocked, and if that period exceeds 72 hours, the SEP will inform a kernel module. This kernel module will then, in turn, tell the phone to gracefully reboot, meaning no data is lost in this process. If the phone for whatever reason does not reboot and remains powered on, the module will assume the phone's been tampered with somehow and kernel-panic. Interestingly, if the reboot takes place properly, an analytics report stating how long the phone was not unlocked will be sent to Apple. The reason this is such a powerful feature is that a locked iPhone is entirely useless to anyone who doesn't have the right code or biometrics to unlock it. Everything on the device is encrypted, and only properly unlocking it will decrypt the phone's contents; in fact, a locked phone can't even join a Wi-Fi network, because the stored passwords are encrypted (and I'm assuming that a locked phone does not provide access to any methods of joining an open network either). When you have a SIM card without a PIN code, the iPhone will connect to the cellular network, but any notifications or calls coming in will effectively be empty, since incoming phone numbers can't be linked to any of the still-encrypted contacts, and while the phone can tell it's received notifications, it can't show you any of their contents.
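The reboot decision described above boils down to a simple timer check with two failure modes. Here's a rough sketch of that logic in Python - the function name, return values, and structure are purely illustrative, not Apple's actual implementation:

```python
from datetime import datetime, timedelta

# 72-hour inactivity window, per the reverse-engineered iOS 18.1 behaviour
INACTIVITY_LIMIT = timedelta(hours=72)

def inactivity_action(last_unlock: datetime, now: datetime,
                      reboot_succeeded: bool = True) -> str:
    """Model the three outcomes: keep running, graceful reboot, or panic."""
    if now - last_unlock < INACTIVITY_LIMIT:
        # Phone was unlocked recently enough: nothing happens
        return "keep running"
    # Deadline passed: the SEP informs the kernel module, which asks for a
    # graceful reboot so no data is lost
    if reboot_succeeded:
        return "graceful reboot"
    # Phone stayed powered on despite the request: assume tampering
    return "kernel panic"

now = datetime(2024, 11, 18)
print(inactivity_action(now - timedelta(hours=10), now))  # keep running
print(inactivity_action(now - timedelta(hours=80), now))  # graceful reboot
```

The key design point is that the graceful-reboot path returns the phone to the fully encrypted "before first unlock" state without losing data, while the panic path is a fail-safe against anything that blocks the reboot.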
A thief who's now holding this phone can't do much with it if it locks itself like this after a few days, and law enforcement won't be able to access the phone either. This is a big deal in places where arrests based purely on skin colour or ethnicity or whatever are common, like in the United States (and in Europe too, just to a far lesser degree), or in places where people have to fear the authorities for other reasons, like in totalitarian dictatorships such as Russia, China, or Iran, where any hint of dissent can land you in harsh prisons. Apple is always at the forefront with features such as these, with Google and Android drunkenly stumbling into the open door a year later with copies that take ages to propagate through the Android user base. I'm legitimately thankful for Apple raising awareness of the need for features such as these - even if they're too cowardly to enable them in places like China - as it's quite clear a lot more people need to start caring about these things, with recent developments and all.
Another excellent guide from friend of the website Stefano Marinelli. A client of mine has several Windows Server VMs, which I had not migrated to FreeBSD/bhyve until a few weeks ago. These VMs were originally installed with the traditional BIOS boot mode, not UEFI, on Proxmox. Fortunately, their virtual disks are on ZFS, which allowed me to test and achieve the final result in just a few steps. This is because Windows VMs (server or otherwise) often installed on KVM (Proxmox, etc.), especially older ones, are non-UEFI, using the traditional BIOS boot mode. bhyve doesn't support this setup, but Windows allows changing the boot mode, and I could perform the migration directly on the target FreeBSD server. Stefano Marinelli I link to guides like these because finding such detailed guides, born out of real-world experience and written by actual humans - instead of bots on content farms - is remarkably hard. There's more than enough similar content out there covering Windows or popular Linux distributions like Red Hat, but the BSDs tend to fall a bit short here. As such, promoting people writing such content is something I'll happily do. Marinelli also happens to host the Matrix server (as part of his BSD Cafe effort) that houses the OSNews Matrix room, accessible by becoming an OSNews Patreon.
Version 6.12 of the Linux kernel has been released. The main feature is the merging of the real-time PREEMPT_RT patches, most likely one of the longest-running merger sagas in Linux's history. This means that Linux now fully supports both soft and hard real-time capabilities natively, which is a major step forward for the platform, especially when looking at embedded development. It's no longer necessary to pull in real-time support from outside the kernel. Linux 6.12 also brings a huge number of improvements for graphics drivers, for both Intel's and AMD's graphics cards. With 6.12, Linux now supports the Intel Xe2 integrated GPU as well as Intel's upcoming discrete "Battlemage" GPUs by default, and it contains more AMD RDNA4 support for those upcoming GPUs. DRM panic messages in 6.12 will show a QR code you can scan for more information, a feature written in Rust, and initial support for the Raspberry Pi 5 finally hit mainline too. Of course, there's a lot more in here, like the usual LoongArch and ARM improvements, new drivers, and so on. If you're a regular Linux user, you'll see 6.12 make it to your distribution within a few weeks or months.
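With PREEMPT_RT in the mainline kernel, an ordinary process can request a real-time scheduling class through the standard POSIX interface, no out-of-tree patches required. A minimal sketch in Python, assuming Linux; actually entering SCHED_FIFO still requires elevated privileges or an RLIMIT_RTPRIO allowance, which is why the sketch treats a refusal as a normal outcome:

```python
import os

def try_fifo(priority: int = 10) -> bool:
    """Try to move this process into the SCHED_FIFO real-time class.
    Returns False instead of raising if we lack the privilege."""
    try:
        # pid 0 means "the calling process"
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

# SCHED_FIFO priorities on Linux run from 1 (lowest) to 99 (highest)
print("FIFO priority range:",
      os.sched_get_priority_min(os.SCHED_FIFO),
      "to", os.sched_get_priority_max(os.SCHED_FIFO))
print("switched to real-time:", try_fifo())
```

Note that the scheduling API itself predates this release; what the PREEMPT_RT merge changes is that the stock kernel can now honour hard real-time latency expectations for such tasks, rather than needing a separately patched kernel.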
A few weeks ago I linked to a story by Misty De Meo, in which they explored what happened to the various eccentric Japanese PC platforms. One of the platforms mentioned was FM Towns, made by Fujitsu, which came with its own graphical operating system from the era of Windows 3.x. I had never heard of this one before, but it looks incredibly interesting, with some unique UI ideas I'd love to explore, if only I could read Japanese. Since learning Japanese is a serious life-long commitment, I can safely say that's not going to happen. It seems I'm not the only one interested in FM Towns, as a new project called Free Towns OS (or Tsugaru OS in Japanese) aims to provide an open source replacement for the original Towns OS. The goal of this project is to write a copyright-free FM Towns OS to run free games and the re-released games, or why not a brand-new game for FM Towns, without concerns of violating copyrights of the files included in the original Towns OS. Let's see how far we can go! But, so far so good. Now Tsugaru OS is capable of running the three probably most popular free games, Panic Ball 2, VSGP, and Sky Duel. All playable without a single file from the original Towns OS. Free Towns OS GitHub page That's a pretty good milestone already. The project aims to eventually also be able to run on real hardware instead of just emulators, but beyond that, it's difficult for me to extract more information from the descriptions, since not every paragraph has been translated to English just yet. Finding English information on FM Towns OS in general is hard, so I'm also not entirely sure just how much the project has already been able to recreate. I definitely hope this effort attracts more interest, hopefully also from outside of Japan, so we can get a translated version people outside of Japan can use.
This option is for users that want to create a Windows 11 on Arm virtual machine on supported hardware using an ISO file or to install Windows 11 on Arm directly without a DVD or USB flash drive. The ISO file can also be used to manually create bootable installation media (USB flash drive) to install Windows 11 on Arm, but it may be necessary to include drivers from the device manufacturer for the installation media to be successfully bootable. This download is a multi-edition ISO which uses your product key to unlock the correct edition. Windows on ARM ISO download Oddly enough, until now Microsoft had never published a Windows 11 on ARM ISO. With this new ISO, ARM users can do a fresh install, and create Windows on ARM virtual machines. Not the biggest news in the world, but it's a little bit surprising it's taken them this long to publish this ISO file.
Valve has been holding on to a special surprise for Half-Life 2 fans to celebrate the game crossing its 20th birthday. Today, the company shipped the 20th Anniversary Update for the iconic Gordon Freeman adventure from 2004, combining the base experience and all episodes into one, bringing developer commentary, Steam Workshop support, and much more. Adding to all that, the game is completely free to claim on Steam right now too. Pulasthi Ariyasinghe at NeoWin Valve even made a nice web page with fun animated characters for it (they're just video loops). Definitely a nice surprise for those of us who've already played the game a million times, and for those of us who haven't yet for some reason and can now claim the game for free. This update also fixes some more bugs, adds a ton of new graphics settings, allows you to choose between different styles for certain visual effects, massively updates aim assist for controller users, and so much more. For a 20-year-old game, such a free update is not something that happens very often, so good on Valve for doing this. I can barely believe it's been 20 years, and that we still have no conclusion or even continuation to the story that so abruptly ended with Episode Two. I honestly doubt we're ever going to see a Half-Life 3 or even an Episode Three, simply because at this point the expectations would be so bonkers high there's no way Valve could meet them. On top of that, why waste time, money, and possibly reputation and goodwill on Half-Life 3, when you can just sit on the couch and watch the Steam gravy train roll into the station? Because that's a hell of a lot of gravy.
Remember Darwin? It's the core of Apple's macOS, and the company has always - sometimes intermittently - released its source code as open source. Not much ever really happens with Darwin, and any attempts at building a usable operating system on top of Darwin have failed. There was OpenDarwin, which at one point could run a GNOME desktop, but in 2006 it shut itself down, stating: Over the past few years, OpenDarwin has become a mere hosting facility for Mac OS X related projects. The original notions of developing the Mac OS X and Darwin sources has not panned out. Availability of sources, interaction with Apple representatives, difficulty building and tracking sources, and a lack of interest from the community have all contributed to this. Administering a system to host other people's projects is not what the remaining OpenDarwin contributors had signed up for and have been doing this thankless task far longer than they expected. It is time for OpenDarwin to go dark. OpenDarwin announcement from 2006 (archived) Any other attempts at making Darwin work as a standalone operating system were further frustrated by the fact that Apple stopped releasing bootable Darwin images, so Darwin never amounted to much more than Apple throwing some code over the fence every now and then for some cheap goodwill among the few people who still believe Apple cares about open source. However, the dream is still alive - the idea that you could use Darwin to build a general purpose operating system, perhaps one with some semblance of compatibility with macOS software, is an attractive one. Enter PureDarwin. This project has been around for a while now, releasing an X11-capable build of Darwin somewhere in 2015, followed long, long after that by a CLI-only build in 2020. A few days ago, the project announced an ambitious change in direction, with a plan and roadmap for turning PureDarwin into a general purpose operating system. 
The PureDarwin project, originally created to bring Apple's open-source Darwin OS to more people, is heading in a fresh new direction with some clear short-term and long-term goals. These new plans are all about breathing new life into PureDarwin. In the short term, we're focused on getting some solid basics in place with graphical interfaces using MATE Desktop and LightDM, so users can get a functional and accessible experience sooner rather than later. Looking further down the line, the long-term goals - shown in some early wireframes - are about creating a fully featured, polished desktop experience that's easy to use and visually appealing. Plus, a new versioning system will make it clear how PureDarwin is progressing independently from Apple's Darwin updates, making it easier for everyone to keep track. This refreshed direction sets PureDarwin up to grow from its roots into a user-centered operating system. PureDarwin announcement These plans and roadmap sound quite well thought-out to me. I especially like that they first focus on getting a solid MATE desktop running before shifting to building a more custom desktop environment, as this makes it much easier - relatively speaking - to get people up and running with Darwin. Once Darwin with MATE is halfway usable, it can serve its job as a development platform for the more custom desktop environment they have planned. It won't surprise you, by the way, that the sketches for the custom desktop environment are very Apple-y. As part of the goals of creating a usable MATE desktop and then a more custom desktop environment, a whole bunch of low-level things need to be handled. All the kexts (drivers) required for Darwin to boot need to be built, and CoreFoundation needs to be updated, a process that was already under way. On top of that, the project wants to focus on getting Wayland to work, make Darwin buildable under BSD/Linux, and develop an installer.
Beyond those goals, the project has an even bigger, tentative ambition: API compatibility with macOS. They make it very clear they're not at all focused on this right now, and consider it more of a pie-in-the-sky goal for the distant future. It's an interesting ambition we've seen tried various times before, and it surely won't be even remotely easy to get it to a level where it could do much more than run some command-line utilities. Darling, a similar project to run macOS binaries on Linux in the style of Wine, has only recently been able to run some small, very basic GUI applications. I like all of these goals, and especially getting it to a state where you can download a Darwin ISO running MATE should be entirely realistic to achieve in a short timeframe. A custom desktop environment is a lot more work of course, all depending on how much they intend to reuse from the Linux graphics and desktop stack. Anything beyond that, and it becomes much murkier, obviously. As always, it's all going to come down to just how many active and enthusiastic contributors they can attract, and more importantly retain once the initial excitement of this announcement wears off.
Update: that was quick! GitHub banned the "AI" company's account. Only GitHub gets to spam AI on GitHub, thank you very much. Most of the time, products with "AI" features just elicit sighs, especially when the product category in question really doesn't need to have anything to do with "AI" in any way, shape, or form. More often than not, though, such features are not only optional but easily ignorable, and we can always simply choose not to buy or use said products in the first place. I mean, over the last few days I've migrated my Pixel 8 Pro from stock Google Android to GrapheneOS as the final part of my platform transition away from big tech, and Google's insistence on shoving "AI" into everything certainly helped in spurring this along. But what are you supposed to do if an "AI" product forces itself upon you? What if you can't run away from it? What if, one day, you open your GitHub repository and see a bunch of useless PRs from an "AI" bot that claims to help you fix issues, without you asking it to do so? Well, that's what's happening to a bunch of GitHub users who were unpleasantly surprised to see garbage, useless merge requests from a random startup testing out some "AI" tool that attempts to automatically 'fix' open issues on GitHub. The proposed 'fixes' are accompanied by a disclaimer: Disclaimer: The commit was created by Latta AI and you should never copy paste this code before you check the correctness of generated code. Solution might not be complete, you should use this code as an inspiration only. This issue was tried to solve for free by Latta AI - https://latta.ai/ourmission If you no longer want Latta AI to attempt solving issues on your repository, you can block this account. Example of a public open issue with the "AI" spam Let me remind you: this tool, called "Latta AI", is doing all of this unprompted, without consent, and the commits generally seem bogus and useless, too, in that they don't actually fix any of the issues.
To make matters worse, your GitHub repository will then automatically appear as part of its marketing - again without any consent or permission from the owners of the GitHub projects in question. Clicking through to the GitHub repositories listed on the front page will reveal a lot about how developers are responding: they're not amused. Every link I clicked on had Latta AI's commit and comment marked as spam, abuse, or just outright deleted. We're talking public open issues here, so it's not like developers aren't looking for input and possible fixes from third parties - they just want that input and those possible fixes to come from real humans, not some jank code generator that's making us destroy the planet even faster. This is what the future of "AI" really looks like. It's going to make spam even easier to make, even more pervasive, and even cheaper, and it's going to infest everything. Nothing will be safe from these monkeys on typewriters, and considering what the spread of misinformation by human-powered troll farms can do, I don't think we're remotely ready for what "AI" is going to mean for our society. I can assure you lying about brown people eating cats and dogs will be remembered as quaint before this nonsense is over.
One of the things I've consistently heard from just about anyone involved in Android development is lamentation about the sorry state of the Android Emulator included in Google's Android Studio. It seems that particularly its performance is not great, with people often resorting to third-party options or real devices. Well, it seems the Android development team at Google has taken this to heart, and has spent six months focusing almost solely on fixing up the Android Emulator. We know how critical the stability, reliability, and performance of the Android Emulator is to your everyday work as an Android developer. After listening to valuable feedback about stability, reliability, and performance, the Android Studio team took a step back from large feature work on the Android Emulator for six months and started an initiative called Project Quartz. This initiative was made up of several workstreams aimed at reducing crashes, speeding up startup time, closing out bugs, and setting up better ways to detect and prevent issues in the future. Neville Sicard-Gregory at the Android Developers Blog Steps taken include moving to a newer version of Qt for the user interface of the emulator, improving the graphics rendering system used in the Android Emulator, and adding a whole bunch of tests to the existing test suite. The end result is that the number of crashes in the Android Emulator dropped by 30%, which, if borne out in the real world, will have a material impact for Android developers. During the Project Quartz effort, Google also cut the number of open issues by 44%, but they do note only 17% of those were fixed during Project Quartz, with the remainder being obsoleted or previously fixed issues. If you download or update to the latest version of Android Studio, you'll get the new and improved Android Emulator as well.
Today, we are excited to announce the launch of .NET 9, the most productive, modern, secure, intelligent, and performant release of .NET yet. It's the result of another year of effort on the part of thousands of developers from around the world. This new release includes thousands of performance, security, and functional improvements. You will find sweeping enhancements across the entire .NET stack from the programming languages, developer tools, and workloads enabling you to build with a unified platform and easily infuse your apps with AI. .NET Team at the .NET Blog All I know is that these are very important words, and a very important release, for thousands and thousands of unknown developers slaving away in obscurity, creating, maintaining, and fixing endless amounts of corporate software very few of us ever actually get to see very often. They toil away for meager pay in the 21st century version of the coal mines of the 19th century, without any recognition, appreciation, or applause. They work long hours, make their way through the urban planning hell that is modern America, and come home to make some gruel and drink water from lead pipes, waiting for the sweet relief of what little sleep they manage to get, only to do it all over again the next day. ...I may have a bit of a skewed perception of reality for most IT people. In all seriousness, .NET is a hugely popular set of tools and frameworks, and while it's probably not the most sexy topic in the tech world, any new release matters to a ton of people. This new version's main focus seems to be performance, with over 1000 performance-related changes to the various components that make up .NET. In a blog post about these performance improvements, Stephen Toub explains in great detail what some of the improvements are, and where the benefits lie.
Of course, there's an insane amount of talk about "AI" features in .NET 9, and apparently .NET MAUI is seeing a surge in popularity on Android, if you believe Microsoft (a "30% increase in developer usage" means little when you don't provide a baseline). .NET MAUI is Microsoft's cross-platform framework for building applications for Android, iOS, macOS, and Windows. Among other things, .NET MAUI 9 provides more access to platform-native features, as well as benefiting from some of the performance improvements. There's also a paragraph about .NET 9 development on Windows, just in case you thought the .NET team forgot Windows existed. With .NET 9, your Windows apps will have access to the latest OS features and capabilities while ensuring they are more performant and accessible than ever before. Whether you are starting a new modern app with WinUI 3 and the Windows App SDK or modernizing your existing WPF and WinForms applications, your Windows apps run best on .NET 9. We have been collaborating closely with the Windows developer community to bring features that you have been requesting. This includes Native AOT support for WinUI 3 for smaller and more performant apps, modern theming enhancements with Fluent UI for WPF, and WinForms gets a boost with a new Dark Mode, modern icon APIs, and improved asynchronous API access with Control.InvokeAsync. .NET Team at the .NET Blog There's way more on top of all of this, from changes to the languages .NET uses to new releases of the various developer tools, like Visual Studio.
quBSD is a FreeBSD jails/bhyve wrapper which implements a Qubes inspired containerization schema. Written in shell, based on zfs, and uses the underlying FreeBSD tools. quBSD GitHub page quBSD really seems to build upon the best FreeBSD has to offer. Neat.
Speaking of Steam, the Linux version of Valve's gaming platform has just received a pretty substantial set of fixes for crashes, and Timothee "TTimo" Besset, who works for Valve on Linux support, has published a blog post with more details about what kind of crashes they've been fixing. The Steam client update on November 5th mentions "Fixed some miscellaneous common crashes." in the Linux notes, which I wanted to give a bit of background on. There's more than one fix that made it in under the somewhat generic header, but the one change that made the most significant impact to Steam client stability on Linux has been a revamping of how we are approaching the setenv and getenv functions. One of my colleagues rightly dubbed setenv "the worst Linux API". It's such a simple, common API, available on all platforms, that it was a little difficult to convince ourselves just how bad it is. I highly encourage anyone who writes software that will run on Linux at some point to read through RachelByTheBay's very engaging post on the subject. Timothee "TTimo" Besset This indeed seems to be a specific Linux problem, and due to the variability in Linux systems - different distributions, extensive user customisation, and so on - debugging information was more difficult to parse than on Windows and macOS. After a lot of work grouping the debug information to try and make sense of it all, it turned out that the two functions in question were causing issues in threads other than those that used them. They had to resort to several solutions, from reducing the reliance on setenv and refactoring it with execvpe, to reducing the reliance on getenv through caching, to introducing an "environment manager" that "pre-allocates large enough value buffers at startup for fixed environment variable names, before any threading has started". It was especially this last one that had a major impact on reducing the number of crashes with Steam on Linux.
Besset does note that these functions are still used far too often, but that at this point it's out of their control because that usage comes from the libraries of the operating system, like x11, xcb, dbus, and so on. Besset also mentions that it would be much better if this issue can be addressed in glibc, and in the comments, a user by the name of Adhemerval reports that this is indeed something the glibc team is working on.
Steam has finally stopped working on several older Windows operating systems, following a warning from Valve that it planned to drop support earlier this year. With little fanfare, Windows 7 and Windows 8 gaming on Steam is no longer possible following the most recent Steam client update on November 5. Ben Stockton at PCGamesN It's honestly wild that Valve supported Windows 7 and 8 for this long for Steam in the first place. They've been out of support for a long time, and at this point in time, less than 0.3% of Steam users were using Windows 7 or 8. Investing any resources in continuing to support them would be financially irresponsible, while also helping, in a small way, to keep people on such unsupported, insecure systems to this day. I'm sure at least one of you is still rocking Windows 7 or 8 as your daily driver operating system, so I'm sorry if you don't want to hear this, but it's really, really time to move on. Buying a Windows 10 or 11 license on eBay or whatever costs a few euros at most - if you're not eligible for one of the free upgrade programs Microsoft ran - and especially Windows 10 should run just fine on pretty much anything Windows 7 or 8 runs on. Do note that with Windows 10, though, you'll be back in the same boat next year.
More bad news from Mozilla. The Mozilla Foundation, the nonprofit arm of the Firefox browser maker Mozilla, has laid off 30% of its employees as the organization says it faces a "relentless onslaught of change." Announcing the layoffs in an email to all employees on October 30, the Mozilla Foundation's executive director Nabiha Syed confirmed that two of the foundation's major divisions - advocacy and global programs - are "no longer a part of our structure." Zack Whittaker at TechCrunch This means Mozilla will no longer be advocating for an open web, privacy, and related ideals, which fits right in with the organisation's steady decline into an ad-driven effort that also happens to be making a web browser used by, I'm sorry to say, effectively nobody. I just don't know how many more signs people need to see before realising that the future of Firefox is very much at stake, and that we're probably only a few years away from losing the only non-big tech browser out there. This should be a much bigger concern than it seems to be, especially to the Linux and BSD world, which relies heavily on Firefox, without a valid alternative to shift to once the browser's no longer compatible with the various open source requirements enforced by Linux distributions and the BSDs. What this could also signal is that the sword of Damocles dangling above Mozilla's head is about to come down, and that the people involved know more than we do. Google is effectively bankrolling Mozilla - for about 80% of its revenue - but that deal has come under increasing scrutiny from regulators, and Google itself, too, must be wondering why it's wasting money supporting a browser nobody's using. We're very close to a web ruled by Google and Apple. If that prospect doesn't utterly terrify you, I honestly wonder what you're doing here, reading this.
Earlier this year, a proposal was made to change the primary edition of Fedora from the GNOME variant to the KDE variant. This proposal, while serious, was mostly intended to stir up discussion about the position of the Fedora KDE spin within the larger Fedora community, and it seems this has had its intended effect. A different but related proposal - to make Fedora KDE equal in status to the Fedora GNOME variant - has been accepted. After a few months of being live, the proposal has now been unanimously accepted, which means that starting with Fedora 42, the GNOME and KDE versions will have equal status, and thus will receive equal marketing and positioning on the website. Considering how many people really enjoy Fedora KDE, this is a great outcome, and probably the fairest way to handle the situation for a distribution as popular as Fedora. I use Fedora KDE on all my machines, so for me, this is great news.
LXQt, the desktop environment that is to KDE what Xfce is to GNOME, has released version 2.1.0, and while the version number change seems average, it's got a big ace up its sleeve: you can now run LXQt in a Wayland session, and they claim it works quite well, too, and it supports a wide variety of compositors. Through its new component lxqt-wayland-session, LXQt 2.1.0 supports 7 Wayland sessions (with Labwc, KWin, Wayfire, Hyprland, Sway, River and Niri), has two Wayland back-ends in lxqt-panel (one for kwin_wayland and the other general), and will add more later. All LXQt components that are not limited to X11 - i.e., most components - work fine on Wayland. The sessions are available in the new section Wayland Settings inside LXQt Session Settings. At least one supported Wayland compositor should be installed in addition to lxqt-wayland-session for it to be used. There is still hard work to do, but all of the current LXQt Wayland sessions are quite usable; their differences are about what the supported Wayland compositors provide. LXQt 2.1.0 release announcement This is great news for LXQt, as it ensures the desktop environment is ready to keep up with what modern Linux distributions provide. Crucially, and in line with what we've come to expect from LXQt, X11 support is a core part of the project, and they even go so far as to say the X11 session will be supported "indefinitely", which should set people preferring to stay on X11 at ease. I personally may have gleefully left X11 in the dustbin of history, but many among us haven't, and it's welcome to see LXQt's clear promise here. Many of the other improvements in this release are tied to Wayland, making sure the various components work and Wayland settings can be adjusted. On top of that, there's the usual list of bug fixes and smaller changes, too.
The current version of Windows on ARM contains Prism, Microsoft's emulator that allows x86-64 code to run on ARM processors. While it was already relatively decent on the recent Snapdragon X platform, it could still be very hit-or-miss with what applications it would run, and especially games seemed to be problematic. As such, Microsoft has pushed out a major update to Prism that adds support for a whole bunch of extensions to the x86 architecture. This new support in Prism is already in limited use today in the retail version of Windows 11, version 24H2, where it enables the ability to run Adobe Premiere Pro 25 on Arm. Starting with Build 27744, the support is being opened to any x64 application under emulation. You may find some games or creative apps that were blocked due to CPU requirements before will be able to run using Prism on this build of Windows. At a technical level, the virtual CPU used by x64 emulated applications through Prism will now have support for additional extensions to the x86 instruction set architecture. These extensions include AVX and AVX2, as well as BMI, FMA, F16C, and others, that are not required to run Windows but have become sufficiently commonplace that some apps expect them to be present. You can see some of the new features in the output of a tool like Coreinfo64.exe. Amanda Langowski and Brandon LeBlanc on the Windows Blog Hopefully this makes running existing x86 applications that don't yet have an ARM version a more reliable affair for Windows on ARM users.
A long, long time ago, back when running BeOS as my main operating system had finally become impossible, I had a short stint running QNX as my one and only operating system. In 2004, before I joined OSNews and became its managing editor, I also wrote and published an article about QNX on OSNews, which is cringe-inducing to read over two decades later (although I was only 20 when I wrote that - I should be kind to my young self). Sadly, the included screenshots have not survived the several transitions OSNews has gone through since 2004. Anyway, back in those days, it was entirely possible to use QNX as a general purpose desktop operating system, mostly because of two things. First, the incredible Photon microGUI, an excellent and unique graphical environment that was a joy to use, and second, a small but dedicated community of enthusiasts, some of them QNX employees, who ported a ton of open source applications, from basic open source tools to behemoths like Thunderbird, the Mozilla Suite, and Firefox, to QNX. It even came with an easy-to-use package manager and associated GUI to install all of these applications without much hassle. Using QNX like this was a joy. It really felt like a tightly controlled, carefully crafted user experience, despite desktop use being so low on the priority list for the company that it might as well have not been on there at all. Not long after, I think a few of the people inside QNX involved with the QNX desktop community left the company, and the entire thing just fizzled out when the company was acquired by Harman International. It became clear the company had lost all interest, a feeling only solidified once BlackBerry acquired it. Somewhere in between, the company released some of its code under a not-quite-open-source license, accompanied by a rather lacklustre push to get the community interested again. This, too, fizzled out.
Well, it seems the company is trying to reverse course, and has started courting the enthusiast community once again. This time, it's called QNX Everywhere, and it involves making QNX available for non-commercial use for anyone who wants it. No, it's not open source, and yes, it still requires jumping through some hoops, but it's better than nothing. In addition, QNX also put a bunch of open source demos, applications, frameworks, and libraries on GitLab. One of the most welcome new efforts is a bootable QNX image for the Raspberry Pi 4 (and only the 4, sadly, which I don't own). It comes with a basic set of demo applications you can run from the command line, including a graphical web browser, but sadly, it does not seem to come with Photon microGUI or any modern equivalent. I'm guessing Photon hasn't seen a ton of work since its golden days two decades ago, which might explain why it's not here. There's also a list of current open source ports, which includes chunks of toolkits like GTK and Qt, and a whole bunch of other stuff. Honestly, as cool as this is, it seems it's mostly aimed at embedded developers instead of weird people who want to use QNX as a general purpose operating system, which makes total sense from QNX's perspective. I hope Photon microGUI will make a return at some point, and it would be awesome - but, I expect, unlikely - if QNX could be released as open source, so that a community of enthusiasts would be more likely to spring up around it. For now, without much for a non-developer like me to do with it, it's not making me run out to buy a Raspberry Pi 4 just yet.
Old-school Apple fans probably remember a time, just before the iPhone became a massive gaming platform in its own right, when Apple released a wide range of games designed for late-model clickwheel iPods. While those clickwheel-controlled titles didn't exactly set the gaming world on fire, they represent an important historical stepping stone in Apple's long journey through the game industry. Today, though, these clickwheel iPod games are on the verge of becoming lost media - impossible to buy or redownload from iTunes and protected on existing devices by incredibly strong Apple DRM. Now, the classic iPod community is engaged in a quest to preserve these games in a way that will let enthusiasts enjoy these titles on real hardware for years to come. Kyle Orland at Ars Technica A nice effort, of course, and I'm glad someone is putting time and energy into preserving these games and making them accessible to a wider audience. As is usual with Apple, these small games were heavily encumbered with DRM, being locked both to the original iTunes account that bought them, and to the specific hardware identifier of the iPod they were initially synchronised to using iTunes. A clever way around this DRM exists, and it involves collectors and enthusiasts reauthorising their iTunes accounts to the same iTunes installation, and thus adding their respective iPod games to that single iTunes installation. Any other iPods can then be synced to that master account. The iPod Clickwheel Games Preservation Project takes this approach to the next level, by setting up a Windows virtual machine with iTunes installed in it, which can then be shared freely around the web for people to add the games to their collection. This is a rather remarkably clever method of ensuring these games remain accessible, but it obviously does require knowledge of setting up QEMU and USB passthrough.
I personally never owned an iPod - I was a MiniDisc fanatic until my Android phone took over the role of music player - so I also had no clue these games even existed. I assume most of them weren't exactly great to control with the limited input method of the iPod, but that doesn't mean there won't be huge numbers of people who have fond memories of playing these games when they were younger - and thus, they are worth preserving. We can only hope that one day, someone will create a virtual machine that can run the actual iPod operating system, called Pixo OS.
Nothing is sacred. With this update, we are introducing the ability to rewrite content in Notepad with the help of generative AI. You can rephrase sentences, adjust the tone, and modify the length of your content based on your preferences to refine your text. Dave Grochocki at the Windows Insider Blog This is the reason everything is going to shit.
Today, Microsoft announced the general availability of Windows Server IoT 2025. This new release includes several improvements, including advanced multilayer security, hybrid cloud agility, AI, performance enhancements, and more. Microsoft claims that Windows Server IoT 2025 will be able to handle the most demanding workloads, including AI and machine learning. It now has built-in support for GPU partitioning and the ability to process large datasets across distributed environments. With Live Migration and High Availability, it also offers a high-performance platform for both traditional applications and advanced AI workloads. Pradeep Viswanathan at Neowin Windows Server IoT 2025 brings the same benefits, new features, and improvements as the just-released regular Windows Server 2025. I must admit I'm a little unclear as to what Windows Server IoT has to offer over the regular edition, and reading the various Microsoft marketing materials and documents doesn't really make it any clearer for me either, since I'm not particularly well-versed in all that enterprise networking lingo.
NetBSD is an open-source, Unix-like operating system known for its portability, lightweight design, and robustness across a wide array of hardware platforms. Initially released in 1993, NetBSD was one of the first open-source operating systems based on the Berkeley Software Distribution (BSD) lineage, alongside FreeBSD and OpenBSD. NetBSD's development has been led by a collaborative community and is particularly recognized for its "clean" and well-documented codebase, a factor that has made it a popular choice among users interested in systems programming and cross-platform compatibility. Andre Machado I'm not really sure what to make of this article, since it mostly reads like an advertisement for NetBSD, but considering NetBSD is one of the lesser-talked-about variants of an operating system family that already sadly plays second fiddle to the Linux behemoth, I don't think giving it some additional attention is really hurting anybody. The article still gives a solid overview of the history and strengths of NetBSD, which makes it a good introduction. I have personally never tried NetBSD, but it's on my list of systems to try out on my PA-RISC workstation, since from what I've heard it's the only BSD which can possibly load up X11 on the Visualize FX10pro graphics card it has (OpenBSD can only boot to a console on this GPU). While I could probably coax some cobbled-together Linux installation into booting X11 on it, where's the fun in that? Do any of you lovely readers use NetBSD for anything? FreeBSD and even OpenBSD are quite well represented as general purpose operating systems in the kinds of circles we all frequent, but I rarely hear about people using NetBSD other than explicitly because it supports some outdated, arcane architecture in 2024.
Another month lies behind us, so another monthly update from Redox is upon us. The biggest piece of news this time is undoubtedly that Redox now runs on RISC-V - a major achievement. Andrey Turkin has done extensive work on RISC-V support in the kernel, toolchain and elsewhere. Thanks very much Andrey for the excellent work! Jeremy Soller has incorporated RISC-V support into the toolchain and build process, has begun some refactoring of the kernel and device drivers to better handle all the supported architectures, and has gotten the Orbital Desktop working when running in QEMU. Ribbon and Ron Williams That's not all, though. Redox on the Raspberry Pi 4 boots to the GUI login screen, but needs more work - especially on USB support - to become a fully usable target. The application store from the COSMIC desktop environment has been ported, and as part of this effort, Redox also adopted FreeDesktop standards to make package installation easier - and it just makes sense to do so, with more and more of COSMIC making its way to Redox. Of course, there's also a slew of smaller improvements to the kernel, various drivers including the ACPI driver, RedoxFS, Relibc, and a lot more. The progress Redox is making is astounding, and while that's partly because it's easier to make progress when there's a lot of low-hanging fruit, as there inevitably will be in a relatively new operating system, it's still quite an achievement. I feel very positive about the future of Redox, and I can't wait until it reaches a point where more general purpose use becomes viable.
Microsoft has confirmed the general availability of Windows Server 2025, which, as a long-term servicing channel (LTSC) release, will be supported for almost ten years. This article describes some of the newest developments in Windows Server 2025, which boasts advanced features that improve security, performance, and flexibility. With faster storage options and the ability to integrate with hybrid cloud environments, managing your infrastructure is now more streamlined. Windows Server 2025 builds on the strong foundation of its predecessor while introducing a range of innovative enhancements to adapt to your needs. What's new in Windows Server 2025 article It should come as no surprise that Windows Server 2025 comes loaded with a ton of new features and improvements. I already covered some of those, such as DTrace by default, NVMe and storage improvements, hotpatching, and more. Other new features we haven't discussed yet are a massive list of changes and improvements to Active Directory, a feature-on-demand option for Azure Arc, support for Bluetooth keyboards, mice, and other peripherals, and tons of Hyper-V improvements. SMB is also seeing so many improvements it's hard to pick just a few to highlight, and software-defined networking is also touted as a major aspect of Server 2025. With SDN, you can separate the network control plane from the data plane, giving administrators more flexibility in managing their network. I could keep going, listing change after change, but you get the idea - there's a lot here. You can try Windows Server 2025 for free for 180 days, as a VM in Azure, a local virtual machine image, or installed locally through an ISO image.
Some months ago, I got really fed up with C. Like, I don't hate C. Hating programming languages is silly. But it was way too much effort to do simple things like lists/hashmaps and other simple data structures and such. I decided to try this language called Odin, which is one of these "Better C" languages. And I ended up liking it so much that I moved my game Artificial Rage from C to Odin. Since Odin has support for Raylib too (like everything really), it was very easy to move things around. Here's how it all went... well, what I remember of it, at least. Akseli Lahtinen You programmers might've thought you escaped the wrath of Monday on OSNews, but after putting the IT administrators to work in my previous post, it's now time for you to get to work. If you have a C codebase and want to move it to something else, in this case Odin, Lahtinen's article will send you on your way. As someone who barely knows how to write HTML, it's difficult for me to say anything meaningful about the technical details, but I feel like there's a lot of useful, first-hand info here.
It's the start of the work week, so for the IT administrators among us, I have another great article by friend of the website, Stefano Marinelli. This article covers migrating a Proxmox-based setup to FreeBSD with bhyve. The load is not particularly high, and the machines have good performance. Suddenly, however, I received a notification: one of the NVMe drives died abruptly, and the server rebooted. ZFS did its job, and everything remained sufficiently secure, but since it's a leased server and already several years old, I spoke with the client and proposed getting more recent hardware and redoing the setup based on a FreeBSD host. Stefano Marinelli If you're interested in moving one of your own setups, or one of your clients' setups, from Linux to FreeBSD, this is a great place to start and get some ideas, tips, and tricks. Like I said, it's Monday, and you need to get to work.