Remember when Qualcomm promised Linux would be a first-tier platform alongside Windows for its Snapdragon X Elite, almost a year ago now? Well, Snapdragon X laptops have been on the market for a while running Windows, but Linux support is still a complete crapshoot, despite the lofty promises by Qualcomm. Tuxedo, a European Linux OEM that promised to ship a Snapdragon X laptop running Linux, has posted an update on its progress, and it's not looking good. While Tuxedo did reach a major milestone last week by sending the laptop's device tree to the LKML, that's where the good news ends. The next step is to support additional components of the ARM notebook within the device tree. This includes all USB functionalities, including USB4, external monitor connectivity via HDMI, and audio features, such as the headset jack. Additionally, driver testing is on the agenda. Unfortunately, a planned collaboration with Qualcomm, the manufacturer of the Snapdragon X Elite, did not materialize. However, we are in contact with the ARM specialists at Linaro and have sent test devices to them. We hope to receive valuable feedback from their developers and the community in the near future. Tuxedo's website This seems to indicate that Qualcomm isn't as interested in Linux support after all, which may be because the Snapdragon X machines haven't exactly taken over the laptop market as Microsoft and Qualcomm had hoped. The market for these things is probably not large enough for Qualcomm to justify investing in Linux support, especially when Windows on ARM is apparently not up to snuff yet either. In case you are unaware of why device trees are such a big thing in ARM land, it's because ARM devices do not have a nice ACPI table for operating systems to read system information from. Whereas x86 devices have their hardware components laid out in a nice ACPI table in UEFI, ARM devices do not, meaning that the Linux kernel needs to know specifically which device you're using so it can load the correct device tree. On x86, this isn't necessary, as the Linux kernel can just read the ACPI table, which works 99% of the time to get it to boot, even if specific components might not be supported (yet). On ARM, without a device tree, the Linux kernel doesn't know what to do. That's one of the major reasons why it's so hard for ARM to take off in the same way x86 once did. It's just not designed to be as universally compatible and interoperable as we've come to expect from the x86 world, and I don't think anybody has any vested interest in changing that. I had hoped Microsoft might throw its weight around here, but it seems that's not happening either. The ARM desktop/laptop revolution seems mostly confined to Apple for now.
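To make the contrast concrete, here's a minimal sketch (my own illustration, not anything from Tuxedo or Qualcomm) that checks which of the two hardware-description mechanisms a running Linux system actually booted with, using the standard paths the kernel exposes for this:

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether a running Linux system was described to the
kernel via a device tree or via ACPI tables. The paths are the standard
procfs/sysfs locations Linux exposes; nothing here is Snapdragon-specific."""

from pathlib import Path

DT_ROOT = Path("/proc/device-tree")              # present when the kernel booted from a device tree
ACPI_TABLES = Path("/sys/firmware/acpi/tables")  # present when the firmware handed over ACPI tables

def hardware_description() -> str:
    if DT_ROOT.is_dir():
        # The root "compatible" property names the exact board, e.g.
        # "raspberrypi,4-model-b", which is why the kernel must ship a
        # matching .dtb for every supported device.
        compatible = (DT_ROOT / "compatible").read_bytes().split(b"\0")[0].decode()
        return f"device tree (board: {compatible})"
    if ACPI_TABLES.is_dir():
        return "ACPI (generic enumeration, no per-board file needed)"
    return "unknown"

if __name__ == "__main__":
    print(hardware_description())
```

On a device-tree system, that root compatible string is exactly why every individual laptop model needs its own .dtb accepted upstream before mainline Linux can fully support it; on an ACPI system, one generic kernel image simply enumerates whatever the firmware describes.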
Another month, another month of Redox improvements and bug fixes. This month saw a ton of work on process management as part of the NLnet grant, massive improvements to the USB stack, including a USB hub driver, as well as the usual kernel and driver improvements. On top of all this work, there's the usual long list of bugfixes and smaller improvements.
Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing? When a house is being built, tons of people are involved: architects, civil engineers, plumbers, electricians, bricklayers, interior designers, roofers, surveyors, pavers, you name it. You don't expect a single person, or even a whole single company, to be able to do all of those. Vitor M. de Sousa Pereira I've always found that software development gets a ton of special treatment and leeway in quality expectations, and this has allowed the kind of stuff the linked article is writing about to become the norm. Corporations can demand so much from developers and programmers to the point where expecting quality is wholly unreasonable, because there's basically no consequences for delivering a shit product. Bugs, crashes, security issues, lack of documentation, horrid localisation - it's all par for the course in software, yet we would not tolerate any of that in almost any other type of product. While I'm sure some of this can be attributed to developers themselves, most of it seems to stem from incompetent managers imposing impossible deadlines downwards and setting unrealistic expectations upwards - you know, kick down, lick up - creating a perfect storm of incompetence. We all know it, we all experience it every day, and we all hate it - but we've just accepted it. As consumers, as developers, as regulatory bodies. It's too late to fix this now. Software development will forever exist as a sort of no man's land of quality expectations, free from regulations, warranties, and consumer protections, and imposing them now after the fact is never going to be accepted by the industry and won't ever make it through any lawmaking process of any country, and we all suffer from it, both as users of software and as makers of it.
Apple's Darwin operating system is the Unix-like core underpinning macOS, iOS, and all of Apple's modern OS platforms. At its heart lies the XNU kernel - an acronym humorously standing for "X is Not Unix". XNU is a unique hybrid kernel that combines a Mach microkernel core with components of BSD Unix. This design inherits the rich legacy of Mach (originating from 1980s microkernel research) and the robust stability and POSIX compliance of BSD. The result is a kernel architecture that balances modularity and performance by blending microkernel message-passing techniques with a monolithic Unix kernel structure. We'll go through a chronological exploration of Darwin and XNU's evolution - from Mach and BSD origins to the modern kernel features in macOS on Apple Silicon and iOS on iPhones. We'll follow this with a deep dive into the architectural milestones, analyze XNU's internal design (Mach-BSD interaction, IPC, scheduling, memory management, virtualization), and examine how the kernel and key user-space components have adapted to new devices and requirements over time. Tanuj Ravi Rao Despite its popularity and open source kernel, it's quite rare to see detailed deep-dives into the underpinnings of macOS. It always surprised me that nobody took whatever Apple threw across the fence every macOS release and ran with it - much further than the "run existing open source desktops, but worse" we never got when it comes to Darwin distributions (although this might change) - so perhaps having more approachable articles like these out and about will get people interested.
This is a very small blog post about my first reverse engineering project, in which I don't really reverse engineer anything yet, but I am just getting started! A family member asked me to add additional book data to the LeapStart he bought for his son, this is the starting point here. leloubil's blog We've all seen toy, child-focused computers like these, and I always find them deeply fascinating. I'm not buying them for my own kids - they'll get their start on a "real" computer I'll set up for them to explore and break - but I see their value, and they're probably a better choice than giving a kid a tablet or whatever (which my wife and I are opposed to for our kids). What fascinates me about them is, of course, what software, and more specifically, what operating system they run. It turns out this one most likely runs on something called µC/OS-II, one of the many relatively obscure embedded operating systems you never hear about. µC/OS is a full-featured embedded operating system originally developed by Micrium. In addition to the two highly popular kernels, µC/OS features support for TCP/IP, USB-Device, USB-Host, and Modbus, as well as a robust File System. µC/OS GitHub page The documentation provides a lot more detail about its capabilities, so if you're interested in learning more, that's your starting point.
Good news for Windows users, and for once there's not a hint of sarcasm here: Microsoft has started rolling out Windows Hotpatch to the client versions of Windows. This feature, which comes from the server versions of Windows, allows the operating system to install patches to in-memory processes, removing the need for a number of restarts. Obviously, this is hugely beneficial for users, as they won't have to deal with constant reboots whenever a new bunch of Windows updates are pushed. There are some limitations and other things you should know. First, the way the system works is that every quarter, installations with Hotpatch enabled will receive a quarterly baseline update that requires a reboot, followed by two months of hotpatches which do not require a reboot. Hotpatches can only be security updates; new features and enhancements are rolled up into the quarterly baseline updates. In other words, while this will not completely eliminate reboots, it will cut the number of reboots per year down from twelve to just four, which is substantial, and very welcome in especially corporate environments. The biggest limitation, however, is that Windows Hotpatch will only make it to one client version of Windows, Enterprise version 24H2, so users of the Home or Professional version are out of luck for now. On top of that, you're going to need a Microsoft subscription, use Microsoft Intune, and an Intel/AMD-based system (Hotpatch will come to ARM later). I hope it'll make its way to Windows 11 Home and Professional, too, because I'm fairly sure quite a few of you using Windows would love to set this up on your own machines.
This question was asked during my Boot Camp presentation last fall in Boston, and over the past 35 years dozens of times people have asked, "how big is VMS?" That translates into "how many lines of code are in VMS?" I thought it was time to at least make a stab at pursuing some insight into the answer. I wrote some command procedures to count the number of source lines in .B32, .B64, .C, .MAR, .M64, and .S files. Not counted are blank lines and lines beginning with the standard comment characters and miscellaneous directives for the particular language. Clair Grant As always with the 'lines of code' metric, there's some real arbitrariness going on, and in this case that means things like excluding networking, which to me seems like a core part of an operating system, but alas, choices need to be made. The final tally for lines of code, as per the definition used in the article, in the most recent version of OpenVMS, version 9.2-3, is almost 1.9 million. Do with that information as you please. What's really fascinating, though, are the deltas between the versions investigated in this article: V6.2 (May 1995, port to Alpha), V7.2 (February 1999, kernel threads, 64-bit APIs, Galaxy, and more), V8.2 (February 2005, port to Itanium), V9.2-3 (December 2024, port to x86). Going from one version to the next, roughly 400,000 lines of code were added each time - the article doesn't theorise about the consistency of this number, and I suspect it's mostly just a fun coincidence, but it does jump out.
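For those curious what such a count involves in practice, here's a rough Python sketch of the same idea. The real counting was done with DCL command procedures on OpenVMS, and the per-language comment markers below are my own assumptions for illustration, not VSI's exact rules:

```python
#!/usr/bin/env python3
"""Illustrative sketch of the counting approach described in the article:
walk a source tree, skip blank lines and lines that start with a comment
marker, and tally the rest per file extension."""

import os
import sys
from collections import Counter

# Hypothetical mapping of extension -> leading comment marker(s).
COMMENT_PREFIXES = {
    ".c": ("//", "/*", "*"),
    ".b32": ("!",),   # BLISS-32
    ".b64": ("!",),   # BLISS-64
    ".mar": (";",),   # MACRO-32
    ".m64": (";",),   # MACRO-64
    ".s": ("#", "//", "/*"),
}

def count_lines(root: str) -> Counter:
    totals: Counter = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower()
            if ext not in COMMENT_PREFIXES:
                continue
            with open(os.path.join(dirpath, name), errors="replace") as fh:
                for line in fh:
                    stripped = line.strip()
                    if not stripped:
                        continue                              # blank line
                    if stripped.startswith(COMMENT_PREFIXES[ext]):
                        continue                              # whole-line comment
                    totals[ext] += 1
    return totals

if __name__ == "__main__":
    counts = count_lines(sys.argv[1] if len(sys.argv) > 1 else ".")
    for ext, n in counts.most_common():
        print(f"{ext:6} {n:>10,}")
    print(f"total  {sum(counts.values()):>10,}")
```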
Microsoft is celebrating its 50th anniversary, and in honour of this milestone, Bill Gates has published a blog post about the first code the company ever wrote. In 1975, Paul Allen and I created Microsoft because we believed in our vision of a computer on every desk and in every home. Five decades later, Microsoft continues to innovate new ways to make life easier and work more productive. Making it 50 years is a huge accomplishment, and we couldn't have done it without incredible leaders like Steve Ballmer and Satya Nadella-along with the many people who have worked at Microsoft over the years. Bill Gates There's obviously no denying the impact Microsoft has had on the computer industry and the world as a whole, and a lot of that impact is not exactly what you would call positive. I find the fact that the blog post by Gates is nothing but JavaScript that slows down some browsers and devices, breaks page up/page down navigation for some people, does not allow for text selection, and whose source code is just a bunch of scripts without any of the actual text is a biting metaphor for the role Microsoft has played in the industry. Making today's celebrations even more biting is the fact that Microsoft's role in the ongoing genocide in Gaza is causing a lot of unrest within the company. Twice now today, presentations and talks by Microsoft's current and former CEOs have been interrupted by Microsoft employees protesting Microsoft's contributions to the genocide in Gaza, and before the day's over there will probably be more incidents like these. One of the Microsoft employees who protested, Ibtihal Aboussad, also sent an email to thousands of Microsoft employees, detailing why Microsoft employees are protesting today. My name is Ibtihal, and for the past 3.5 years, I've been a software engineer on Microsoft's AI Platform org. I spoke up today because after learning that my org was powering the genocide of my people in Palestine, I saw no other moral choice. This is especially true when I've witnessed how Microsoft has tried to quell and suppress any dissent from my coworkers who tried to raise this issue. For the past year and a half, our Arab, Palestinian, and Muslim community at Microsoft has been silenced, intimidated, harassed, and doxxed, with impunity from Microsoft. Attempts at speaking up at best fell on deaf ears, and at worst, led to the firing of two employees for simply holding a vigil. There was simply no other way to make our voices heard. Ibtihal Aboussad It goes without saying that Ibtihal Aboussad can probably go and clean out her desk after this, but giving up what must be a high-paying job - and possibly risking worse under the current Trump regime - for standing up and protesting an ongoing genocide is nothing but praise-worthy and noble. It obviously won't stop the genocide or make Microsoft even blink, but it's better than doing nothing, and it does painfully highlight how many other Microsoft employees remain silent while the company they work for does an IBM. I don't really care about Microsoft's 50th anniversary. Look at any of the company's current products - Office, Windows, the AI" stuff - and there's clearly nothing left. They're empty shells of what they used to be, hollowed out, their contents replaced with upsells, dark patterns, cruft, and AI" nonsense nobody wants. But hey, at least Microsoft is creating synergies to make eradicating Gazans easier. Here's your party popper.
The AlphaStation 500 is a workstation from Digital, circa 1996. Mine is a 500 MHz model and has an Alpha 21164A processor (aka EV56). And the way it boots is weird. On your common-or-garden PC, there has always been some kind of ROM chip. It holds a piece of firmware known as the BIOS. This ROM chip is available at a well-known location in the processor's address space (remembering that any PC processor boots up in 16-bit, 8088 compatible mode, with a 1 MiB address space, just like an IBM PC 5150) and the processor just starts executing code in it after reset. The Alpha (or at least this AlphaStation 500 - although I think they mostly worked like this) is different. Jonathan 'theJPster' Pallant A great read, but a little bit over my head considering I'm anything but a programmer or developer. Still, even I managed to get the basic gist and learn quite a bit from this article, and especially the part about how the AlphaStation uses a little jumper to tell the SROM exactly which stream of boot code to send to the processor is fascinating. I'm not sure just how unusual the Alpha's way of booting is, but I'd at least never heard of it.
There isn't a lot to this story beyond the fact that in around 1990 I helped debug someone's Lotus 1-2-3 set up via fax. But it's a good reminder of how important the Zeroth Law of Debugging is (see below). Without some sort of online connection with these folks, and with transatlantic phone calls being very, very expensive (I was in the UK, they were in the US), fax was the obvious answer. John Graham-Cumming Honestly, this would still be easier today than some of the bug reporting systems I've seen.
If you're elbow-deep in '90s retrocomputing and maintain a fleet of your own personal seemingly identical but definitely completely different Windows 98 machines, Windows 9x QuickInstall is tailor-made just for you. It takes the root file system of an already installed Windows 98 system and packages it, whilst allowing drivers and tools to be slipstreamed at will. For the installer, it uses Linux as a base, paired with some tools to allow hard disk partitioning and formatting, as well as a custom installer with a custom data packing method that is optimized for streaming directly from CD to the hard disk without any seeking. Windows 9x QuickInstall GitHub page What you end up with is an easily customisable packaged Windows 98 installation that can be installed onto computers (or in virtual machines, I guess) at blazing speeds. It's a relatively simple concept, but its implementation is genius and definitely not simple at all. This is a great tool for the retrocomputing community.
Right off the bat, there is not that much use for a Pixel Watch with Windows on it. The project, as the maker says, is "for shits and giggles" and more like an April Fool's joke. However, it shows how capable modern smartwatches are, with the Pixel Watch 3 being powered by a processor with four ARM Cortex A53 cores, 2GB of DDR4X memory, and 32GB of storage. Getting Windows to run on Gustave's arm, as you can imagine, took some time and effort of inspecting a rooted boot image, modifying the stock UEFI to run custom UEFI, editing the ACPI table, and patching plenty of other files. The result of all that is a Pixel Watch 3 with Windows PE. Taras Buria at Neowin More of this sort of nonsense, please. This is such a great idea, especially because it's so utterly useless and pointless. However pointless it may be, though, it does show that Windows on ARM is remarkably flexible, as it's been ported to a variety of ARM devices it was never supposed to run on. With Microsoft's renewed entry into the ARM world with Windows on ARM and Qualcomm, I would've hoped for more standardisation in the ARM space to bring it closer to the widely compatible world of x86. That, sadly, has not yet happened, and I doubt it ever will - it seems like ARM is already too big of a fragmented mess to be consolidated for easy portability for operating systems. Instead, individual crazy awesome people have to manually port Windows to other ARM variants, and while those are cool projects, it's kind of sad.
FreeDOS is a free and open source operating system designed to be compatible with MS-DOS. Developed to keep the DOS experience alive even after Microsoft ended support for MS-DOS, FreeDOS has grown into a complete environment that not only preserves classic DOS functionality but also introduces modern enhancements. Its simplicity and low resource requirements have made it a cherished resource for retro computing enthusiasts and a practical tool for embedded systems and legacy hardware. Andre Machado A short but useful overview of what FreeDOS is. One of my favourite stories about FreeDOS will always be not just that HP offered it as an option on some of its laptops - supposedly because it couldn't sell laptops without an operating system preinstalled - but also just how convoluted the setup of this preinstalled copy of FreeDOS was. They shipped several FreeDOS virtual machines on top of a minimal installation of Debian, in a complex web of operating systems and VMs.
Nova Custom, based in The Netherlands, makes laptops focused on privacy, customisation, and freedom. Nova Custom laptops ship with either Linux, Windows, or no operating system, and they're uniquely certified for Qubes OS (the V54 model will be certified soon), the ultra-secure and private operating system. On top of that, Nova Custom laptops come with Dasharo coreboot firmware preinstalled, which is completely open source, instead of a proprietary BIOS. Nova Custom can also disable the Intel Management Engine for you, and you can opt for Dasharo coreboot+Heads for the ultimate in boot security. Nova Custom offers visual customisations, too, including engraving a logo or text of your choice on the metal screen lid and/or palmrest and adding your own boot logo. They also offer privacy customisations like removing the microphone and webcam, installing a privacy screen, and more. A small touch I personally appreciate: Nova Custom offers a long, long list of keyboard layouts, as well as the option to customise the super key. Nova Custom products enjoy 3 years of warranty, as well as updates and spare parts for at least seven years after the launch of a product, which includes everything from motherboard replacements down to sets of screws. Nova Custom laptops can be configured with a wide variety of Intel processor options, as well as a choice between integrated Intel GPUs or Nvidia laptop GPUs. Thanks to Nova Custom for sponsoring OSNews!
RISC OS, the operating system from the United Kingdom originally designed to run on Acorn Computer's Archimedes computers - the first ARM computers - is still actively developed today. Especially since the introduction of the Raspberry Pi, new life was breathed into this ageing operating system, and it has gained quite a bit of steady momentum ever since, with tons of small updates, applications, and new hardware support, including things like support for wireless networking. This development has always been a bit piecemeal, though, and the pace has never been exceptionally fast. Now, though, time really is ticking for RISC OS: popular RISC OS platforms like the Raspberry Pi are moving to 64-bit ARM only, and this poses a big problem for RISC OS: most of it is written in pure 32-bit ARM assembly. As you can imagine, the supply of capable 32-bit ARM boards is going to dwindle over the coming years, which would put RISC OS right back where it was before the launch of the Raspberry Pi: floundering, relying on old hardware. This is obviously not ideal, and as such, RISC OS Open Limited wants to take a big leap to address this. Since 2011, ROOL has successfully delivered dozens of community-funded improvements through its bounty scheme. While this model has enabled steady progress, it is not suited to the scale of work now required to modernise RISC OS. The Moonshots initiative represents a fundamental shift: focused, multi-year development projects undertaken by full-time engineers. The first Moonshot aims to make the RISC OS source code portable and compatible with 64-bit Arm platforms, a prerequisite for future hardware support. ROOL has already scoped the work, identified key milestones, and built cost models based on realistic employment and project management needs. Steve Revill in a ROOL press release They're going to need a dedicated team of several developers working over the course of several years to port RISC OS to 64-bit ARM. That's going to require quite a bit of money, manpower, and expertise, and considering ROOL has only collected about £100,000 worth of donations over the past 14 years, I can see why they're aiming to go big for this effort. All these giant technology corporations with trillion dollar stock valuations are currently relying on ARM technology, so you'd think they could empty a few socks and cough up a few million to get this effort funded properly, but alas, we all know that's not going to happen. I hope ROOL can make this work. RISC OS is a ton of fun to use, and occupies a unique place in computing history. I would be incredibly sad to see technological progress leave it behind, when what amounts to chump change for so many wealthy companies and individuals could save it.
Do you want to install Windows 11 without internet access or without an online Microsoft Account? It seems Microsoft really doesn't want you to, as it has removed a very common and popular way of bypassing this requirement. In the release notes for the latest builds from the Dev and Beta channels, the company notes: We're removing the bypassnro.cmd script from the build to enhance security and user experience of Windows 11. This change ensures that all users exit setup with internet connectivity and a Microsoft Account. Let me blow your minds and state that I don't think online accounts for an operating system are inherently a bad idea. I would love it if I could install Fedora KDE on a new machine, optionally log into some online Fedora Account", and have my customisations and applications synchronise automatically. It would save me some time and effort, and assuming it's all properly encrypted and secured, I don't think the risk factors are particularly high. The keyword here is, of course, optionally. Microsoft wants every Windows 11 user to have a Microsoft Account instead of a local account, and would rather not make it optional at all. Of course, this is still Microsoft, a company wholly incapable of doing anything right when it comes to operating systems, so even making this script available again during installation is stupidly easy. It took a few nerds mere moments to discover you could just make some registry changes during installation, reboot, and have the script return to its rightful place. Oh Microsoft. Never change.
Blue95 is a modern and lightweight desktop experience that is reminiscent of a bygone era of computing. Based on Fedora Atomic Xfce with the Chicago95 theme. Blue95 GitHub page Exactly as it says on the tin. This is by far the easiest way to get the excellent Chicago95 theme for Xfce set up and working in a polished way, and it also contains a few different application choices from the regular Fedora Xfce desktop to improve the illusion even further.
I've complained about the utter inscrutability of the Windows release process for a long time, with Microsoft seemingly using channels, build numbers, code names, date-based version numbers, and so on interchangeably, making it incredibly hard to keep track of what is being released when. It turns out even Microsoft itself started losing track, because it's now released a roadmap for Windows 11 development. In the roadmap tool - of course it's a tool - you can select a platform, which isn't x86 or ARM, but Windows PC or Copilot+ PC, a version (23H2 or 24H2 for now), a status (In preview, Gradually rolling out, or Generally available), and a channel (Canary, Dev, Beta, or Retail), after which the roadmap tool will list whatever features match those criteria. Do you now see why people might want such a tool to keep track of what the hell is going on with Windows? Anyway, as the date-based version numbers - 23H2 and 24H2 - may already make clear, this seems more like a roadmap about where development's been than where development's going. The problem for Microsoft, of course, is that it maintains several different Windows variants with different feature sets and update schedules, and users, too, can of course opt to stick to certain versions before moving on. The end result is this spaghetti, which makes it hard to untangle when you're getting which feature. Anyway, if you're elbow-deep in the Windows spaghetti, this tool may be of use to you.
How do you create a brain drain and lose your status as eminent destination for scientists and researchers? The United States seems to be sending out questionnaires to researchers at universities and research institutes outside of the United States, asking them about their political leanings. Dutch universities are strongly advising Dutch researchers not to respond to the questionnaires, and warn that they are designed to stifle free speech and independent research through intimidation. Universities of the Netherlands (UNL) has also warned researchers about the questionnaire. The USGS questionnaire asks, for example, whether the researcher's organisation works with entities associated with 'communist, socialist, or totalitarian parties', whether the research project has taken 'appropriate measures' to defend against 'gender ideology' and whether the project has 'measurable benefits for US domestic industries, workforce, or economic sectors'. Universiteit Leiden Researchers trying to enter the United States are also facing intimidation tactics, with the United States government going so far as to refuse entry to scientists critical of the Trump regime: A French scientist was denied entry to the US this month after immigration officers at an airport searched his phone and found messages in which he had expressed criticism of the Trump administration, said a French minister. "I learned with concern that a French researcher who was traveling to a conference near Houston was denied entry to the United States before being expelled," Philippe Baptiste, France's minister of higher education and research, said in a statement on Monday to Agence France-Presse published by Le Monde. Robert Mackey at the Guardian Being denied entry is one thing - being arrested and sent to a string of prisons is another, like this Canadian woman: Our next stop was Arizona, the San Luis Regional Detention Center. The transfer process lasted 24 hours, a sleepless, grueling ordeal. This time, men were transported with us. Roughly 50 of us were crammed into a prison bus for the next five hours, packed together - women in the front, men in the back. We were bound in chains that wrapped tightly around our waists, with our cuffed hands secured to our bodies and shackles restraining our feet, forcing every movement into a slow, clinking struggle. Jasmine Mooney at the Guardian If you're a scientist or researcher planning on going to a conference in the US (or, say, a developer wanting to go to a tech conference), you should reconsider. Even if your papers are in order, you could end up on a plane to a concentration camp in El Salvador before you can even call a lawyer - while being told that any judge standing up for your rights should be impeached. The United States' war on free speech, science, and research goes far beyond intimidating individual scientists and researchers. The Trump regime is actively erasing and deleting entire fields of science, most notably anything involving things like climate and gender, and openly attacking and cutting funding to universities that disagree with the Trump regime. Almost immediately after being sworn in as president on 20 January, Trump put his signature to piles of executive orders cancelling or freezing tens of billions of dollars in funding for research and international assistance, and putting the seal on thousands of lay-offs.
Orwellian restrictions have been placed on research, including bans on studies that mention particular words relating to sex and gender, race, disability and other protected characteristics. Nature US President Donald Trump's latest war on the climate includes withdrawing support for any research that mentions the word. He has also launched a purge on government websites hosting climate data, in an apparent attempt to make the evidence disappear. Corey J. A. Bradshaw at The Conversation The Trump administration has fired hundreds of workers at the National Oceanic and Atmospheric Administration (Noaa), the US's pre-eminent climate research agency housed within the Department of Commerce, the Guardian has learned. "There is no plan or thought into how to continue to deliver science or service on weather, severe storms and events, conservation and management of our coasts and ocean life and much more," he said. "Let's not pretend this is about efficiency, quality of work or cost savings because none of those false justifications are remotely true." Dharna Noor and Gabrielle Canon at the Guardian Intimidating current scientists isn't enough, either - the scientists of the future must also suffer: US President Donald Trump has signed an executive order to dismantle the Department of Education, fulfilling a campaign pledge and a long-cherished goal of some conservatives. In its statement, the American Federation of Teachers said: "No-one likes bureaucracy, and everyone's in favour of more efficiency, so let's find ways to accomplish that. But don't use a war on 'woke' to attack the children living in poverty and the children with disabilities." Ana Faguy at the BBC But what about intimidating university students who don't fall in line with the regime? Well, we can't forget about those, now, can we? After immigration agents detained Columbia University graduate student Mahmoud Khalil over his involvement in pro-Palestine protests on campus, President Donald Trump promised it was just the beginning. The Department of Homeland Security (DHS) has since arrested at least two more students who are in the country on visas - one of whom had recently sued the Trump administration on First Amendment grounds. Gaby Del Valle at The Verge A Cornell University PhD student earlier this month sued the Trump administration seeking to stop the president's order aimed at foreign students accused of "antisemitism". Days later, lawyers at the justice department emailed to request that the student "surrender" to immigration officials. Maanvi Singh at the Guardian These are just a small selection of stories, and I could've picked a dozen more still if I wanted to. The point should be squarely (roundly?) driven home by now: the United States government seems to be doing everything in its power to scare off the very people an economy based on science, research, and knowledge depends on.
KDE's login manager, SDDM, has its share of problems, and as such, a number of KDE developers are working on a replacement to fix many of these long-standing issues. So, what exactly is wrong with SDDM as it exists today? With SDDM, power management is reinvented from scratch with bespoke configuration. We can't integrate with Plasma's network management, power management, volume controls, or brightness controls without reinventing them in the desktop-agnostic backend. SDDM was already having to duplicate too much functionality we have in KDE, which was very frustrating when we're left maintaining it. David Edmundson On top of that, theming is also a big issue with SDDM, as it doesn't adopt any of the existing Plasma themes, wallpapers, and so on, forcing users to manually make these changes for SDDM, and forcing theme developers to make custom themes just for SDDM instead of it just adopting Plasma's settings. The new login manager they're working on will instead make use of existing Plasma components and be brought up like Plasma itself, too. For now, the SDDM replacement is roughly at feature parity with SDDM, but it's by no means ready for widespread adoption by distributions or users. Developers interested in trying it out can do so, though, and as it mostly looks like the existing default SDDM setup, you won't even notice anything different in day-to-day use.
Up until now, Google developed several components of Android out in the open, as part of AOSP, while developing everything else behind closed doors, only releasing the source code once the final new Android version was released. This meant that Google had to merge the two branches, which led to problems and issues, so Google decided it's now moving all development of Android behind closed doors. What will change is the frequency of public source code releases for specific Android components. Some components like the build system, update engine, Bluetooth stack, Virtualization framework, and SELinux configuration are currently AOSP-first, meaning they're developed fully in public. Most Android components like the core OS framework are primarily developed internally, although some features, such as the unlocked-only storage area API, are still developed within AOSP. Beginning next week, all Android development will occur within Google's internal branches, and the source code for changes will only be released when Google publishes a new branch containing those changes. As this is already the practice for most Android component changes, Google is simply consolidating its development efforts into a single branch. Mishaal Rahman at Android Authority This brings up a very old debate: if development happens entirely behind closed doors, with only the occasional code drop, is the software in question really open source? Technically, the answer is obviously 'yes' - there's no requirement that development take place in public. However, I'm fairly sure that when most people think of open source, they think not only of occasionally throwing chunks of code over the proverbial corporate walls, but also of open development, where everybody is free to contribute, pipe in, and follow along. Clearly, this move makes Android more closed, not less so, and it follows in a long string of changes Google has made to Android that make it ever harder to consider AOSP, the Android Open Source Project, a capable, modern mobile operating system. The Android fork of the Linux kernel will always be properly open, of course, but I have my doubts Android in and of itself will remain open source in the narrow definition for much longer, and even if it does, you have to wonder how much value it will have. I mean, Darwin, the open source base underneath macOS and iOS, is technically open source, but nobody cares because Apple made it pretty much worthless in and of itself. Anything of value is stripped out and not only developed behind closed doors, but also not released as open source, ensuring Darwin is nothing but a curiosity we sometimes remember exists. Android could be heading in the same direction. My biggest worry is Android ROMs, most notably for me personally GrapheneOS. I honestly have no idea how this will impact such projects.
Some more light reading: While it was already established that the open source supply chain was often the target of malicious actors, what is stunning is the amount of energy invested by Jia Tan to gain the trust of the maintainer of the xz project, acquire push access to the repository and then among other perfectly legitimate contributions insert - piece by piece - the code for a very sophisticated and obfuscated backdoor. This should be a wake up call for the OSS community. We should consider the open source supply chain a high value target for powerful threat actors, and to collectively find countermeasures against such attacks. In this article, I'll discuss the inner workings of the xz backdoor and how I think we could have mechanically detected it thanks to build reproducibility. Julien Malka It's a very detailed look at the situation and what Nix could do to prevent it in the future.
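The core idea behind detection-by-reproducibility is simple enough to sketch: if two independent builds of the same pinned source and toolchain don't produce bit-for-bit identical artifacts, something was injected somewhere. A minimal illustration (the file paths are hypothetical, and this is not Malka's actual Nix setup):

```python
#!/usr/bin/env python3
"""Sketch of the core check behind build reproducibility: hash the same
artifact produced by two independent builds and flag any divergence. A real
setup (e.g. Nix) pins the toolchain and inputs, so the only remaining source
of difference is the build itself."""

import hashlib
import sys

def sha256sum(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def main(a: str, b: str) -> int:
    da, db = sha256sum(a), sha256sum(b)
    print(f"{da}  {a}\n{db}  {b}")
    if da != db:
        print("MISMATCH: builds are not reproducible; inspect the difference.")
        return 1
    print("OK: bit-for-bit identical.")
    return 0

if __name__ == "__main__":
    # e.g. compare liblzma built on two independent machines:
    #   python3 repro_check.py build-a/liblzma.so.5 build-b/liblzma.so.5
    sys.exit(main(sys.argv[1], sys.argv[2]))
```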
What if you want to use a web browser like Dillo, which lacks JavaScript support and can't play audio or video inside the browser? Dillo doesn't have the capability to play audio or video directly from the browser, however it can easily offload this task to other programs. This page collects some examples of how to watch videos and listen to audio tracks or podcasts by using an external player program. In particular we will cover mpv with yt-dlp which supports YouTube and Bandcamp among many other sites. Dillo website The way Dillo handles this feels very UNIX-y, in that it will call an external program - mpv and yt-dlp, for instance - to play a YouTube video from an "Open in mpv" option in the right-click menu for a link. It's nothing earth-shattering or revolutionary, of course, but I very much appreciate that Dillo bakes this functionality right in, allowing you to define any such actions and add them to the context menu.
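The general offload pattern is trivial to sketch. To be clear, this is not Dillo's actual configuration mechanism (the linked page documents that); it just shows the idea of handing a link to an external player, assuming mpv and yt-dlp are installed and on your PATH:

```python
#!/usr/bin/env python3
"""Sketch of the browser-offload pattern described above: take a link and
hand it to an external player instead of rendering it in the browser."""

import subprocess
import sys

def open_in_mpv(url: str) -> None:
    # mpv uses yt-dlp behind the scenes to resolve YouTube, Bandcamp, and
    # similar pages, so handing it the page URL is enough.
    subprocess.run(["mpv", "--force-window=yes", url], check=False)

if __name__ == "__main__":
    open_in_mpv(sys.argv[1])
```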
This whitepaper provides an introduction to and overview of seL4. We explain what seL4 is (and is not) and explore its defining features. We explain what makes seL4 uniquely qualified as the operating-system kernel of choice for security- and safety-critical systems, and generally embedded and cyber-physical systems. In particular, we explain seL4's assurance story, its security- and safety-relevant features, and its benchmark-setting performance. We also discuss typical usage scenarios, including incremental cyber retrofit of legacy systems. Gernot Heiser Some light reading for Monday.
It's been over three years since the last ReactOS release, but today, in honour of the first commit to the project by the oldest, still active contributor, the project released ReactOS 0.4.15. Of course, there's been a steady stream of nightly releases, so it's not like the project stalled or anything, but having a proper release is always nice to have. We are pleased to announce the release of ReactOS 0.4.15! This release offers Plug and Play fixes, audio fixes, memory management fixes, registry healing, improvements to accessories and system tools including Notepad, Paint, RAPPS, the Input Method Editor, and shell improvements. ReactOS 0.4.15 release announcement There's a lot in this one, as the long gap between releases indicates. Thanks to the major changes in the plug and play subsystem of the kernel, ReactOS now supports more third party drivers, and it can now boot from USB and chipsets with EHCI, OHCI, and UHCI controllers. The open source AC'97 driver from the Windows Driver Kit has also been ported to ReactOS to enable sound on VirtualBox and old motherboards. The open source FAT driver from the same WDK has also been ported, which is a massive improvement over the old one. ReactOS can now also make use of custom IMEs, ZIP archive support has been integrated into the shell, and a new default visual style has been chosen. There's a lot more in this release, though, and since it was branched over six months ago, there are a lot of improvements made since that time that are not yet part of this release, like a graphical installer, UEFI and SMP support, a new NTFS driver, and a ton more. In other words - don't let the long time between releases fool you; there's a lot going on in the ReactOS world.
Nvidia releasing its Linux graphics driver as open source is already bearing fruit for alternative operating systems. As many people already knows, Nvidia published their kernel driver under MIT license: GitHub - NVIDIA/open-gpu-kernel-modules: NVIDIA Linux open GPU kernel module source (I will call it NVRM). This driver is very portable and its platform-independent part can be compiled for Haiku with minor effort (but it need to implement OS-specific binding code to be actually useful). This is very valuable for Haiku because Linux kernel GPU drivers are very hard to port and it heavily depends on Linux kernel internals. Unfortunately userland OpenGL/Vulkan driver source code is not published. But as part of Mesa 3D project, new Vulkan driver NVK" is being developed and is functional already. Mesa NVK driver is using Nouveau as kernel driver, so it can't be directly used with NVRM kernel driver. NVK source code provides platform abstraction that allows to implement support of other kernel drivers such as NVRM. I finally managed to make initial port NVRM kernel driver to Haiku and added initial NVRM API support to Mesa NVK Vulkan driver, so NVRM and NVK can work together. Some simple Vulkan tests are working. X512 on the Haiku forums Incredibly impressive, and a huge milestone for the Haiku operating system. It supports any Nvidia GPU from the Turing architecture, which I think means Nvidia RTX 20xx and newer, since they have a required microcontroller older GPUs do not have. Of course, this is an early port and a lot of work remains to be done, but it could lead to huge things for Haiku.
SoftBank Group Corp. today announced that it will acquire Ampere Computing, a leading independent silicon design company, in an all-cash transaction valued at $6.5 billion. Under the terms of the agreement, Ampere will operate as a wholly owned subsidiary of SoftBank Group and retain its name. As part of the transaction, Ampere's lead investors - Carlyle and Oracle - are selling their respective positions in Ampere. SoftBank and Ampere Computing press release Despite not really knowing what SoftBank does and what their long-term goals are - I doubt anyone does - I hope this at the very least provides Ampere with the funds needed to expand its business. At this point, the only serious options for Arm-based hardware are either Apple or Qualcomm, and we could really use more players. Ampere's hardware is impressive, but difficult to buy and expensive, and graphics card support is patchy, at best. What Ampere needs is more investment, and more OEMs picking up their chips. An Ampere workstation is incredibly high on my list of machines to test for OSNews (perhaps a System76 model?), and it'd be great if economies of scale worked to bring the prices down, possibly allowing Ampere to develop cheaper, more affordable variants for us mere mortals, too. I would love to build an Arm workstation in much the same way we build regular x86 PCs today, but I feel like that's still far off. I have no idea if SoftBank is the right kind of company to make this possible, but one can dream.
What do SourceHut, GNOME's GitLab, and KDE's GitLab have in common, other than all three of them being forges? Well, it turns out all three of them have been dealing with immense amounts of traffic from "AI" scrapers, who are effectively performing DDoS attacks with such ferocity it's bringing down the infrastructures of these major open source projects. Being open source, and thus publicly accessible, means these scrapers have unlimited access, unlike with proprietary projects. These "AI" scrapers do not respect robots.txt, and have so many expensive endpoints it's putting insane amounts of pressure on infrastructure. Of course, they use random user agents from an effectively infinite number of IP addresses. Blocking is a game of whack-a-mole you can't win, and so the GNOME project is using a rather nuclear option called Anubis now, which aims to block "AI" scrapers with a heavy-handed approach that sometimes blocks real, genuine users as well. The numbers are insane, as Niccolo Venerandi at Libre News details. Over Mastodon, one GNOME sysadmin, Bart Piotrowski, kindly shared some numbers to let people fully understand the scope of the problem. According to him, in around two hours and a half they received 81k total requests, and out of those only 3% passed Anubi's proof of work, hinting at 97% of the traffic being bots - an insane number! Niccolo Venerandi at Libre News Fedora is another project dealing with these attacks, with infrastructure sometimes being down for weeks as a result. Inkscape, LWN, Frama Software, Diaspora, and many more - they're all dealing with the same problem: the vast majority of the traffic to their websites and infrastructure now comes from attacks by "AI" scrapers. Sadly, there doesn't seem to be a reliable way to defend against these attacks just yet, so sysadmins and webmasters are wasting a ton of time, money, and resources fending off the hungry "AI" hordes. These "AI" companies are raking in billions and billions of dollars from investors and governments the world over, trying to build dead-end text generators while sucking up huge amounts of data and wasting massive amounts of resources from, in this case, open source projects. If no other solutions can be found, the end game here could be that open source projects will start to make their bug reporting tools and code repositories much harder and potentially even impossible to access without jumping through a massive amount of hoops. Everything about this "AI" bubble is gross, and I can't wait for this bubble to pop so a semblance of sanity can return to the technology world. Until the next hype train rolls into the station, of course. As is tradition.
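For the curious, the general mechanism behind a proof-of-work wall is easy to sketch. This is not Anubis's actual challenge format, just the hashcash-style idea it builds on: the client has to burn CPU finding a nonce before the server will serve the page, which is negligible for one visitor and ruinous at scraper volume:

```python
#!/usr/bin/env python3
"""Sketch of a hashcash-style proof-of-work challenge. The client must find
a nonce whose SHA-256 over (challenge + nonce) starts with a given number of
zero bits: trivial for a single page view, expensive across millions of
scraper requests."""

import hashlib
import os

DIFFICULTY_BITS = 16  # illustrative; higher means more client-side work

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty: int = DIFFICULTY_BITS) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = DIFFICULTY_BITS) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

if __name__ == "__main__":
    challenge = os.urandom(16)        # issued by the server per visitor
    nonce = solve(challenge)          # done in the visitor's browser
    assert verify(challenge, nonce)   # cheap check on the server
    print(f"solved with nonce {nonce}")
```

The asymmetry is the whole point: verification is a single hash for the server, while solving costs thousands of hashes per request for whoever is hammering the site.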
There's no escaping Rust, and the language is leaving its mark everywhere. This time around, Chrome has replaced its use of FreeType with Skrifa, a Rust-based replacement. Skrifa is written in Rust, and created as a replacement for FreeType to make font processing in Chrome secure for all our users. Skifra takes advantage of Rust's memory safety, and lets us iterate faster on font technology improvements in Chrome. Moving from FreeType to Skrifa allows us to be both agile and fearless when making changes to our font code. We now spend far less time fixing security bugs, resulting in faster updates, and better code quality. Dominik Rottsches, Rod Sheeter, and Chad Brokaw The move to Skrifa is already complete, and it's being used now by Chrome users on Linux, Android, and ChromeOS, and as a fallback for users on Windows and macOS. The reasons for this change are the same as they always are for replacing existing tools with new tools written in Rust: security. FreeType is a security risk for Chrome, and by replacing it with something written in a memory-safe language like Rust, Google was able to eliminate a whole slew of types of security issues. To ensure rendering correctness, Google performed a ton of pixel comparison tests to compare FreeType output to Skrifa output. On top of that, Google is continuously running similar tests to ensure no quality degradation sneaks into Skrifa as time progresses. Whether anyone likes Rust or not, the reality of the matter is that using Rust provides tangible benefits that reduce cost and lower security risks, and as such, its use will keep increasing, and tried and true tools will continue to be replaced by Rust counterparts.
Long ago, during the time of creation, I confidently waved my hand and allocated a 1GB ESP partition and a 1GB boot partition, thinking to myself with a confident smile that this would surely be more than enough for the foreseeable future. However, this foreseeable future quickly vanished along with my smile. What was bound to happen eventually came, but I didn't expect it to arrive so soon. What could possibly require such a large boot partition? And how should we resolve this? Here, I would like to introduce the boot partition issue I encountered, as well as temporary coping methods and final solutions, mentioning the problems encountered along the way for reference. fernvenue Some of us will definitely run into this issue at some point, so if you're doing a fresh installation it might make sense to allocate a bit more space to your boot partition. If you have a running system and are bumping into the limitations of your boot partition and don't want to reinstall, the linked article provides some possible solutions.
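Before resorting to the fixes in the linked article, it helps to see what is actually eating the space; old kernels and their initramfs images are the usual culprits. A quick sketch, assuming the standard Linux mount points (run it as root if your ESP isn't world-readable):

```python
#!/usr/bin/env python3
"""Quick sketch for diagnosing a full boot partition: list what occupies
/boot (and the ESP), largest files first."""

from pathlib import Path

MOUNTS = ["/boot", "/boot/efi"]  # typical locations for the boot partition and ESP

def report(mount: str, top: int = 10) -> None:
    root = Path(mount)
    if not root.is_dir():
        return
    files = [(p.stat().st_size, p) for p in root.rglob("*") if p.is_file()]
    total = sum(size for size, _ in files)
    print(f"\n{mount}: {total / 2**20:.1f} MiB in {len(files)} files")
    for size, path in sorted(files, reverse=True)[:top]:
        print(f"  {size / 2**20:8.1f} MiB  {path}")

if __name__ == "__main__":
    for mount in MOUNTS:
        report(mount)
```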
One of the two major open source desktop environments, GNOME, just released version 48, and it's got some very big and welcome improvements. First and foremost, there's dynamic triple-buffering, a feature that took over five years of extensive testing to get ready. It will improve the smoothness and fluidity of animations and other movements on the screen, as it did for KDE when it landed there in the middle of last year. GNOME 48 also brings notification stacking, combining notifications from the same source, improvements to the new default image viewer such as image editing features, a number of digital well-being options, as well as the introduction of a new, basic audio player designed explicitly for quickly playing individual audio files. There's also a few changes to GNOME's text editor, and following in KDE's recent footsteps, GNOME 48 also brings HDR support. Another major change is the new default fonts. Finally, Cantarell is gone, replaced by slightly modified versions of Inter and Iosevka. Considering I absolutely adore Inter and installing and setting it as my main font is literally the first thing I do on any system that allows me to, I'm fully behind this change. Inter is exceptional in that it renders great in both high and low DPI environments, and its readability is outstanding. GNOME 48 will make its way to your distribution's repositories soon enough.
Oracle, the company owned by a guy who purchased a huge chunk of the Kingdom of Hawaii from the Americans, has released Java 24. I'll be honest and upfront: I just don't care very much at all about this, as the only interaction I've had with Java over the past, I don't know, 15 years or so, is either because of Minecraft, or because of my obsession with ancient UNIX workstations where Java programs pop up in the weirdest of places. I know Java is massive and used everywhere, but going through the list of changes and improvements does not spark any joy in me at all, and just makes me want to stick my pinky in an electrical socket to make something interesting happen. If you work with Java, you know all of this stuff already anyway, as you've been excitedly trying to impress Nick from accounting with your knowledge of Flexible Constructor Bodies and Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanisms because he's just so dreamy and you desperately want to ask him out for a hot cup of coffee, but you're not sure if he's married or has a boy or girlfriend so you're just kind of scoping things out a bit too excitedly and now you're worried you might be coming off as too desperate for his attention. Anyway, that's how offices work, right? I've never worked for anyone but myself and office settings induce a deep sense of existential dread in me, so my knowledge of office work, and Java if we're honest, may be based a bit too much on '90s sitcoms and dramas. Whatever, Java 24 is here. Do a happy dance.
As of the 18th of February, OpenVMS, known for its stability and high-availability, 47 years old and ported to 4 different CPU architectures, has a package manager! This article shows you how to use the package manager and talks about a few of its quirks. It's an early beta version, and you do notice that when using it. A small list of things I noticed, coming from a Linux (apt/yum/dnf) background: There seems to be no automatic dependency resolution and the dependencies it does list are incomplete. No update management yet, no removal of packages and no support for your own package repository, only the VSI official one. Service startup or login script changes are not done automatically. Packages with multiple installer files fail and require manual intervention. It does correctly identify the architectures, has search support and makes it way easier to install software. The time saved by downloading, manually copying and starting installation is huge, so even this early beta is a very welcome addition to OpenVMS. Remy van Elst Obviously, a way to install software packages without having to manually download them is a huge step forward for OpenVMS. The listed shortcomings might raise some eyebrows considering most of us are used to package management on Linux/BSD, which is far more advanced. Bear in mind, however, that this is a beta product, and it's quite obvious these missing essential features will be added over time. Luckily it at least lists dependencies, so let's hope automating their installation is in the works and will be available soon. I actually have an OpenVMS virtual machine set up and running, but I find using it incredibly difficult - but only because of my own lack of experience with and knowledge about OpenVMS, of course. Any experience or knowledge rooted in UNIX-based and Windows operating systems is useless here, even for the most basic of CLI tasks. If I find the time, I'd love to spend more time with it and get more acquainted with the way it works, including this new package manager.
It's barely been two months since the announcement that Pebble would return with new watches, and they're already here - well, sort of. Pebble has announced two new watches for preorder, the Core 2 Duo and the Core Time 2. The former is effectively a Pebble 2, upgraded with new internals, while the Core Time 2 is very similar, but comes with a colour e-ink display and a metal case. They're up for preorder now at $149 and $225, respectively, with the Core 2 Duo shipping in July, and the Core Time 2 shipping in December. Alongside this unveil, Eric Migicovsky, the creator of Pebble, also published a blog post detailing the trouble Pebble is having and will have with making smartwatches for iOS users. Apple effectively makes it impossible for third parties to make a proper smartwatch for iOS, since access to basic functionality you'd come to expect from such a device is locked by Apple, reserved only for its own Apple Watch. As such, Migicovsky makes it explicitly clear that iOS users who want to buy one of these new Pebbles are going to have a very degraded experience compared to Android users. Not only will Android users with Pebble have access to a ton more functionality, any Pebble features that could exist for both Android and iOS users will always come to Android first, and possibly iOS later. In fact, Migicovsky goes as far as suggesting that if you want a Pebble, you should buy an Android phone. I don't want to see any tweets or blog posts or complaints or whatever later on about this. I'm publishing this now so you can make an informed decision about whether to buy a new watch or not. If you're worried about this, the easiest solution is to buy an Android phone. Eric Migicovsky I have to hand it to Migicovsky - I love the openness about this, and the fact he's making this explicitly clear to any prospective buyers. There's no sugarcoating or PR speak to try and please Tim Cook - he's putting the blame squarely where it belongs: on Apple. It's kind of unreal to see such directness about a new product, but as a Dutch person, it feels quite natural. We need more of this style of communication in the technology world, as it makes it much clearer what you're getting - and not getting. I do hope that Pebble's Android support functions without the need for Google Play Services or other proprietary Google code, since it would be great to have a proper, open source smartwatch fully supported by de-Googled Android.
A few months after 0.27.0 was released, we've got a small update for Enlightenment today, version 0.27.1. It's a short list of bugfixes, and one tiny new feature: you can now use the scroll wheel to change the volume when your cursor is hovering over the mixer controls. That's it. That's the release.
It's taken a Herculean seven-year effort, but GIMP 3.0 has finally been released. There are so many new features, changes, and improvements in this release that it's impossible to highlight all of them. First and foremost, GIMP 3.0 marks the shift to GTK3 - this may be surprising considering GTK4 has been out for a while, but major applications such as GIMP tend to stick to more tried and true toolkit versions. GTK4 also brings with it the prickly discussion concerning a possible adoption of libadwaita, the GNOME-specific augmentations on top of GTK4. The other major change is full support for Wayland, but users of the legacy X11 windowing system don't have to worry just yet, since GIMP 3.0 supports that, too. As far as actual features go, there's a ton here. Non-destructive layer effects are one of the biggest improvements. Another big change introduced in GIMP 3.0 is non-destructive (NDE) filters. In GIMP 2.10, filters were automatically merged onto the layer, which prevented you from making further edits without repeatedly undoing your changes. Now by default, filters stay active once committed. This means you can re-edit most GEGL filters in the menu on the layer dockable without having to revert your work. You can also toggle them on or off, selectively delete them, or even merge them all down destructively. If you prefer the original GIMP 2.10 workflow, you can select the "Merge Filters" option when applying a filter instead. GIMP 3.0 release notes There's also much better color space management, better layer management and control, the user interface has been improved across the board, and support for a ton of file formats has been added, from macOS icons to Amiga ILBM/IFF formats, and much more. GIMP 3.0 also improves compatibility with Photoshop files, and it can import more palette formats, including proprietary ones like Adobe Color Book (ACB) and Adobe Swatch Exchange (ASE). This is just a small selection, as GIMP 3.0 truly is a massive update. It's available for Linux, Windows, and macOS, and if you wait a few days it'll probably show up in your distribution's package repositories.
Settle down children, it's time for another great article by Cameron Kaiser. This time, they're going to tell us about the DEC Professional 380 running PRO/VENIX. The Pro 380 upgraded to the beefier J-11 ("Jaws") CPU from the PDP-11/73, running two to three times faster than the 325 and 350. It had faster RAM and came with more of it, and boasted quicker graphics with double the vertical resolution built right into the logic board. The 380 still has its faults, notably being two-thirds the speed of the 11/73 and having no cache, plus all of the 325/350's incompatibilities. Taken on its merits, though, it's a tank of a machine, a reasonably powerful workstation, and the most practical PDP-adjacent thing you can actually slap on a (large) desk. This particular unit is one of the few artifacts I have left from a massive DEC haul almost twelve years ago. It runs PRO/VENIX, the only official DEC Unix option for the Pros, but in its less common final release (we'll talk about versions of Venix). I don't trust the clanky ST-506 hard drive anymore, so today we'll convert it to solid state and double its base RAM to make it even more professional, and then play around in VENIX some for a taste of old-school classic Unix - after, of course, some history. Cameron Kaiser Detailed, interesting, fascinating, and full of photos as always.
In 1994, a single Macintosh Performa model, the 550, came from the factory with a dedicated, hidden recovery partition that contained a System 7 system folder and a small application that would be set as bootable if the main operating system failed to boot. This application would then run, allowing you to recover your Mac using the system folder inside the recovery partition. This feature was apparently so obscure, few people knew it existed, and nobody had access to the original contents of the recovery partition anymore. It took Doug Brown a lot of searching to find a copy of this recovery partition. The issue is that nobody really knows how this partition is populated with the recovery data, so the only way to explore its contents was to somehow find a Performa 550 hard drive with a specific version of Mac OS that had never been reformatted after leaving the factory. The thing is, this whole functionality was super obscure. It's understandable that people weren't familiar with it. Apple publicly stated it was only included with this one specific Performa model. Their own documentation also said that it would be lost if you reformatted the hard drive. It was hiding in the background, so nobody really knew it was there, let alone thought about saving it. Also, I can say that the first thing a lot of people do when they obtain a classic computer is erase it in order to restore it to the factory state. Little did anyone know, if they reformatted the hard drive on a Performa 550, they could have been wiping out rare data that hadn't been preserved! Doug Brown Brown found a copy, and managed to get the whole original functionality working again. It's a fairly basic way of doing this, but we shouldn't forget we're talking 1994 here, and I don't think any other operating system at the time had the ability to recover from an unbootable state like this. Like Brown, I wonder why it was abandoned so quickly. Perhaps Apple was unwilling to sacrifice the hard drive space? Groundbreaking or not, it's still great to have this recovered and preserved for the ages.
It's rare in this day and age that proprietary operating system vendors like Microsoft and Apple release updates you're more than happy to install, but considering even a broken clock is right twice a day, we've got one for you today. Microsoft released KB5053598 (OS Build 26100.3476), which "addresses security issues for your Windows operating system". One of the "security issues" this update addresses is Microsoft's "AI" text generator, Copilot. To address this glaring security issue, this update removes Copilot from your Windows installation altogether. Sadly, it's only by mistake, and not by design. We're aware of an issue with the Microsoft Copilot app affecting some devices. The app is unintentionally uninstalled and unpinned from the taskbar. Microsoft is working on a resolution to address this issue. In the meantime, affected users can reinstall the app from the Microsoft Store and manually pin it to the taskbar. Microsoft Support Well, at least until Microsoft "fixes" this "issue" with KB5053598, consider this update a simple way to get rid of Copilot. Microsoft accidentally cared about its users for once, so cherish this moment - it won't happen again.
It's been a while, but there's a new release of Ironclad, the formally verified, hard real-time capable kernel written in SPARK and Ada. Aside from the usual bugfixes, this release moves Ironclad from Multiboot to Limine, adds x86_64 ACPI support for poweroff and reboot, and brings improvements to PTY support, the VFS layer, and much more. The easiest way to try out Ironclad is to download Gloire, a distribution that uses Ironclad and the GNU tools. It can be installed both in a virtual machine and on real hardware.
Mozilla's actions have been rubbing many Firefox fans the wrong way as of late, and inspiring them to look for alternatives. There are many choices for users who are looking for a browser that isn't part of the Chrome monoculture but is full-featured and suitable for day-to-day use. For those who are willing to stay in the "Firefox family", there are a number of good options that have taken vastly different approaches. This includes GNU IceCat, Floorp, LibreWolf, and Zen. Joe Brockmeier It's a tough situation, as we're all aware. We don't want the Chrome monoculture to get any worse, but with Mozilla's ever-increasing number of dubious decisions, which some people have been warning about for years, it's only natural for people to look elsewhere. Once you decide to drop Firefox, there's really nowhere else to go but Chrome and Chrome skins, or the various Firefox skins. As an aside, I really don't think these browsers should be called "Firefox forks"; all they really do is change some default settings, add in an extension or two, and make some small UI tweaks. They may qualify as forks in a technical sense, but I think that overstates the differentiation they offer. Late last year, I tried my best to switch to KDE's Falkon web browser, but after a few months the issues, niggles, and shortcomings just started to get under my skin. I switched back to Firefox for a little while, contemplating where to go from there. Recently, I decided to hop onto the Firefox skin train just to get rid of some of the Mozilla telemetry and the useless "features" they've been adding to Firefox, and after some careful consideration I decided to go with Waterfox. Waterfox strikes a nice balance between the strict choices of LibreWolf - which most users of LibreWolf seem to undo, if my timeline is anything to go by - and the choices Mozilla itself makes. On top of that, Waterfox enables a few very nice KDE integrations Firefox itself and the other Firefox skins don't have, making it a perfect choice for KDE users. Sadly, Waterfox isn't packaged for most Linux distributions, so you'll have to resort to a third-party packager. In the end, none of the Firefox skins really address the core problem, as they're all still just Firefox. The problem with Firefox is Mozilla, and no amount of skins is going to change that.
Google's biggest announcement today, at least as it pertains to Android, is that the Vulkan graphics API is now the official graphics API for Android. Vulkan is a modern, low-overhead, cross-platform 3D graphics and compute API that provides developers with more direct control over the GPU than older APIs like OpenGL. This increased control allows for significantly improved performance, especially in multi-threaded applications, by reducing CPU overhead. In contrast, OpenGL is an older, higher-level API that abstracts away many of the low-level details of the GPU, making it easier to use but potentially less efficient. Essentially, Vulkan prioritizes performance and explicit hardware control, while OpenGL emphasizes ease of use and cross-platform compatibility. Mishaal Rahman at Android Authority Android has supported Vulkan since Android 7.0, released in 2016, so it's not like we're looking at something earth-shattering here. The issue has been, as always with Android, fragmentation: it's taken this long for about 85% of Android devices currently in use to support Vulkan in the first place. In other words, Google might've wanted to standardise on Vulkan much sooner, but if only a relatively small number of Android devices support it, that's going to be a hard sell. In any event, from here on out, every application or game that wants to use the GPU on Android will have to do so through Vulkan, including everything inside Android. It's still going to be a long process, though, as the requirement to use Vulkan will not fully come into effect until Android 17, and even then there will be exceptions for certain applications. Android tends to implement changes like this in phases, and the move to Vulkan is no different. All of this does mean that older devices with GPUs that do not support Vulkan, or at least not properly, will not be able to be updated to the Vulkan-only releases of Android, but let's be real here - those kinds of devices were never going to be updated anyway.
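To give a sense of what "more direct control" means in practice, here is a minimal, illustrative sketch in C of the very first step of any Vulkan program: creating an instance. The application name and version numbers are placeholders chosen purely for this example; the point is that even this initial step has to be spelled out explicitly by the application, where an OpenGL (ES) context would hand you far more implicit state.

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* Vulkan makes the application describe itself explicitly up front.
       Name and versions below are illustrative placeholders. */
    VkApplicationInfo app_info = {
        .sType              = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName   = "osnews-demo",
        .applicationVersion = VK_MAKE_VERSION(1, 0, 0),
        .pEngineName        = "none",
        .engineVersion      = VK_MAKE_VERSION(1, 0, 0),
        .apiVersion         = VK_API_VERSION_1_1,
    };

    VkInstanceCreateInfo create_info = {
        .sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app_info,
        /* Extensions and validation layers would be listed here explicitly;
           nothing is enabled implicitly, unlike a typical OpenGL context. */
    };

    VkInstance instance;
    if (vkCreateInstance(&create_info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    printf("Vulkan instance created\n");
    vkDestroyInstance(instance, NULL);
    return 0;
}
```

Everything that follows - devices, queues, command buffers, synchronisation - is equally explicit, which is where Vulkan's lower CPU overhead and better multi-threading behaviour come from, and also why it's harder to use than OpenGL.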
Ted Unangst published "dude, where are your syscalls?" on flak yesterday, with a neat demonstration of OpenBSD's pinsyscall security feature, whereby only pre-registered addresses are allowed to make system calls. Whether it strengthens or weakens security is up for debate, but regardless it's an interesting, low-level programming challenge. The original demo is fragile for multiple reasons, and requires manually locating and entering addresses for each build. In this article I show how to fix it. To prove that it's robust, I ported an entire, real application to use raw system calls on OpenBSD. Chris Wellons Some light reading for the weekend.
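For a flavour of what a "raw system call" looks like - and why it's normally something only libc does on your behalf - here is a minimal sketch in C for OpenBSD on amd64. It assumes write(2) is system call number 4 (as defined in <sys/syscall.h>) and uses the standard amd64 syscall convention; per the feature described above, a syscall instruction executed from an address that hasn't been registered with the kernel would be refused, which is exactly what the linked articles poke at.

```c
/* A minimal, illustrative raw write(2) on OpenBSD/amd64, bypassing libc.
 * Assumes SYS_write == 4 (see <sys/syscall.h>) and the usual amd64
 * convention: syscall number in rax, arguments in rdi/rsi/rdx, with
 * rcx and r11 clobbered by the syscall instruction. With syscall
 * pinning enforced, this call site would need to be pre-registered
 * with the kernel. */
#include <sys/syscall.h>

static long raw_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"((long)SYS_write), "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    static const char msg[] = "hello from a raw syscall\n";
    raw_write(1, msg, sizeof(msg) - 1);
    return 0;
}
```

The fragility both articles wrestle with comes from telling the kernel, for every build, exactly where call sites like this one live in the binary.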
Elon Musk's Tesla is waving a red flag, warning that Donald Trump's trade war risks dooming US electric vehicle makers, triggering job losses, and hurting the economy. In an unsigned letter to the US Trade Representative (USTR), Tesla cautioned that Trump's tariffs could increase costs of manufacturing EVs in the US and forecast that any retaliatory tariffs from other nations could spike costs of exports. Ashley Belanger at Ars Technica Back in 2020, scientists at the University of Twente, The Netherlands, created the smallest string instrument that can produce tones audible by human ears when amplified. Its strings were a mere micrometer thin, or one millionth of a meter, and about half to one millimeter long. Using a system of tiny weights and combs producing tiny vibrations, tones can be created. And yet, this tiny violin still isn't small enough for Tesla.
We've got the Haiku activity report covering February, and aside from the usual slew of bug fixes and minor improvements, there's one massive improvement that deserves attention. waddlesplash continued his ongoing memory management improvements, fixes, and cleanups, implementing more cases of resizing (expanding/shrinking) memory areas when there's a virtual memory reservation adjacent to them (and writing tests for these cases) in the kernel. These changes were the last remaining piece needed before the new malloc implementation for userland (mostly based on OpenBSD's malloc, but with a few additional optimizations and a Haiku-specific process-global cache added) could be merged and turned on by default. There were a number of followup fixes to the kernel and the new allocator's "glue" and global caching logic since, but the allocator has been in use in the nightlies for a few weeks with no serious issues. It provides modest performance improvements over the old allocator in most cases, and in some cases that were pathological for the old allocator (GCC LTO appears to have been one), provides order-of-magnitude (or more) performance improvements. waddlesplash on the Haiku website Haiku also continues replacing implementations of standard C functions with those from musl, Haiku can now be built on FreeBSD and Linux distributions that use musl, C5/C6 C-states were disabled for Intel Skylake to fix boot problems on that platform, and many, many more changes. There's also bad news for fans of Gopher: support for the protocol was removed from WebPositive, Haiku's native web browser.
When I checked where Windows Defender had actually detected the threat, it was in the Fan Control app I use to intelligently cool my PC. Windows Defender had broken it, and that's why my fans were running amok. For others, the threat was detected in Razer Synapse, SteelSeries Engine, OpenRGB, Libre Hardware Monitor, CapFrameX, MSI Afterburner, OmenMon, FanCtrl, ZenTimings, and Panorama9, among many others. "As of now, all third-party/open-source hardware monitoring softwares are screwed," Fan Control developer Remi Mercier tells me. Sean Hollister at The Verge Anyone reading OSNews can probably solve this puzzle. Many fan control and hardware monitoring applications for Windows make use of the same open source driver: WinRing0. Uniquely, this kernel-level driver is signed, since it's from back in the days when developers could self-sign these sorts of drivers, but the signed version has a known vulnerability that's quite dangerous considering it's a kernel-level driver. The vulnerability has been fixed, but signing this new version - and keeping it signed - is a big ordeal and quite expensive, since these days, drivers have to be signed by Microsoft. And it just so happens that Windows Defender has started marking this driver, and thus any tool that uses it, as dangerous, sending it to quarantine. The result is failing hardware monitoring and fan control applications for quite a few Windows users. Some companies have invested in developing their own closed-source alternatives, but they're not sharing them. Luckily, Windows OEM iBuyPower says it's trying to get the patched version of WinRing0 signed, and if that happens, they will share it back with the community. Classy. For now, though, hardware monitoring and fan control on Windows might be a bit of an ordeal.
One of the biggest behind-the-scenes changes in the upcoming Plasma 6.4 release is the split of kwin_x11 and kwin_wayland codebases. With this blog post, I would like to delve into what led us to making such a decision and what it means for the future of kwin_x11. Vlad Zahorodnii For the most part, this change won't mean much for users of KWin on either Wayland or X11, at least for now. For the remainder of the Plasma 6.x life cycle, kwin_x11 will be maintained, and despite the split, you can continue to have both kwin_x11 and kwin_wayland installed and use them interchangeably. Don't expect any new features, though; kwin_x11 will get the usual bug fixes, some backports, and they'll make sure it keeps working with any new KDE frameworks introduced during the 6.x cycle, but that's all you're going to get if you're using KDE on X11. There's one area where this split might cause problems, though, and that's if you're using a particular type of KWin extension. While KWin extensions written in JavaScript and QML are backend agnostic and can be used without issues on both variants of KWin, extensions written in C++ are not. These extensions need to be coded specifically for either kwin_x11 or kwin_wayland, and with Wayland being the default for KDE, this may mean some of these extensions will leave X11 users behind to reduce the maintenance burden. It seems that very few people are still using KDE on X11, and kwin_x11 doesn't receive much testing anymore, so it makes sense to start preparations for the inevitable deprecation. While I think the time of X11 on Linux has come and gone, it's unclear what this will mean for KDE on the BSDs. While Wayland is available on all of the BSDs in varying states of maturity, I honestly don't know if they're ready for a Wayland-only KDE at this point in time.
Ah, PuTTY. Good old reliable PuTTY. This little tool is one of those cornerstone applications in the toolbox of most of us, without any fuss, without any upsells or anti-user nonsense - it just does its job, and it has been doing its job for 30 years. Have you ever wondered, though, where PuTTY's icons come from, how they were made, and how they evolved over time? PuTTY's icon designs date from the late 1990s and early 2000s. They've never had a major stylistic redesign, but over the years, the icons have had to be re-rendered under various constraints, which made for a technical challenge as well. Simon Tatham The icons have basically not changed since the late '90s, and I think that's incredibly fitting for the kind of tool PuTTY is. It turns out people actually offer to redesign all the icons in a modern style, but that's not going to happen. People sometimes object to the entire 1990s styling, and volunteer to design us a complete set of replacements in a different style. We've never liked any of them enough to adopt them. I think that's probably because the 1990s styling is part of what makes PuTTY what it is - "reassuringly old-fashioned". I don't know if there's any major redesign that we'd really be on board with. Simon Tatham Amen.
After so much terrible tech politics news, let's focus on some nice, easy-going Linux news that's not going to be controversial at all: Ubuntu intends to replace numerous core Linux utilities with newer Rust implementations, starting with the ubiquitous GNU Coreutils. This package provides utilities which have become synonymous with Linux to many - the likes of ls, cp, and mv. In recent years, there has been an effort to reimplement this suite of tools in Rust, with the goal of reaching 100% compatibility with the existing tools. Similar projects, like sudo-rs, aim to replace key security-critical utilities with more modern, memory-safe alternatives. Starting with Ubuntu 25.10, my goal is to adopt some of these modern implementations as the default. My immediate goal is to make uutils' coreutils implementation the default in Ubuntu 25.10, and subsequently in our next Long Term Support (LTS) release, Ubuntu 26.04 LTS, if the conditions are right. Jon Seager Obviously, this is a massive change for Ubuntu, and while performance is one of the cited reasons for undertaking this effort, the biggest reason is, of course, security. To aid in the testing effort, Seager created a tool called oxidizr, with which you can swap between the classic versions and the new Rust versions of various tools to try them out in a non-destructive way. This is a massive vote of confidence in uutils, and I'm curious to see if it works out for Ubuntu. I doubt it's going to take long before other prominent distributions follow suit.
We've talked about Chimera Linux a few times now on OSNews, so I won't be repeating what makes it unique once more. The project announced today that it will be shuttering its RISC-V architecture support, and considering RISC-V has been supported by Chimera Linux pretty much since the beginning, this is a big step. The reason is as sad as it is predictable: there's simply no RISC-V hardware out there fit for the purpose of building a Linux distribution and all of its packages. Up until this point, Chimera Linux built its RISC-V variant on an x86_64 machine "with qemu-user binfmt emulation coupled with transparent cbuild support". There are various problems with this setup, like serious reliability problems, not being able to test packages, and a lack of performance. The setup was intended to be a temporary solution until proper, performant RISC-V hardware became available, but this simply hasn't happened, and it doesn't seem like this is going to change soon. Most of the existing RISC-V hardware options simply lack the performance to be used as build machines (think Raspberry Pi 3/4 levels of performance), making them even slower than the emulation setup they're currently using. The only machine that in theory would be performant enough to serve as a build machine is the Milk-V Pioneer, but this machine has serious other problems, as the project notes: Milk-V Pioneer is a board with 64 out-of-order cores; it is the only one of its kind, with the cores being supposedly similar to something like ARM Cortex-A72. This would be enough in theory, however these boards are hard to get here (especially with Sophgo having some trouble, new US sanctions, and Mouser pulling all the Milk-V products) and from the information that is available to me, it is rather unstable, receives very little support, and is ridden with various hardware problems. Chimera Linux website So, not only is the Milk-V Pioneer difficult to get due to, among other things, US sanctions, it's also not very stable and receives very little support. Aside from the Pioneer and the various slow and therefore unsuitable options, there's nothing else in the pipeline for performant RISC-V hardware either, making it quite difficult to support the architecture. Of course, this could always change in the future, but for now, supporting RISC-V is clearly not an option for Chimera Linux. This is clearly sad news, especially for those of us hoping RISC-V becomes an open source hardware platform that we can use every day, and I wonder how many other projects are dealing with the same problem.