OSnews

Link https://www.osnews.com/
Feed http://www.osnews.com/files/recent.xml
Updated 2025-04-02 22:02
Can you complete the Oregon Trail if you wait at a river for 14272 years: a study
Two years ago, Twitch streamer albrot discovered a bug in the code for crossing rivers. One of the options is to "wait to see if conditions improve"; waiting a day will consume food but not recalculate any health conditions, granting your party immortality. From this conceit the Oregon Trail Time Machine was born: a multi-day livestream of the game as the party waits for conditions to improve at the final Snake River crossing until the year 10000, to see if the withered travellers can make it to the ruins of ancient Oregon. The first attempt ended in tragedy; no matter what albrot tried, the party would succumb to disease and die almost immediately. A couple of days before New Year's Eve 2025, albrot reached out and asked if I knew anything about Apple II hacking. Scott Percival It may have required some reverse engineering and hackery, but yes, you can reach the ruins of Oregon in the year 16120.
“The people should own the town square”
Mastodon, the only remaining social network that isn't a fascist hellhole like Twitter or Facebook, is changing its legal and operational foundation to a proper European non-profit. Simply, we are going to transfer ownership of key Mastodon ecosystem and platform components (including name and copyrights, among other assets) to a new non-profit organization, affirming the intent that Mastodon should not be owned or controlled by a single individual. It also means a different role for Eugen, Mastodon's current CEO. Handing off the overall Mastodon management will free him up to focus on product strategy where his original passion lies and he gains the most satisfaction. Official Mastodon blog Eugen Rochko has always been clear and steadfast about Mastodon not being for sale and not accepting any outside investments despite countless offers, and after eight years of both creating and running Mastodon, it makes perfect sense to move the network and its assets to a proper European non-profit. Mastodon's actual control over the entire federated ActivityPub network - the Fediverse - is actually limited, so it's not like the network is dependent on Mastodon, but there's no denying it's the most well-known part of the Fediverse. The Fediverse is the only social network on which OSNews is actively present (and myself, too, for that matter). By "actively present" I only mean I'm keeping an eye on any possible replies; the feed itself consists exclusively of links to our stories as soon as they're published, and that's it. Everything else you might encounter on social media is either legacy cruft we haven't deleted yet, or something a third party set up that we don't control. RSS means it's easy for people to set up third-party, unaffiliated accounts on any social medium posting links to our stories, and that's entirely fine, of course.
However, corporate social media controlled by the irrational whims of delusional billionaires with totalitarian tendencies is not something we want to be a part of, so aside from visiting OSNews.com and using our RSS feeds, the only other official way to follow OSNews is on Mastodon.
WordPress is in trouble
It's hard to see how to move forward from here. I think the best bet would be for people to rally around a new community-driven infrastructure. This would likely require a fork of WordPress, though, and that's going to be messy. The current open source version of WordPress relies on the sites and services Mullenweg controls. Joost de Valk, the original creator of an extremely popular SEO plugin, wrote a blog post with some thoughts on the matter. I'm hoping that more prominent people in the community step up like this, and that some way forward can be found. Update: Moments after posting this, I was pointed to a story on TechCrunch about Mullenweg deactivating the WordPress.org accounts of users "planning a fork". This after he previously promoted (though in a slightly mocking way) the idea of forking open source software. In both cases, the people he mentioned weren't actually planning forks, but musing about future ways forward for WordPress. Mullenweg framed the account deactivations as giving people the push they need to get started. Remember that WordPress.org accounts are required to submit themes, plugins, or core code to the WordPress project. These recent events really make it seem like you're no longer welcome to contribute to WordPress if you question Matt Mullenweg. Gavin Anderegg I haven't wasted a single word on the ongoing WordPress drama yet, but the longer Matt Mullenweg, Automattic's CEO and thus owner of WordPress, keeps losing his mind, the less I can ignore the matter. OSNews runs, after all, on WordPress - self-hosted, at least, so not on Mullenweg's WordPress.com - and if things keep going the way they are, I simply don't know if WordPress remains a viable, safe, and future-proof CMS for OSNews.
I haven't discussed this particular situation with OSNews owner, David Adams, yet, mostly since he's quite hands-off in the day-to-day operations and has more than enough other matters to take care of, but I think the time has come to start planning for a potential worst-case scenario in which Mullenweg takes even more of whatever he's taking and WordPress implodes entirely. Remember - even if you self-host WordPress outside of Automattic, several core infrastructure parts of WordPress still run through Automattic, so we're still dependent on what Mullenweg does or doesn't do. I have no answers, announcements, or even plans at this point, but if you or your company depend on WordPress, you might want to start thinking about where to go from here.
The history and use of /etc/glob in early Unixes
One of the innovations that the V7 Bourne shell introduced was built-in shell wildcard globbing, which is to say expanding things like *, ?, and so on. Of course Unix had shell wildcards well before V7, but in V6 and earlier, the shell didn't implement globbing itself; instead this was delegated to an external program, /etc/glob (this affects things like looking into the history of Unix shell wildcards, because you have to know to look at the glob source, not the shell). Chris Siebenmann I never knew expanding wildcards in UNIX shells was once done by a separate program, but if you stop and think about the original UNIX philosophy, it kind of makes sense. On a slightly related note, I'm currently very deep into setting up, playing with, and actively using HP-UX 11i v1 on the HP c8000 I was able to buy thanks to countless donations from you all, OSNews readers, and one of the things I want to get working is email in dtmail, the CDE email program. However, dtmail is old, and wants you to do email the UNIX way: instead of dtmail retrieving and sending email itself, it expects other programs to do those tasks for you. In other words, to set up and use dtmail (instead of relying on a 2010 port of Thunderbird), I'll have to learn how to set up things like sendmail, fetchmail, or alternatives to those tools. Those programs will in turn dump the emails in the maildir format for dtmail to work with. Configuring these tools could very well be above my paygrade, but I'll do my best to try and get it working - I think it's more authentic to use something like dtmail than a random Thunderbird port. In any event, this, too, feels very UNIX-y, much like delegating wildcard expansion to a separate program. What this also shows is that the "UNIX philosophy" was subject to erosion from the very beginning, and really isn't a modern phenomenon like many people seem to imply.
I doubt many of the people complaining about the demise of the UNIX philosophy today even knew wildcard expansion used to be done by a separate program.
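The division of labour Siebenmann describes, where the shell collects the command line and a separate program expands the wildcards before the command runs, is easy to sketch. Here is a toy Python approximation of the idea (the function name is made up for illustration; the historical /etc/glob was of course written in C and exec'd the command itself):

```python
import glob

def run_with_external_glob(argv):
    """Expand wildcard arguments into filenames, mimicking how the
    V6 shell delegated expansion to the external /etc/glob program."""
    expanded = [argv[0]]  # the command name itself is never expanded
    for arg in argv[1:]:
        if any(ch in arg for ch in "*?["):
            matches = sorted(glob.glob(arg))
            # pass the pattern through unchanged if nothing matched
            expanded.extend(matches if matches else [arg])
        else:
            expanded.append(arg)
    return expanded
```

With `a.c` and `b.c` in the current directory, `run_with_external_glob(["cc", "*.c"])` returns `["cc", "a.c", "b.c"]`, the same sort of expanded argument list the real command would have received from /etc/glob.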
Bringing SerenityOS to real hardware, one driver at a time
Many moons ago, around the time when Andreas formally resigned from being Serenity's BDFL, I decided that I want to get involved in the project more seriously. Looking at it from a perspective of "what do I not like about this (codebase)", the first thing that came to mind was that it runs HERE points at QEMU and not THERE points at real hardware. Obvious oversight, let's fix it. sdomi There's no way for me to summarise this cursed saga, so just follow the lovely link and read it. It's a meandering story of complexity, but eventually, a corrupted graphical session appeared. Now the real work starts.
Google launches Chromium development fund to ward off antitrust concerns
Don't you just love it when companies get together under the thin guise of open source to promote their own interests? Today Google is pleased to announce our partnership with The Linux Foundation and the launch of the Supporters of Chromium-based Browsers. The goal of this initiative is to foster a sustainable environment of open-source contributions towards the health of the Chromium ecosystem and financially support a community of developers who want to contribute to the project, encouraging widespread support and continued technological progress for Chromium embedders. The Supporters of Chromium-based Browsers fund will be managed by the Linux Foundation, following their long established practices for open governance, prioritizing transparency, inclusivity, and community-driven development. We're thrilled to have Meta, Microsoft, and Opera on-board as the initial members to pledge their support. Shruthi Sreekanta on the Chromium blog First, there's absolutely no way around the fact that this entire effort is designed to counter some of the antitrust actions against Google, including a possible forced divestment of Chrome. By setting up an additional fund atop the Chromium organisation, placed under the management of the Linux Foundation, Google creates the veneer of more independence for Chromium than there really is. In reality, however, Chromium is very much a Google-led project, with 94% of code contributions coming from Google, and with the Linux Foundation being very much a corporate affair, of which Google itself is a member, one has to wonder just how much it means that the Linux Foundation is managing this new fund. Second, the initial members of this fund don't exactly instill confidence in the fund's morals and values. We've got Google, the largest online advertising company in the world.
Then there's Facebook, another major online advertising company, followed by Microsoft, which, among other business ventures, is also a major online advertising company. Lastly we have Opera, an NFT and cryptoscammer making money through predatory loans in poor countries. It's a veritable who's who of some of the companies you least want near anything related to your browsing experience. I highly doubt an effort as transparent as this one is going to persuade any judge or antitrust regulator to back down. It's clear this fund is entirely self-serving and designed almost exclusively for optics, with an obvious bias towards online advertising companies who want to make the internet worse, rather than towards companies and people trying to make the internet better.
VLC gets caught in “AI” hype, adds “AI” subtitles and translations
VLC media player, the popular open-source software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle system. The new feature automatically generates real-time subtitles - which can then also be translated in many languages - for any video using open-source AI models that run locally on users' devices, eliminating the need for internet connectivity or cloud services, VideoLAN demoed at CES. Manish Singh at TechCrunch VLC is choosing to throw users who rely on subtitles for accessibility or translation reasons under the bus. Using speech-to-text and even "AI" as a starting point for a proper accessibility expert or translator is fine, and can greatly reduce the workload. However, as anyone who works with STT and "AI" translation software knows, their output is highly variable and wildly unreliable, especially once English isn't involved. Dumping the raw output of these tools onto people who rely on closed captions and subtitles to even be able to view videos is not only lazy, it's deeply irresponsible and demonstrates a complete lack of respect and understanding. I was a translator for almost 15 years, with two university degrees on the subject to show for it. This is obviously a subject close to my heart, and the complete and utter lack of respect and understanding from Silicon Valley and the wider technology world for proper localisation and translation has been a thorn in my side for decades. We all know about bad translations, but it goes much deeper than that - with Silicon Valley's utter disregard for multilingual people drawing most of my ire. Despite about 60 million people in the US alone using both English and Spanish daily, software still almost universally assumes you speak only one language at all times, often forcing fresh installs for something as simple as changing a single application's language, or not even allowing autocorrect on a touch keyboard to work with multiple languages simultaneously.
I can't even imagine how bad things are for people who, for instance, require closed captions for accessibility reasons. Imagine just how bad the "AI"-translated Croatian closed-captions on an Italian video are going to be - that's two levels of "AI" brainrot between the source and the ears of the Croatian user. It seems subtitles and closed captions are going to be the next area where technology companies are going to slash costs, without realising - or, more likely, without giving a shit - that this will hurt users who require accessibility or translations more than anything. Seeing even an open source project like VLC jump onto this bandwagon is disheartening, but not entirely unexpected - the hype bubble is inescapable, and a lot more respected projects are going to throw their users under the bus before this bubble pops. ...wait a second. Why is VLC at CES in the first place?
Nvidia CEO says company has plans for desktop chip designed with MediaTek
On Monday at CES 2025, Nvidia unveiled a desktop computer called Project DIGITS. The machine uses Nvidia's latest "Blackwell" AI chip and will cost $3,000. It contains a new central processor, or CPU, which Nvidia and MediaTek worked to create. Responding to an analyst's question during an investor presentation, Huang said Nvidia tapped MediaTek to co-design an energy-efficient CPU that could be sold more widely. "Now they could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win," Huang said. Previously, Reuters reported that Nvidia was working on a CPU for personal computers to challenge the consumer and business computer market dominance of Intel, Advanced Micro Devices and Qualcomm. Stephen Nellis at Reuters I've long wondered why NVIDIA wasn't entering the general purpose processor market in a more substantial way than it did a few years ago with the Tegra, especially now that ARM has cemented itself as an architecture choice for more than just mobile devices. Much like Intel, AMD, and now Qualcomm, NVIDIA could easily deliver the whole package to laptop, tablet, and desktop makers: processor, chipset, GPU, of course glued together with special NVIDIA magic the other companies opting to use NVIDIA GPUs won't get. There's a lot of money to be made there, and it's the move that could help NVIDIA survive the inevitable crash of the "AI" wave it's currently riding, which has pushed the company to become one of the most valuable companies in the world. I'm also sure OEMs would love nothing more than to have more than just Qualcomm to choose from for ARM laptops and desktops, if only to aid in bringing costs down through competition, and to potentially offer ARM devices with the same kind of powerful GPUs currently mostly reserved for x86 machines. I'm personally always for more competition, but this time with the asterisk that NVIDIA really doesn't need to get any bigger than it already is.
The company has a long history of screwing over consumers, and I doubt that would change if they also conquered a chunky slice of the general purpose processor market.
Pairs not taken
So we all know about twisted-pair ethernet, huh? I get a little frustrated with a lot of histories of the topic, like the recent neil breen^w^wserial port video, because they often fail to address some obvious questions about the origin of twisted-pair network cabling. Well, I will fail to answer these as well, because the reality is that these answers have proven very difficult to track down. J. B. Crawford The problem with nailing down an accurate history of the various standards, ideas, concepts, and implementations of Ethernet and other, by now dead, network standards is their age, as well as the fact that their history is entangled with the even longer history of telephone wiring. The reasoning behind some of the choices made by engineers over the past more than 100 years of telephone technology isn't always clear, and is very difficult to retrace. Crawford dives into some seriously old and fun history here, trying to piece together the origins of twisted pair the best he can. It's a great read, as all of his writings are.
An operating system in 1000 lines
Hey there! In this book, we're going to build a small operating system from scratch, step by step. You might get intimidated when you hear OS or kernel development, but the basic functions of an OS (especially the kernel) are surprisingly simple. Even Linux, which is often cited as a huge piece of open-source software, was only 8,413 lines in version 0.01. Today's Linux kernel is overwhelmingly large, but it started with a tiny codebase, just like your hobby project. We'll implement basic context switching, paging, user mode, a command-line shell, a disk device driver, and file read/write operations in C. Sounds like a lot, however, it's only 1,000 lines of code! Seiya Nuta It's exactly what it says on the tin.
HDMI 2.2 will require new “Ultra96” cables, whenever we have 8K TVs and content
We've all had a good seven years to figure out why our interconnected devices refused to work properly with the HDMI 2.1 specification. The HDMI Forum announced at CES today that it's time to start considering new headaches. HDMI 2.2 will require new cables for full compatibility, but it has the same physical connectors. Tiny QR codes are suggested to help with that, however. The new specification is named HDMI 2.2, but compatible cables will carry an "Ultra96" marker to indicate that they can carry 96Gbps, double the 48Gbps of HDMI 2.1b. The Forum anticipates this will result in higher resolutions and refresh rates and a "next-gen HDMI Fixed Rate Link." The Forum cited "AR/VR/MR, spatial reality, and light field displays" as benefiting from increased bandwidth, along with medical imaging and machine vision. Kevin Purdy at Ars Technica I'm sure this will not pose any problems whatsoever, and that no shady no-name manufacturers will abuse this situation at all. DisplayPort is the better standard and connector anyway. No, I will not be taking questions.
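To get a feel for what doubling the link rate buys, raw video bandwidth is just width × height × refresh rate × bits per pixel. A quick back-of-envelope sketch (these figures ignore blanking intervals and link encoding overhead, which push the real requirement higher):

```python
def raw_video_gbps(width, height, hz, bits_per_pixel):
    """Uncompressed video bandwidth in Gbps, ignoring blanking
    intervals and link encoding overhead."""
    return width * height * hz * bits_per_pixel / 1e9

# 8K (7680x4320) at 60 Hz, 10-bit RGB (30 bits per pixel):
print(raw_video_gbps(7680, 4320, 60, 30))   # ~59.7 Gbps: beyond 48, within 96
# The same signal at 120 Hz exceeds even 96 Gbps without compression:
print(raw_video_gbps(7680, 4320, 120, 30))  # ~119.4 Gbps
```

So 96Gbps roughly covers uncompressed 8K60 with headroom, while higher refresh rates would still lean on compression such as DSC.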
NESFab: a new programming language for creating NES games
NESFab is a new programming language for creating NES games. Designed with 8-bit limitations in mind, the language is more ergonomic to use than C, while also producing faster assembly code. It's easy to get started with, and has a useful set of libraries for making your first - or hundredth - NES game. NESFab website NESFab has some smart features developers of NES games will certainly appreciate, most notably automatic bank switching. Instead of making you do this manually, NESFab will automatically carve your code and data up into banks to be switched in and out of memory when needed. There's also an optional map editor, which makes it very easy to create additional levels for your game. All in all, a very cool project I hadn't heard of, which also claims to perform better than other compilers. If you've ever considered making an NES game, NESFab might be a tool to consider.
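To get a feel for the problem NESFab automates: NES cartridges expose program ROM in fixed-size banks (commonly 16 KiB or 32 KiB windows, depending on the mapper), and all code and data must be packed into them. This toy first-fit packer in Python is only an illustration of that packing problem, not NESFab's actual allocator, which is considerably smarter:

```python
def assign_banks(chunks, bank_size):
    """First-fit assignment of (name, size) chunks to fixed-size banks,
    a toy version of what a bank-switching compiler does for you."""
    banks = []  # each entry: [remaining_space, [chunk names]]
    for name, size in chunks:
        if size > bank_size:
            raise ValueError(f"{name} cannot fit in any bank")
        for bank in banks:
            if bank[0] >= size:  # first bank with enough room wins
                bank[0] -= size
                bank[1].append(name)
                break
        else:
            banks.append([bank_size - size, [name]])  # open a new bank
    return [names for _, names in banks]

# 16 KiB banks, as on a typical NES mapper (sizes are made up):
layout = assign_banks(
    [("level1", 9000), ("level2", 9000), ("engine", 7000)], 16384)
print(layout)  # [['level1', 'engine'], ['level2']]
```

The compiler then has to emit the bank-switch instructions at every cross-bank call or data access, which is exactly the bookkeeping NESFab spares you.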
OpoLua: a compiled-OPL interpreter for iOS written in Lua
An OPO (compiled OPL) interpreter written in Lua and Swift, based on the Psion Series 5 era format (ie ER5, prior to the Quartz 6.x changes). It lets you run Psion 5 programs written in OPL on any iOS device, subject to the limitations described below. OpoLua GitHub page If you're pining for that Psion Series 5, but don't want to deal with the hassle of owning and maintaining a real one - here's a solution if you're an iOS user. Incredibly neat, but with one limitation: only pure OPL programs work. Any program that also has native ARM code will not work.
Dell rebrands its entire product line: XPS, Inspiron, Latitude, etc. are going away
Dell has announced it's rebranding literally its entire product line, so mainstays like XPS, Latitude, and Inspiron are going away. They're replacing all of these old brands with Dell, Dell Pro, and Dell Pro Max, and within each of these, there will be three tiers: Base, Plus, and Premium. Of course, the reason is "AI". The AI PC market is quickly evolving. Silicon innovation is at its strongest and everyone from IT decision makers to professionals and everyday users are looking at on-device AI to help drive productivity and creativity. To make finding the right AI PC easy for customers, we've introduced three simple product categories to focus on core customer needs - Dell (designed for play, school and work), Dell Pro (designed for professional-grade productivity) and Dell Pro Max (designed for maximum performance). We've also made it easy to distinguish products within each of the new product categories. We have a consistent approach to tiering that lets customers pinpoint the exact device for their specific needs. Above and beyond the starting point (Base), there's a Plus tier that offers the most scalable performance and a Premium tier that delivers the ultimate in mobility and design. Kevin Terwilliger on Dell's blog Setting aside the nonsensical reasoning behind the rebrand, I do actually kind of dig the simplicity here. This is a simple, straightforward set of brand names and tiers that pretty much anyone can understand. That being said, the issue with Dell in particular is that once you go to their website to actually buy one of their machines, the clarity abruptly ends and it gets confusing fast. I hope these new brand names and tiers will untangle some of that mess to make it easier to find what you need, but I'm skeptical. My XPS 13 from 2017 is really starting to show its age, and considering how happy I've been with it over the years, its current Dell equivalent would be a top contender (assuming I had the finances to do so).
I wonder if the Linux support on current Dell laptops has improved since my XPS 13 was new?
Microsoft’s tone-deaf advice to Windows 10 users: just buy a new PC, you’re all rich, right?
Over 60% of Windows users are still using Windows 10, with only about 35% or so - and falling! - of them opting to use Windows 11. As we've talked about many times before, this is a major issue going into 2025, since Windows 10's support will end in October of this year, meaning hundreds of millions of people all over the world will suddenly be running an operating system that will no longer receive security updates. Most of those people don't want to, or cannot, upgrade to Windows 11, meaning Microsoft is leaving 60% of its Windows customer base out to dry. I'm sure this will go down just fine with regulators and governments the world over. Microsoft has tried everything, and it's clear desperation is setting in, because the company just declared 2025 "the year of the Windows 11 PC refresh", stating that Windows 11 is the best way to get all the "AI" stuff people are clearly clamoring for. All of the innovation arriving on new Windows 11 PCs is coming at an important time. We recently confirmed that after providing 10 years of updates and support, Windows 10 will reach the end of its lifecycle on Oct. 14, 2025. After this date, Windows 10 PCs will no longer receive security or feature updates, and our focus is on helping customers stay protected by moving to modern new PCs running Windows 11. Whether the current PC needs a refresh, or it has security vulnerabilities that require the latest hardware-backed protection, now is the time to move forward with a new Windows 11 PC. Some overpaid executive at Microsoft What makes this so incredibly aggravating and deeply tone-deaf is that for most of the people affected by this, "upgrading" to Windows 11 simply isn't a realistic option. Their current PC is most likely performing and working just fine, but the steep and strict hardware requirements prohibit them from installing Windows 11. Buying an entirely new PC is often not only not needed from a performance perspective, but for many, many people also simply unaffordable.
In case you haven't noticed, it's not exactly going great, financially, for a lot of people out there, and even in the US alone, 70-80% of people live paycheck-to-paycheck, and they're certainly not going to be able to just "move forward with a new Windows 11 PC" for nebulous and often regressive "benefits" like "AI". The fact that Microsoft seems to think all of those hundreds of millions of people not only want to buy a new PC to get "AI" features, but that they also can afford it like it's no big deal, shows some real lack of connective tissue between the halls of Microsoft's headquarters and the wider world. Microsoft's utter lack of a grasp on the financial realities of so many individuals and families today is shocking, at best, and downright offensive, at worst. I guess if you live in a world where you can casually bribe a president-elect for one million dollars, buying a new computer feels like buying a bag of potatoes.
Why Half-Life 3 speculation is reaching a fever pitch again
The more than two decades since Half-Life 2's release have been filled with plenty of rumors and hints about Half-Life 3, ranging from the official-ish to the thin to the downright misleading. As we head into 2025, though, we're approaching something close to a critical mass of rumors and leaks suggesting that Half-Life 3 is really in the works this time, and could be officially announced in the coming months. Kyle Orland at Ars Technica We should all be skeptical of anything related to Half-Life 3, but there's no denying something's buzzing. The one reason why I personally think a Half-Life 3 might be happening is the imminent launch of SteamOS for generic PCs, possibly accompanied by prebuilt SteamOS PCs and consoles and third-party Steam Decks. It makes perfect sense for Valve to have such a launch accompanied by the release of Half-Life 3, similar to how Half-Life 2 was accompanied by the launch of Steam. We'll have to wait and see. It will be hard to fulfill all the crazy expectations, though.
One dog v. the Windows 3.1 graphics stack
I'd like to write a full-fledged blog post about these adventures at some point, but for now I'm going to focus on one particular side quest: getting acceptable video output out of the 1000H when it's running Windows 3.11 for Workgroups. By default, Windows 3.x renders using the standard "lowest common denominator" of video: VGA 640*480 at 16 colours. Unfortunately this looks awful on the Eee PC's beautiful 1024*600 screen, and it's not even the same aspect ratio. But how can we do better? Ash Wolf If you ever wanted to know how display drivers work in Windows 3.x, here's your chance. This definitely falls into the category of light reading for the weekend.
The Mac OS X dock turns 25
James Thomson, developer of, originally, DragThing and now PCalc, also happens to be the developer of the very first publicly shown version of the Mac OS dock. Now that it was shown to the world by Steve Jobs exactly 25 years ago, he reminisces about what it was like to create such an iconic piece of software history. The new Finder (codename "Millennium") was at this point being written on Mac OS 9, because Mac OS X wasn't exactly firing on all cylinders quite yet. The filesystem wasn't working well, which is not super helpful when you are trying to write a user interface on top of it. The Dock was part of the Finder then, and could lean on all the high level C++ interfaces for dealing with disks and files that the rest of the team was working on. So, I started on Mac OS 9, working away in Metrowerks Codewarrior. The Finder was a Carbon app, so we could actually make quite a bit of early progress on 9, before the OS was ready for us. I vividly remember the first time we got the code running on Mac OS X. James Thomson I especially like the story about how Steve Jobs really demanded Thomson live in Cupertino in order to work on the dock, instead of remaining remote in Ireland. Thomson and his wife decided not to move to the United States, so he figured he'd lose his assignment, or maybe even his job altogether. Instead, his managers told him something along the lines of "don't worry, we'll just tell Steve you moved". What followed were a lot of back-and-forth flights between Ireland and California, and Thomson's colleagues telling Steve all sorts of lies and cover stories for whenever he was in Ireland and Steve noticed. Absolutely wild. The dock is one of those things from my years using Mac OS X - between roughly 2003 and 2009 or so - that has stuck around with me ever since. To this day, I have a dock at the bottom of my screen that looks and works eerily similar to the Mac OS X dock, and I doubt that's going to change any time soon.
It suits my way of using my computer incredibly well, and it's the first thing I set up on any new installation I perform (I use Fedora KDE).
Apple Vision Pro may now be out of production
Apple's first-generation Vision Pro headset may have now ceased production, following reports of reduced demand and production cuts earlier in the year. Hartley Charlton at MacRumors I think we'll live.
NVIDIA’s RTX 5090 will supposedly have a monstrous 575W TDP
The RTX 5090 and RTX 5080 are receiving their final updates. According to two highly reliable leakers, the RTX 5090 is officially a 575W TDP model, confirming that the new SKU requires significantly more power than its predecessor, the RTX 4090 with its TDP of 450W. According to Kopite, there has also been an update to the RTX 5080 specifications. While the card was long rumored to have a 400W TDP, the final figure is now set at 360W. This change is likely because NVIDIA has confirmed the TDP, as opposed to earlier TGP figures that are higher and represent the maximum power limit required by NVIDIA's specifications for board partners. WhyCry at VideoCardz.com These kinds of batshit insane GPU power requirements are eventually going to run into the limits of the kind of airflow an ATX case can provide. We're still putting the airflow stream of GPUs (bottom to top) perpendicular to the airflow through the case (front to back) like it's 1987, and you'd think at least someone would be thinking about addressing this - especially when a GPU is casually dumping this much heat into the constrained space within a computer case. I don't want more glass and gamer lights. I want case makers to hire at least one proper fluid dynamics engineer.
Windows 2: Final Fantasy of operating systems
It is common knowledge that Final Fantasy could have been the last game in the series. It is far less known that Windows 2, released around the same time, could too have been the last. If anything, things were more certain: even Microsoft believed that Windows 2 would be the last. The miracle of overwhelming commercial success brought incredible attention to Windows. The retro community and computer historians generally seem to be interested in the legendary origins of the system (how it all began) or in its turnabout Windows 3.0 release (what did they do right?). This story instead will be about the underdog of Windows, version 2. To understand where it all went wrong, we must start looking at events that happened even before Microsoft was founded. By necessity, I will talk a lot about the origins of Windows, too. Instead of following interpersonal/corporate drama, I will try to focus on the technical aspects of Windows and its competitors, as well as the technological limitations of the computers around the time. Some details are so convoluted and obscure that even multiple Microsoft sources, including Raymond Chen, are wrong about essential technical details. It is going to be quite a journey, and it might seem a bit random, but I promise that eventually, it all will start making sense. Nina Kalinina I'm not going to waste your precious time with my stupid babbling when you could instead spend it reading this amazingly detailed, lovingly crafted, beautifully illustrated, and deeply in-depth article by Nina Kalinina about the history, development, and importance of Windows 2. She's delivered something special here, and it's a joy to read and stare at the screenshots from beginning to end. Don't forget to click on the little expander triangles for a ton of in-depth technical stuff and even more background information.
AROS centimeters closer to 64bit
We've just entered the new year, and that means we're going to see some overviews about what the past year has brought. Today we're looking at AROS, as AROS News - great name, very classy, you've got good taste, don't change it - summarised AROS' 2024, and it's been a good year for the project. We don't hear a lot about AROS-proper, as the various AROS distributions are a more optimal way of getting to know the operating system and the project's communication hasn't always been great, but that doesn't mean they've been sitting still. Perhaps the most surprising amount of progress in 2024 was made in the move from 32bit to 64bit AROS. Deadwood also released a 64-bit version of the system (ABIv11) in a Linux hosted version (ABIv11 20241102-1) and AxRuntime version 41.12, which promises a complete switch to 64-bit in the near future. He has also developed a prototype emulator that will enable 64-bit AROS to run programs written for the 32-bit version of the system. Andrzej "retrofaza" Subocz at AROS News This is great news for AROS, as being stuck in 32bit isn't particularly future-proof. It might not pose many problems today, as older hardware remains available and 64bit x86 processors can handle running 32bit operating systems just fine, but you never know when that will change. In the same vein, Deadwood also released a 64bit version of Odyssey, the WebKit-based browser, which was updated this year from August 2015's WebKit to February 2019's WebKit. Sure, 2019 might still be a little outdated, but it does mean a ton of complex sites now work again on AROS, and that's a hugely positive development. Things like Python and GCC were also updated this year, and there was, as is fitting for an Amiga-inspired operating system, a lot of activity in the gaming world, including big updates to Doom 3 and ScummVM. This is just a selection of course, so be sure to read Subocz's entire summary at AROS News.
The GPU, not the TPM, is the root of hardware DRM
Do you think streaming platforms and other entities that employ DRM schemes use the TPM in your computer to decrypt stuff? Well, the Free Software Foundation seems to think so, and adds Microsoft's insistence on requiring a TPM for Windows 11 into the mix, but it turns out that's simply not true. I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff. What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU. Matthew Garrett A TPM is simply not designed to handle decryption of media streams, and even if they were, they're far, far too slow and underpowered to decode even a 1080p stream, let alone anything more demanding than that. In reality, DRM schemes like Google's Widevine, Apple's Fairplay, and Microsoft's Playready offer different levels of functionality, both in software and in hardware. The hardware DRM stuff is all done by the GPU, and not by the TPM. By focusing so much on the TPM, Garrett argues, the FSF is failing to see how GPU makers have enabled a ton of hardware DRM without anyone noticing. Personally, I totally understand why organisations like the Free Software Foundation are focusing on TPMs right now. 
They're one of the main reasons why people can't upgrade to Windows 11, it's the thing people have heard about, and it's the thing that'll soon prevent them from getting security updates for their otherwise perfectly fine machines. I'm not sure the FSF has enough clout these days to make any meaningful media impact, especially in more general, non-tech media, but by choosing the TPM as their focus they're definitely choosing a viable vector. Of course, over here in the tech corner, we don't like it when people are factually inaccurate or twisting and bending the truth, and I'm glad someone as knowledgeable as Garrett stepped up to set the record straight for us tech-focused people, while everyone else can continue to ignore this matter.
Running NetBSD on an IBM ThinkPad 380Z
Launched in 1998, the 380Z was one very fine ThinkPad. It was the last ThinkPad to come in the classic bulky and rectangular form factor. It was also one of the first to feature a huge 13.3'' TFT display, powerful 233MHz Pentium II, and whopping 160 megs of RAM. I recently stumbled upon one in perfect condition on eBay, and immediately thought it'd be a cool vintage gadget to put on the desk. I only wondered if I could still use it for some slow-paced, distraction-free coding, using reasonably modern software. Luke's web space You know where this is going, right? I evaluated a bunch of contemporary operating systems, including different variants of BSD and Linux. Usually, the experience was underwhelming in terms of performance, hardware support and stability. Well... except for NetBSD, which gave me such perfectly smooth ride, that I thought it was worth sharing. Luke's web space Yeah, of course it was going to be NetBSD (again). This old laptop, too, can run X11 just fine, with the EMWM that we discussed quite recently - in fact, bringing up X required no configuration, and a simple startx was all it needed out of the box. For web browsing, Dillo works just great, and building it took only about 20 minutes. It can even play some low-fi music streams from the internet, only stuttering when doing other processor-intensive tasks. In other words, this little machine with NetBSD turns out to be a great machine for some distraction-free programming. Look, nobody is arguing that a machine like this is all you need. However, it can perform certain specific, basic tasks - anything being better than sending it to a toxic landfill, with all the transportation waste and child labour that entails. If you have old laptops lying around, you should think twice about just sending them to "recycling" (which is Western world speak for "send to a toxic landfill manned by children in poor countries"), since it might be quite easy to turn them into something useful, still.
The Windows CE Developer’s Kit from 1999
Rare, hard to come by, but now available on the Internet Archive: the complete book set for the Windows CE Developer's Kit from 1999. It contains all the separate books in their full glory, so if you ever wanted to write either a Windows CE application or driver for Windows CE 2.0, here's all the information you'll ever need. The Microsoft Windows CE Developer's Kit provides all the information you need to write applications for devices based on the Microsoft® Windows® CE operating system. Windows CE Developer's Kit The Microsoft Windows CE Programmer's Guide details the architecture of the operating system, how to write applications, how to implement synchronisation with a PC, and much more that pertains to developing applications. The Microsoft Windows CE User Interface Services Guide can be seen as an important addition to the Programmer's Guide, as it details everything related to creating a GUI and how to handle various input methods. Going a few steps deeper, we arrive at the Microsoft Windows CE Communications Guide, which, as the name implies, tells you all you need to know about infrared connections, telephony, networking and internet connections, and related matters. Finally, we arrive at the Microsoft Windows CE Device Driver Kit, which, as the name implies, is for those of us interested in writing device drivers for Windows CE, something that will surely be of great importance in the future, since Windows CE is sure to dominate our mobile life. To get started, you do need to have Microsoft Visual C++ version 6.0 and the Microsoft Windows CE Toolkit for Visual C++ version 6.0 up and running, since all code samples in the Programmer's Guide are developed with it, but I'm sure you already have this taken care of - why would you be developing for any other platforms, am I right?
LineageOS 22.1, based on Android 15 QPR1, released
LineageOS, the Debian of the custom Android ROM world, released version 22 - or, 22.1 to be more exact - today. On the verge of the new year, they managed to complete the rebase to Android 15, released in September, making this one of their fastest rebases ever. We've been hard at work since Android 15's release in September, adapting our unique features to this new version of Android. Android 15 introduced several complex changes under the hood, but due to our previous efforts adapting to Google's UI-centric adjustments in Android 12 through 14, we were able to rebase onto Android 15's code-base faster than anticipated. Additionally, this is far-and-away the easiest bringup cycle from a device perspective we have seen in years. This means that many more devices are ready on day one than we'd typically expect to have up this early in the cycle! Nolen Johnson LineageOS is also changing its versioning scheme to better match that of Google's new quarterly Android releases, and that's why this new release is 22.1: it's based on Android 15 QPR1. In other words, the 22 aligns with the major Android version number, and the .1 with the QPR it's based on. LineageOS 22.1 brings all the same new features as Android 15 and QPR1, as well as two brand new applications: Twelve, a replacement for LineageOS' aging music player application, and Camelot, a new PDF reader. The list of supported devices is pretty good for a new LineageOS release, and adds the Pixel 9 series of devices right off the bat. LineageOS 22.1 ships with the November Android security patches, and also comes with a few low-level changes, like completely new extract utilities written in Python, which massively improve extracting performance, virtIO support, and much more.
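To make the new versioning scheme concrete, here's a toy Python sketch of the mapping. The only data point the announcement gives is LineageOS 22.1 ↔ Android 15 QPR1; the constant offset used below is inferred from that (and from past releases like 21 ↔ Android 14), so treat it as an illustration rather than an official formula.

```python
def lineage_to_android(version: str) -> str:
    """Decode a LineageOS version under the new scheme:
    the major number tracks the Android major release, the minor
    number tracks the QPR. Baseline: LineageOS 22 <-> Android 15,
    i.e. an assumed constant offset of 7."""
    major, _, minor = version.partition(".")
    android = int(major) - 7  # 22 -> Android 15
    qpr = int(minor) if minor else 0
    return f"Android {android} QPR{qpr}" if qpr else f"Android {android}"

print(lineage_to_android("22.1"))  # Android 15 QPR1
print(lineage_to_android("21"))    # Android 14
```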
Chimera Linux enters beta
We've talked about Chimera Linux before - it's a unique Linux distribution that combines a BSD userland with the LLVM/Clang toolchain, and musl. Its init system is dinit, and it uses apk-tools from Alpine as its package manager. None of this has anything to do with being anti-anything; the choice of BSD's tools and userland is mostly technical in nature. Chimera Linux is available for x86-64, AArch64, RISC-V, and POWER (both little and big endian). I am unreasonably excited for Chimera Linux, for a variety of reasons - first, I love the above set of choices they made, and second, Chimera Linux's founder and lead developer, q66, is a well-known and respected name in this space. She not only founded Chimera Linux, but also used to maintain the POWER/PowerPC ports of Void Linux, which is the port of Void Linux I used on my POWER9 hardware. She apparently also contributed quite a bit to Enlightenment, and is currently employed by Igalia, through which she can work on Chimera. With the description out of the way, here's the news: Chimera Linux has officially entered beta. Today we have updated apk-tools to an rc tag. With this, the project is now entering beta phase, after around a year and a half. In general, this does not actually mean much, as the project is rolling release and updates will simply keep coming. It is more of an acknowledgement of current status, though new images will be released in the coming days. Chimera Linux's website Despite my excitement, I haven't yet tried Chimera Linux myself, as I figured its pre-beta stage wasn't meant for an idiot like me who can't contribute anything meaningful, and I'd rather not clutter the airwaves. Now that it's entered beta, I feel like the time is getting riper and riper for me to dive in, and perhaps write about it here. Since the goal of Chimera Linux is to be a general-purpose distribution, I think I'm right in the proper demographic of users. 
It helps that I'm about to set up my dual-processor POWER9 machine again, and I think I'll be going with Chimera Linux. As a final note, you may have noticed I consistently refer to it as "Chimera Linux". This is very much on purpose, as there's also something called ChimeraOS, a more standard Linux distribution aimed at gaming. To avoid confusion, I figured I'd keep the naming clear and consistent.
Running NetBSD 10.1 on a 1998 Toshiba laptop
Here are my notes on running NetBSD 10.1 on my first personal laptop that I still keep, a 1998 i586 Toshiba Satellite Pro with 81Mb of RAM and a 1Gb IBM 2.5'' IDE HD. In summary, the latest NetBSD runs well on this old hardware using an IDE to CF adapter and several changes to the i386 GENERIC kernel. Joel P. I don't think the BSD world - and NetBSD in particular - gets enough recognition for supporting both weird architectures and old hardware as well as it does. This here is a 26-year-old laptop running the latest version of NetBSD, working X11 server and all, while other operating systems drop support for devices only a few years old. So many devices could be saved from toxic landfills if only more people looked beyond Windows and macOS.
IncludeOS: a minimal, resource efficient unikernel for cloud services
IncludeOS is an includable, minimal unikernel operating system for C++ services running in the cloud and on real HW. Starting a program with #include <os> will literally include a tiny operating system into your service during link-time. IncludeOS GitHub page IncludeOS isn't exactly the only one of its kind, but I've always been slightly mystified by what, exactly, unikernels are for. The gist is, as far as I understand it, that if you build an application using a unikernel, it will find out at compile time exactly what it needs from the operating system to run, and then everything it needs from the operating system to run will be linked inside the resulting application. This can then be booted directly by a hypervisor. The advantages are clear: you don't have to deal with an entire operating system just to run that one application or service you need to provide, and footprint is kept to a minimum because only the exact dependencies the application needs from the operating system are linked to it during compilation. The downsides are obvious too - you're not running an operating system so it's far less flexible, and if issues are found in the unikernel you're going to have to recompile the application and the operating system bits inside of it just to fix it (at least, I think that's the case - don't quote me on it). IncludeOS is under heavy development, so take that under advisement if you intend to use it for something serious. The last full release dates back to 2019, but it's still under development as indicated by the GitHub activity. I hope it'll push out a new release soon.
Emulating HP-UX using QEMU
While we're out here raising funds to make me daily-drive HP-UX 11i v1 - we're at 59% of the goal, so I'm starting to prepare for the pain - it seems you can actually run older versions, HP-UX 10.20 and 11.00 to be specific, in a virtual machine using QEMU. QEMU is an open source computer emulation and virtualization software, first released in 2003 by Fabrice Bellard. It supports many different computer systems and includes support for many RISC architectures besides x86. PA-RISC emulation has been included in QEMU since 2018. QEMU emulates a complete computer in software without the need for specific virtualization hardware. With QEMU, a full HP Visualize B160L and C3700 workstation can be emulated to run PA-RISC operating systems like HP-UX Unix and compatible applications. Paul Weissman at OpenPA The emulation is complete enough that it can run X11 and CDE, and you can choose between emulating 32bit PA-RISC or 64bit PA-RISC. Device and peripheral support is a bit of a mixed bag, with things like USB being only partially supported, and audio not working at all since an audio chip commonly found in PA-RISC workstations isn't supported either. A number of SCSI and networking devices found on HP's workstations aren't supported either, and a few chipsets don't work at all. As far as operating system support goes, you can run HP-UX 10.20, HP-UX 11.00, Linux, and NetBSD. Newer (11i v1 and later) and older (9.07 and 9.05) versions of HP-UX don't work, and neither does NeXTSTEP 3.3. Some of these issues probably stem from missing QEMU drivers, others from a lack of testing; PA-RISC is, after all, not one of the most popular of the dead UNIX architectures, with things like SPARC and MIPS usually taking most of the spotlight. Absolutely nothing beats running operating systems on the bare metal they're designed for, but with PA-RISC hardware becoming ever harder to obtain, it makes sense for emulation efforts to pick up speed so more people can explore HP-UX. 
I'm weirdly into HP-UX, despite its reputation as a difficult platform to work with, so I personally really want actual hardware, but for most of you, getting HP-UX 11i to work properly on QEMU is most likely the only way you will ever experience this commercial UNIX.
A systemd-sysupdate plugin for GNOME Software
In late June 2024 I got asked to take over the work started by Jerry Wu creating a systemd-sysupdate plugin for Software. The goal was to allow Software to update sysupdate targets, such as base system images or system extension images, all while respecting the user's preferences such as whether to download updates on metered connections. To do so, the plugin communicates with the systemd-sysupdated daemon via its org.freedesktop.sysupdate1 D-Bus interface. I didn't know many of the things required to complete this project and it's been a lot to chew in one bite for me, hence how long it took to complete. Adrien Plazas This new plugin was one of the final pieces of moving GNOME OS - which we talked about before - from OSTree to sysupdate, which in turn is part of GNOME OS' effort to have a fully trusted boot sequence. While I'm not sure GNOME OS is something that will find a lot of uptake among the kind of people that read OSNews, I think it's a hugely important effort to create a no-nonsense, easy-to-use Linux system for normal people to embrace. The Steam Deck employs a similar approach, and it's easy to see why.
The Tasmania LAN party photos archive reminded me of my terrible teenage fashion choices
I've never been to a LAN party, not even back in the '90s and early 2000s when they were quite the common occurrence. Both my family and various friends did have multiple computers in the house, so I do have fond memories of hooking up computers through null modem cables to play Rise of the Triad, later superseded by direct Ethernet connections to play newer games. LAN parties have left lasting impressions on those that regularly attended them, but since most took place before the era of ever-present digital cameras and smartphones, photos of such events are rarer than they should be. Luckily, Australian software engineer Issung did a lot of digging and eventually struck gold: a massive collection of photos and a few videos from LAN parties that took place between 1996 and 2010 in Australia. After trying a few other timestamps and a few more web searches I sadly couldn't find anything. As a last ditch effort I made a few posts on various forums, including the long dormant Dark-Media Steam group, then I forgot about it all, until 2 months ago! Someone reached out and was able to get me into a small private Facebook group, once in I could see I had gotten more than I bargained for! I was just looking for Dark-Media photos, but found another regular LAN I had forgotten about, and photos from even more LANs from the late 90s. I was able to scrape all the photos and now upload them to archive.org where they can hopefully live forever. Issung I love browsing through these, as they bring back so many memories of the computers and dubious fashion choices of my teenage years - I used to combine different-coloured zip-off pants, and even had mohawks, spikes, and god knows what else before I started losing all my hair in my very early 20s. Anyway, the biggest change is the arrival of flat displays signalling the end of the widespread use of CRTs, and the slow disappearance of beige in favour of black. Such a joy to see the trends change in these photos. 
If anyone else is sitting on treasure troves like these, be sure to share them with the world before it's too late.
Microsoft puts an “AI” in a shell’s split view
AI Shell is an interactive shell that provides a chat interface with language models. The shell provides agents that connect to different AI models and other assistance providers. Users can interact with the agents in a conversational manner. Microsoft Learn Basically, what Microsoft means with this is a split-view terminal where one of the two views is a prompt where you can ask questions to an "AI", like OpenAI or whatever. The "AI" features are not actually integrated into your shell, which instead lives in the other view and acts like a completely normal, standard shell. Instead of opening up an "AI" chatbot in a browser window or whatever, you now have it in a split view in your terminal - that's really all there is to it here. I'm going to blow your mind here and say that in theory, this could be an actually useful addition to terminals and shells, as a properly vetted and configured "AI" that has been trained on properly obtained source material could indeed be a great help in determining the right terminal commands and options. Tons of people already blindly copy and paste terminal commands from websites even though they really shouldn't anyway, so it's not like this introduces anything new here in terms of dangers. Hell, tutorial writers still add -y to dnf or apt-get commands, so it can really only go up from here.
ASUS UEFI force-installs and reinstalls shovelware on Windows and it’s spamming users with Christmas wishes
I didn't have the time to post this one before Christmas, but it's so funny and sad at the same time I don't want to keep this from you. It turns out that in the days leading up to Christmas this year, users of ASUS computers - or with ASUS motherboards, I guess - were greeted with a black bar covering about a third of their screen, decorated with a Christmas wreath. I am making this post for the sake of people like me who will have a black box show up at the bottom of their screen with a Christmas wreath labeled "christmas.exe" in task manager and think it's Windows 10/11 malware. It is not. It is from the ASUS Armoury Crate program and can be safely closed and ignored. It looks super sketchy and will hopefully save you some time diagnosing the problem. Slow-Macaroon9630 on reddit So yes, if you're using an ASUS computer and have their shovelware installed, you may have been greeted by a giant black banner caused by an executable called "christmas.exe", which sounds exactly like something shitty malware would do. The banner would disappear after a while, and the executable would vanish from the list of running processes as well. It turns out there's a similar seasonal greeting called "HappyNewYear.exe", so if you haven't done anything to address the first black bar, you might be getting a second one soon. The fact that shitty OEM shovelware does this kind of garbage on Windows is nothing new - class is not something you can accuse Windows of having - but I was surprised to find out just how deeply embedded this ASUS shovelware program called Armoury Crate really is. It doesn't just come preinstalled on ASUS computers - no, this garbage program actually has roots in your motherboard's firmware. If you merely uninstall Armoury Crate from Windows, it will automatically reinstall itself because your motherboard's firmware tells it to. I'm not joking. 
To prevent Armoury Crate from reinstalling itself, you have to reboot your PC into its UEFI, go to the Advanced Mode, go to Tool > ASUS Armoury Crate, and disable the "Download & Install ARMOURY CRATE app" option. I had no idea Windows hardware makers had sunk to this kind of low, but I'm also not surprised. If Microsoft shoves endless amounts of ads and shovelware on people's computers, why can't OEMs?
CobolCraft: a Minecraft server written in COBOL
COBOL, your mother's and grandmother's programming language, is still in relatively wide use today, and with the initial batches of COBOL programmers retiring and, well, going away, there's a market out there for younger people to learn COBOL and gain some serious job security in stable, but perhaps boring market segments. One of the things you would not associate with COBOL, however, is gaming - but it turns out it can be used for that, too. CobolCraft is a Minecraft server written in, you guessed it, COBOL. It was developed using GnuCOBOL, and only works on Linux - Windows and macOS are not supported, but it can be run using Electron for developers, otherwise known as Docker. It's only compatible with the latest release of Minecraft at the time of CobolCraft's development, version 1.21.4, and a few more complex blocks with states are not yet supported because of how difficult it is to program those using COBOL. CobolCraft's creator, Fabian Meyer, explains why he started this project: Well, there are quite a lot of rumors and stigma surrounding COBOL. This intrigued me to find out more about this language, which is best done with some sort of project, in my opinion. You heard right - I had no prior COBOL experience going into this. Writing a Minecraft server was perhaps not the best idea for a first COBOL project, since COBOL is intended for business applications, not low-level data manipulation (bits and bytes) which the Minecraft protocol needs lots of. However, quitting before having a working prototype was not on the table! A lot of this functionality had to be implemented completely from scratch, but with some clever programming, data encoding and decoding is not just fully working, but also quite performant. Fabian Meyer I don't know much about programming, but I do grasp that this is a pretty crazy thing to do, and quite the achievement to get working this well, too. 
Do note that this isn't a complete implementation of the Minecraft server, with certain more complex blocks not working, and things like a lighting engine not being made yet either. This doesn't detract from the achievement, but it does mean you won't be playing regular Minecraft with this for a while yet - if ever, if this remains a fun hobby project for its creator.
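To give a sense of the "bits and bytes" work Meyer is referring to: most numbers in the Minecraft protocol are VarInts, packed 7 bits per byte with the high bit as a continuation flag. Here's a minimal Python decoder for that format - my own illustration of the encoding, not code from CobolCraft, which has to do this in COBOL:

```python
def read_varint(data: bytes) -> tuple[int, int]:
    """Decode a Minecraft-protocol VarInt: 7 payload bits per byte,
    least-significant group first, high bit set = more bytes follow.
    Returns (value, bytes_consumed)."""
    value = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            # Minecraft VarInts are signed 32-bit two's complement
            if value >= 1 << 31:
                value -= 1 << 32
            return value, i + 1
    raise ValueError("truncated VarInt")

print(read_varint(bytes([0xDD, 0xC7, 0x01])))  # (25565, 3)
```

Decoding 25565 (the default Minecraft port) takes three bytes here; negative numbers always take the full five, which is part of why hand-rolling this in a language built for fixed-layout business records is such an odd fit.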
The Moxie child support robot gets new lease on life through open source
It's a Christmas miracle! The Moxie, that support robot thing for kids we talked about two weeks ago, seems to be getting a new lease on life. The start-up that makes the Moxie has announced it's going to not only release a version of the server software for self-hosting, but will also publish all of the source code as open source. We understand how unsettling and disappointing it has been to face the possibility of losing the daily comfort and support Moxie provides. Since the onset of these recent challenges, many of you have voiced heartfelt concerns and offered suggestions, and we have taken that feedback seriously. While our cloud services may become unavailable, a group of former technical team members from Embodied is working on a potential solution to allow Moxie to operate locally, without the need for ongoing cloud connectivity. This initiative involves developing a local server application ("OpenMoxie") that you can run on your own computer. Once available, this community-driven option will enable you (or technically inclined individuals) to maintain Moxie's basic functionality, develop new features, and modify her capabilities to better suit your needs, without reliance on Embodied's cloud servers. Paolo Pirjanian Having products like this be dependent on internet connectivity is not great, but as long as Silicon Valley is the way it is, that's not going to change. You can tell from their efforts that the people at Embodied do genuinely care about their product and the people that use it, because they have zero - absolutely zero - financial incentive or legal obligation to do any of this. They could've just walked away like their original communication said they were going to, but instead they listened to their customers and changed their minds. Regardless of my thoughts on requiring internet connectivity for something like this, they at least did the right thing today - and I commend them for that.
Never forgive them
The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal to juice as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I'm stunned by the extremes tech companies are going to extract value from customers, but also by the insidious way they've gradually degraded their products. Ed Zitron This is the reality we're all living in, and it's obvious from any casual computer use, or talking to anyone who uses computers, just how absolutely dreadful using the mainstream platforms and services has become. Google Search has become useless, DuckDuckGo is being overrun with "AI"-generated slop, Windows is the operating system equivalent of this, Apple doesn't even know how to make a settings application anymore, iOS is yelling at you about all the Apple subscriptions you don't have yet, Android is adding "AI" to its damn file manager, and the web is unusable without aggressive ad blocking. And all of this is not only eating up our computers' resources, it's also actively accelerating the destruction of our planet, just so lazy people can generate terrible images where people have six fingers. I'm becoming more and more extreme in my complete and utter dismissal of the major tech companies, and I'm putting more and more effort into taking back control over the digital aspects of my life wherever possible. Not using Windows or macOS has improved the user experience of my PCs and laptops by incredible amounts, and moving from Google's Android to GrapheneOS has made my smartphone feel more like it's actually mine than ever before. 
Using technology products and services made by people who actually care and have morals and values that don't revolve around unending greed is having a hugely positive impact on my life, and I'm at the point now where I'd rather not have a smartphone or computer than be forced to use trashware like Windows, macOS, or iOS. The backlash against shitty technology companies and their abusive practices is definitely growing, and while it hasn't exploded into the mainstream just yet, I think we're only a few more shitty iOS updates and useless Android "AI" features away from a more general uprising against the major technology platforms. There's a reason laws like the DMA are so overwhelmingly popular, and I feel like this is only the beginning.
What does APPEND do in DOS?
The working principle of APPEND is not complicated. It primarily serves as a bridge between old DOS applications which have no or poor support for directories, and users who really, really want to organize files and programs in multiple directories and possibly across multiple drive letters. Of course the actual APPEND implementation is anything but straightforward. Michal Necasek Another gem of an article by Michal Necasek, detailing a command I've known about almost all my life but never once knew what it was supposed to be for. The gist is that APPEND allows for files to be opened not only in the current working directory, but also up to two levels deeper. This gives you a rudimentary way of working with directories, even when using programs or commands that have no clue what directories even are. Since DOS 1.x doesn't support directories, but DOS 2.x does, having a tool like this to create a bridge between the pre- and post-directory worlds can be quite useful. I've basically learned more about DOS from Necasek's work in the past few years than I learned about DOS when I was actively using it in the early '90s.
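The principle is easy to mimic in a few lines: keep a list of extra directories and retry any plain-filename lookup against each of them. A Python sketch of that lookup order follows - purely my own illustration of the concept, nothing like the real DOS implementation, which has to transparently intercept file-open system calls (which is exactly why Necasek calls it anything but straightforward):

```python
import os

def append_open(filename: str, append_dirs: list[str], mode: str = "r"):
    """Emulate the spirit of DOS APPEND: if a file isn't found where
    the program asked for it, retry the name against each directory
    on the APPEND list, in order, and open the first match."""
    candidates = [filename] + [os.path.join(d, filename) for d in append_dirs]
    for path in candidates:
        if os.path.exists(path):
            return open(path, mode)
    raise FileNotFoundError(filename)
```

A directory-oblivious program can then ask for plain "data.txt" and still find it in, say, a tools directory elsewhere on the disk, which is the whole point of the command.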
T2 Linux takes weird architectures seriously, including my beloved PA-RISC
With more and more Linux distributions - as well as the kernel itself - dropping support for more exotic, often dead architectures, it's a blessing T2 Linux exists. This unique, source-based Linux distribution focuses on making it as easy as possible to build a Linux installation tailored to your needs, and supports an absolutely insane number of architectures and platforms. In fact, calling T2 a "distribution" does it a bit of a disservice, since it's much more than that. You may have noticed the banner at the top of OSNews, and if we somehow - unlikely! - manage to reach that goal before the two remaining new-in-box HP c8000 PA-RISC workstations on eBay are sold, my plan is indeed to run HP-UX as my only operating system for a week, because I like inflicting pain on myself. However, I also intend to use that machine to see just how far T2 Linux on PA-RISC can take me, and if it can make a machine like the c8000, which is plenty powerful with its two dual-core 1.0GHz PA-RISC processors, properly useful in 2024. T2 Linux 24.12 has just been released, and it brings with it the latest versions of the Linux kernel, gcc, LLVM/Clang, and so on. With T2 Linux, which describes itself as a "System Development Environment", it's very easy to spin up a heavily customised Linux installation fit for your purpose, targeting anything from absolutely resource-starved embedded systems to big hunks of, I don't know, SPARC or POWER metal. If you've got hardware with a processor in it, you can most likely build T2 for it. The project also provides a large number of pre-built ISOs for a whole slew of supported architectures, sometimes further divided into glibc or musl builds, so you can quickly get started even without having to build something yourself. It's an utterly unique project that deserves more attention than it's getting, especially since it seems to be one of the last "Linux distributions" that takes supporting weird platforms out-of-the-box seriously. 
Think of it as the NetBSD of the Linux world, and I know for a fact that there's a very particular type of person to whom that really appeals.
Intel admits it no longer controls the direction of x86
Remember x86S, Intel's initiative to create a 64bit-only x86 instruction set, with the goal of removing some of the bloat that the venerable architecture accumulated over the decades? Well, this initiative is now dead, and more or less replaced with the x86 Ecosystem Advisory Group, a collection of companies with a stake in keeping x86 going. Most notably, this includes Intel and AMD, but also other tech giants like Google. In the first sign of changes to come after the formation of a new industry group, Intel has confirmed to Tom's Hardware that it is no longer working on the x86S specification. The decision comes after Intel announced the formation of the x86 Ecosystem Advisory Group, which brings together Intel, AMD, Google, and numerous other industry stalwarts to define the future of the x86 instruction set. Intel originally announced its intentions to de-bloat the x86 instruction set by developing a simplified 64-bit mode-only x86S version, publishing a draft specification in May 2023, and then updating it to a 1.2 revision in June of this year. Now, the company says it has officially ended that initiative. Paul Alcorn This seems like an acknowledgement of the reality that Intel is no longer in the position it once was when it comes to steering the direction of x86. It's AMD that's doing most of the heavy lifting for the architecture at the moment, and it's been doing that for a while now, with little sign that's going to change. I doubt Intel had enough clout left to push something as relatively drastic as x86S, and now has to rely on building consensus with other companies invested in x86. It may seem like a small thing, and I doubt many larger tech outlets will care, but this story is definitely the biggest sign yet that Intel is in a lot more trouble than its products and market performance alone would suggest. 
What we have here is a full admission by Intel that they no longer control the direction of x86, and have to rely on the rest of the industry to help them. That's absolutely wild.
NetBSD 10.1 released
NetBSD 10.1 has been released. As the version number indicates, this isn't supposed to be a major, groundbreaking release, but it still contains a ton of changes, fixes, and improvements. It's got the usual set of new and improved drivers, kernel improvements - like the ability to hotplug spares and components in a RAID - and improvements for various specific architectures, and much more. If you're using NetBSD you already know how to upgrade, and if you're not yet using NetBSD, here's the download page for the various supported architectures. There are a lot of them.
The European Commission’s proposed interoperability measures place Apple under a form of guardianship
What's the European Commission to do when one of the largest corporations in the world has not only been breaking its laws continually, but also absolutely refuses to comply, uses poison pills in its malicious compliance, and badmouths you in the press through both official and unofficial employees? Well, you start telling that corporation exactly what it needs to do to comply, down to the most minute implementation details, and in the process take away any form of wiggle room. Steven Troughton-Smith, an absolute wizard when it comes to the inner workings of Apple's various platforms and all-round awesome person, dove into the European Commission's proposed next steps when it comes to dealing with Apple's refusal to comply with EU law - the Digital Markets Act, in particular - and it's crystal-clear that the EC is taking absolutely no prisoners. They're not only telling Apple exactly what kind of interoperability measures it must take, down to the API level, but they're also explicitly prohibiting Apple from playing games through complex contracts and nebulous terms to try and make interoperability a massive burden. As an example of just how detailed the EC is getting with Apple, here's what the company needs to do to make AirDrop interoperable: Apple shall provide a protocol specification that gives third parties all information required to integrate, access, and control the AirDrop protocol within an application or service (including as part of the operating system) running on a third-party connected physical device in order to allow these applications and services to send files to, and receive files from, an iOS device. 
European Commission In addition, Apple must make any new features or changes to AirDrop available to third parties at the same time as it releases them: For future functionalities of or updates to the AirDrop feature, Apple shall make them available to third parties no later than at the time they are made available to any Apple connected physical device. European Commission These specific quotes only cover AirDrop, but similar demands are made about things like AirPlay, the easy pairing process currently reserved for Apple's own accessories, and so on. I highly suggest reading the source document, or at the very least the excellent summary thread by Steven, to get an even better idea of what the EC is demanding here. The changes must be made in the next major version of iOS, or at the very latest before the end of 2025. The EC really goes into excruciating detail about how Apple is supposed to implement these interoperability features, and leaves little to no wiggle room for Apple shenanigans. The EC is also clearly fed up with Apple's malicious compliance and other tactics to violate the spirit of the DMA: Apple shall not impose any restrictions on the type or use case of the software application and connected physical device that can access or make use of the features listed in this Document. Apple shall not undermine effective interoperability with the 11 features set out in this Document by behaviour of a technical nature. In particular, Apple shall actively take all the necessary actions to allow effective interoperability with these features. Apple shall not impose any contractual or commercial restrictions that would be opaque, unfair, unreasonable, or discriminatory towards third parties or otherwise defeat the purpose of enabling effective interoperability. In particular, Apple shall not restrict business users, directly or indirectly, to make use of any interoperability solution in their existing apps via an automatic update. 
European Commission What I find most interesting about all of this is that it could have been so easily avoided by Apple. Had Apple approached the EU and the DMA with the same kind of respect, grace, and love Apple and Tim Cook clearly reserve for totalitarian dictatorships like China, Apple could've enabled interoperability in such a way that it would still align with most of Apple's interests. They would've avoided the endless stream of negative press this fruitless "fight" with the EU is generating, and it would've barely impacted Apple's bottom line. Put it on one of those Apple microsites that capture your scrolling, boast about how amazing Apple is and how much they love interoperability, and it most likely would've been a massive PR win. Instead, under the mistaken impression that this is a business negotiation, Apple tried to cry, whine, throw temper tantrums, and just generally act like a horrible spoiled brat, just because someone far, far more powerful than they are told them "no" for once. Now they've effectively been placed under guardianship, and have to do exactly as the European Commission tells them to, down to the API level, without any freedom to make their own choices. The good thing is that the EC's journey to make iOS a better and more capable operating system continues. We all benefit. Well, us EU citizens, anyway.
Thanks again to our outgoing sponsor: OS-SCi
We're grateful for our weekly sponsor, OpenSource Science B.V., an educational institution focused on Open Source software. OS-SCi is training the next generation of FOSS engineers, by using Open Source technologies and philosophy in a project learning environment. One final reminder: OS-SCi is offering OSNews readers a free / gratis online masterclass by Prof. Ir. Erik Mols on how the proprietary ecosystem is killing itself. This is a live event, on January 9, 2025 at 17:00 CET. Sign up here.
POSIX conformance testing for the Redox signals project
The Redox team has received a grant from NLnet to develop Redox OS Unix-style Signals, moving the bulk of signal management to userspace, and making signals more consistent with the POSIX concepts of signaling for processes and threads. It also includes Process Lifecycle and Process Management aspects. As a part of that project, we are developing tests to verify that the new functionality is in reasonable compliance with the POSIX.1-2024 standard. This report describes the state of POSIX conformance testing, specifically in the context of Signals. Ron Williams This is the kind of dry, but important matter a select few of you will fawn over. Consider it my Christmas present for you. There's also a shorter update on the dynamic linker in Redox, which also goes into some considerable detail about how it works, and what progress has been made.
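For readers unfamiliar with the mechanics being tested, the basic POSIX contract is small: a process installs a handler, a signal is delivered, and the handler runs asynchronously in that process. A minimal illustration (in Python for brevity; Redox's own work is in Rust and at a much lower level):

```python
import os
import signal

received = []

def handler(signum, frame):
    # Runs when the signal is delivered to this process.
    received.append(signum)

# Install a handler for SIGUSR1, then send the signal to ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
```

The conformance tests Williams describes exercise behaviours like this — plus the far hairier corners of masking, queuing, and thread-directed delivery — against POSIX.1-2024.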
How to make an Apple Watch work with Android
What if you have an Android phone, but consider the Apple Watch superior to other smartwatches? Well, you could switch to iOS, or, you know, you could hack your way into making an Apple Watch work with Android, like Abishek Muthian did. So I decided to make Apple Watch work with my Android phone using open-source applications, interoperable protocols and 3rd party services. If you just want to use my code and techniques and not read my commentary on it then feel free to checkout my GitHub for sources. Abishek Muthian Getting notifications to work, so that notifications from the Android phone would show up on the Apple Watch, was the hardest part. Muthian had to write a Python script to read the notifications on the Android device using Termux, and then use Pushover to send them to the Apple Watch. For things like contacts and calendar, he relied on *DAV, which isn't exactly difficult to set up, so pretty much anyone who's reading this can do that. Sadly, initial setup of the watch did require the use of an iPhone, using the same SIM as is in the Android phone. This way, it's possible to set up mobile data as well as calling, and with the SIM back in the Android phone, a call will show up on both the Apple Watch and the Android device. Of course, this initial setup makes the process a bit more cumbersome than just buying a used Apple Watch off eBay or whatever, but I'm honestly surprised everything's working as well as it does. This goes to show that the Apple Watch is not nearly as deeply "integrated" with the iPhone as Apple so loves to claim, and making the Apple Watch work with Android in a more official manner certainly doesn't look to be as impossible as Apple makes it out to be when dealing with antitrust regulators. Of course, any official support would be much more involved, especially in the testing department, but it would be absolute peanuts, financially, for a company with Apple's disgusting level of wealth. 
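The Pushover leg of that pipeline is just an HTTP POST. A hypothetical, stripped-down version of the forwarding step might look like this (the token and user key are placeholders, and this is a sketch, not Muthian's actual script):

```python
import urllib.parse
import urllib.request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_payload(token, user, title, message):
    """URL-encode a Pushover message body."""
    return urllib.parse.urlencode({
        "token": token,      # application API token (placeholder)
        "user": user,        # user key (placeholder)
        "title": title,
        "message": message,
    }).encode()

def forward_notification(token, user, title, message):
    """POST one Android notification to Pushover, which relays it
    to any device running a Pushover client."""
    req = urllib.request.Request(PUSHOVER_URL,
                                 data=build_payload(token, user, title, message))
    return urllib.request.urlopen(req)
```

Run in a loop against Termux's notification feed, something along these lines is enough to get Android notifications mirrored onto the watch.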
Anyway, if you want to set up an Apple Watch with Android, Muthian has put the code on GitHub.
A quick look at OS/2’s built-in virtualisation
Most of us are aware that IBM's OS/2 has excellent compatibility with DOS and Windows 3.x programs, to the point where OS/2 just ships with an entire installation of Windows 3.x built-in that you can run multiple instances of. In fact, to this day, ArcaOS, the current incarnation of the maintained and slightly modernised OS/2 codebase, still comes with an entire copy of Windows 3.x, making ArcaOS one of the very best ways to run DOS and Windows 3.x programs on a modern machine, without resorting to VMware or VirtualBox. Peter Hofmann took a look at one of the earlier versions of OS/2 - version 2.1 from 1993 - to see how its DOS compatibility actually works, or more specifically, the "DOS from drive A:" feature. You can insert a bootable DOS floppy and then run that DOS in a new window. Since this is called "DOS from drive A:", surely this is something DOS-specific, right? Maybe it only supports MS-DOS or even only PC DOS? Far from it, apparently. Peter Hofmann Hofmann wrote a little test program using nothing but BIOS system calls, meaning it doesn't use any DOS system calls. This "real mode BIOS program" can run from the boot sector, if you wanted to, so after combining his test program with a floppy disk boot record, you end up with a bootable floppy that runs the test program, for instance in QEMU. After a bit of work, the test program on the bootable floppy will work just fine using OS/2's "DOS from drive A:" feature, even though it shouldn't. What this seems to imply is that this functionality in OS/2 2.1 looks a lot like a hypervisor, or as Hofmann puts it, "basically a built-in QEMU that anybody with a 386 could use". That's pretty advanced for the time, and raises a whole bunch of questions about just how much you can do with this.
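The "combining" step is mostly padding: a PC boot sector is one 512-byte sector ending in the signature bytes 0x55 0xAA, which the BIOS checks before jumping to the code. A hypothetical helper (not Hofmann's actual tooling) that wraps raw real-mode code into a bootable sector image:

```python
SECTOR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"

def make_boot_sector(code):
    """Pad real-mode machine code out to one 512-byte sector and
    append the 0x55AA signature the BIOS looks for at boot."""
    if len(code) > SECTOR_SIZE - len(BOOT_SIGNATURE):
        raise ValueError("code does not fit in a single boot sector")
    padding = b"\x00" * (SECTOR_SIZE - len(BOOT_SIGNATURE) - len(code))
    return code + padding + BOOT_SIGNATURE
```

An image built this way can be fed straight to an emulator as a floppy image — or, per Hofmann's experiment, booted via OS/2's "DOS from drive A:".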
Fedora proposes dropping Atomic desktops for PPC64LE
Fedora is proposing to stop building their Atomic desktop versions for PPC64LE. PowerPC 64 LE basically comes down to IBM's POWER architecture, and as far as desktop use goes, that exclusively means the POWER9 machines from Raptor Computing Systems. I reviewed their small single-socket Blackbird machine in 2021, and I also have their dual-socket Talos II workstation. I can tell you from experience that nobody who owns one of these is opting for an immutable Fedora variant, and on top of that, these machines are getting long in the tooth. Raptor passed on POWER10 because it required proprietary firmware, so we've been without new machines for years now. As such, it makes sense for Fedora to stop building Atomic desktops for this architecture. We will stop building the Fedora Atomic Desktops for the PowerPC 64 LE architecture. According to the "count me" statistics, we don't have any Atomic Desktops users on PPC64LE. Users of Atomic Desktops on PPC64LE will have to either switch back to a Fedora package mode installation or build their own images using Bootable Containers, which are available for PPC64LE. Timothee Ravier I've never written much about the Talos II, surmising that most of my Blackbird review applies to the Talos II as well. If there's interest, I can check to see what the current state of Fedora and/or other distributions on POWER9 is, and write a short review about the experience. I honestly don't know if there's much interest at this point in POWER9, but if there is, here's your chance to get your questions answered.
Microsoft Recall screenshots credit cards and Social Security numbers, even with the “sensitive information” filter enabled
Microsoft's Recall feature recently made its way back to Windows Insiders after having been pulled from test builds back in June, due to security and privacy concerns. The new version of Recall encrypts the screens it captures and, by default, it has a "Filter sensitive information" setting enabled, which is supposed to prevent it from recording any app or website that is showing credit card numbers, social security numbers, or other important financial / personal info. In my tests, however, this filter only worked in some situations (on two e-commerce sites), leaving a gaping hole in the protection it promises. Avram Piltch at Tom's Hardware Recall might be one of the biggest own goals I have seen in recent technology history. In fact, it's more of a series of own goals that just keep on coming, and I honestly have no idea why Microsoft keeps making them, other than the fact that they're so high on their own "AI" supply that they've lost all touch with reality at this point. There's some serious Longhorn-esque tunnel vision here, a project during which the company also kind of forgot the outside world existed beyond the walls of Microsoft's Redmond headquarters. It's clear by now that just like many other tech companies, Microsoft is so utterly convinced it needs to shove "AI" into every corner of its products that it no longer seems to be asking the most important question during product development: do people actually want this? The response to Windows Recall has been particularly negative, yet Microsoft keeps pushing and pushing it, making all the mistakes along the way everybody has warned them about. It's astonishing just how dedicated they are to a feature nobody seems to want, and everybody seems to warn them about. It's like we're all Cassandra. The issue in question here is exactly as dumb as you expect it to be. 
The "Filter sensitive information" setting is so absurdly basic and dumb that it basically only seems to work on shopping sites, not anywhere else where credit card or other sensitive information might be shown. This shortcoming is obvious to anyone who thinks about what Recall does for more than one nanosecond, but Microsoft clearly didn't take a few moments to think about this, because its response is to ask users to report, through the Feedback Hub, any time Recall fails to detect sensitive information. They're basically asking you, the consumer, to be the filter. Unpaid, of course. After the damage has already been done. Wild. If you can ditch Windows, you should. Windows is not a place of honour.
Fedora’s new Btrfs SIG should focus on making Btrfs’ features more accessible
As Michel Lind mentioned back in August, we wanted to form a Special Interest Group to further the development and adoption of Btrfs in Fedora. As of yesterday, the SIG is now formed. Neal Gompa Since I've been using Fedora on all my machines for a while now, I've also been using Btrfs as my one and only file system for just as much time, without ever experiencing any issues. In fact, I recently ordered four used 4TB enterprise hard drives (used, yes, but with zero SMART issues) to set up a storage pool where I can download my favourite YouTube playlists, so I don't have to rely on internet connectivity and YouTube not being shit. I combined the four drives into a single 16TB Btrfs volume, and it's working flawlessly. Of course, not having any redundancy is a terrible idea, but I didn't care much since it's just downloaded YouTube videos. However, it's all working so flawlessly, and the four drives were so cheap, that I'm going to order another four drives and turn the whole thing into a 16TB Btrfs volume using one of the Btrfs RAID profiles for proper redundancy, even if it "costs" me half of the 32TB of total storage. This way, I can also use it as an additional backup for more sensitive data, which is never a bad thing. The one big downside here is that all of this has to be set up and configured using the command line. While that makes sense in a server environment and I had no issues doing so, I think a product that calls itself Fedora Workstation (or, in my case, Fedora KDE, but the point stands) should have proper graphical tools for managing the file system it uses. Fedora should come with a graphical utility to set up, manage, and maintain Btrfs volumes, so you don't have to memorise a bunch of arcane commands. I know a lot of people get very upset when you even suggest something like this, but that's just elitist nonsense. 
Btrfs has various incredibly useful features that should be exposed to users of all kinds, not just sysadmins and weird nerds - and graphical tools are a great way to do this. I don't know exactly what the long-term plans of the new Btrfs SIG are going to be, but I think making the useful features of Btrfs more accessible should definitely be on the list. You shouldn't need to be a CLI expert to set up resilient, redundant local storage on your machine, especially now that the interest in digital self-sufficiency is increasing.
There’s a market out there for a modern X11/Motif-based desktop distribution
EMWM is a fork of the Motif Window Manager with fixes and enhancements. The idea behind this is to provide compatibility with current xorg extensions and applications, without changing the way the window manager looks and behaves. This includes support for multi-monitor setups through Xinerama/Xrandr, UTF-8 support with Xft fonts, and overall better compatibility with software that requires Extended Window Manager Hints. Additionally a couple of goodies are available in the separate utilities package: XmToolbox, a toolchest-like application launcher, which reads its multi-level menu structure from a simple plain-text file ~/.toolboxrc, and XmSm, a simple session manager that provides session configuration, locking and shutdown/suspend options. EMWM homepage I had never heard of EMWM, but I immediately like it. This same developer, Alexander Pampuchin, also develops XFile, a file manager for X11 which presents the file system as it actually is, instead of using a bunch of "imaginary" locations to hide the truth, if you will. On top of that, they also develop XImaging, a comprehensive image viewer for X11. All of these use the Motif widget toolkit, focus on plain X11, and run on most Linux distributions and BSDs. They need to be compiled by the user, most likely. I am convinced that there is a small but sustainable audience for a modern, up-to-date Linux distribution (although a BSD would work just as well) that, instead of offering GNOME, KDE, Xfce, or whatever, focuses on delivering a traditional, yet modernised and maintained, desktop environment and applications using not much more than X11 and Motif, eschewing more complex stuff like GTK, Qt, systemd, Wayland, and so on. I would use the hell out of a system that gives me a version of the Motif-based desktops like CDE from the '90s, but with some modern amenities, current hardware support, support for high-resolution displays, and so on. 
You can certainly grab bits and bobs left and right from the web and build something like this from scratch, but not everyone has the skills and time to do so, yet I think there are enough people out there craving something like this. There's tons of maintained X11/Motif software out there - it's just all spread out, disorganised, and difficult to assemble, because it almost always means compiling it all from scratch, and most people simply don't have the time and energy for that. Package this up on a solid Debian, Fedora, or FreeBSD base, and I think you'd have quite a few users lining up.