Apple's first-generation Vision Pro headset may have now ceased production, following reports of reduced demand and production cuts earlier in the year. Hartley Charlton at MacRumors I think we'll live.
The RTX 5090 and RTX 5080 are receiving their final updates. According to two highly reliable leakers, the RTX 5090 is officially a 575W TDP model, confirming that the new SKU requires significantly more power than its predecessor, the RTX 4090 with a TDP of 450W. According to Kopite, there has also been an update to the RTX 5080 specifications. While the card was long rumored to have a 400W TDP, the final figure is now set at 360W. This change is likely because NVIDIA has confirmed the TDP, as opposed to earlier TGP figures that are higher and represent the maximum power limit required by NVIDIA's specifications for board partners. WhyCry at VideoCardz.com These kinds of batshit insane GPU power requirements are eventually going to run into the limits of the kind of airflow an ATX case can provide. We're still putting the airflow stream of GPUs (bottom to top) perpendicular to the airflow through the case (front to back) like it's 1987, and you'd think at least someone would be thinking about addressing this - especially when a GPU is casually dumping this much heat into the constrained space within a computer case. I don't want more glass and gamer lights. I want case makers to hire at least one proper fluid dynamics engineer.
It is common knowledge that Final Fantasy could have been the last game in the series. It is far less known that Windows 2, released around the same time, could too have been the last. If anything, things were more certain: even Microsoft believed that Windows 2 would be the last. The miracle of overwhelming commercial success brought incredible attention to Windows. The retro community and computer historians generally seem to be interested in the legendary origins of the system (how it all began) or in its turnabout Windows 3.0 release (what did they do right?). This story instead will be about the underdog of Windows, version 2. To understand where it all went wrong, we must start looking at events that happened even before Microsoft was founded. By necessity, I will talk a lot about the origins of Windows, too. Instead of following interpersonal/corporate drama, I will try to focus on the technical aspects of Windows and its competitors, as well as the technological limitations of the computers around the time. Some details are so convoluted and obscure that even multiple Microsoft sources, including Raymond Chen, are wrong about essential technical details. It is going to be quite a journey, and it might seem a bit random, but I promise that eventually, it all will start making sense. Nina Kalinina I'm not going to waste your precious time with my stupid babbling when you could instead spend it reading this amazingly detailed, lovingly crafted, beautifully illustrated, and deeply in-depth article by Nina Kalinina about the history, development, and importance of Windows 2. She's delivered something special here, and it's a joy to read and stare at the screenshots from beginning to end. Don't forget to click on the little expander triangles for a ton of in-depth technical stuff and even more background information.
We've just entered the new year, and that means we're going to see some overviews about what the past year has brought. Today we're looking at AROS, as AROS News - great name, very classy, you've got good taste, don't change it - summarised AROS' 2024, and it's been a good year for the project. We don't hear a lot about AROS-proper, as the various AROS distributions are a more optimal way of getting to know the operating system and the project's communication hasn't always been great, but that doesn't mean they've been sitting still. Perhaps the most surprising amount of progress in 2024 was made in the move from 32bit to 64bit AROS. Deadwood also released a 64-bit version of the system (ABIv11) in a Linux hosted version (ABIv11 20241102-1) and AxRuntime version 41.12, which promises a complete switch to 64-bit in the near future. He has also developed a prototype emulator that will enable 64-bit AROS to run programs written for the 32-bit version of the system. Andrzej "retrofaza" Subocz at AROS News This is great news for AROS, as being stuck in 32bit isn't particularly future-proof. It might not pose many problems today, as older hardware remains available and 64bit x86 processors can handle running 32bit operating systems just fine, but you never know when that will change. In the same vein, Deadwood also released a 64bit version of Odyssey, the WebKit-based browser, which was updated this year from August 2015's WebKit to February 2019's WebKit. Sure, 2019 might still be a little outdated, but it does mean a ton of complex sites now work again on AROS, and that's a hugely positive development. Things like Python and GCC were also updated this year, and there was, as is fitting for an Amiga-inspired operating system, a lot of activity in the gaming world, including big updates to Doom 3 and ScummVM. This is just a selection of course, so be sure to read Subocz's entire summary at AROS News.
Do you think streaming platforms and other entities that employ DRM schemes use the TPM in your computer to decrypt stuff? Well, the Free Software Foundation seems to think so, and adds Microsoft's insistence on requiring a TPM for Windows 11 into the mix, but it turns out that's simply not true. I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff. What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU. Matthew Garrett A TPM is simply not designed to handle decryption of media streams, and even if it were, it's far, far too slow and underpowered to decode even a 1080p stream, let alone anything more demanding than that. In reality, DRM schemes like Google's Widevine, Apple's Fairplay, and Microsoft's Playready offer different levels of functionality, both in software and in hardware. The hardware DRM stuff is all done by the GPU, and not by the TPM. By focusing so much on the TPM, Garrett argues, the FSF is failing to see how GPU makers have enabled a ton of hardware DRM without anyone noticing. Personally, I totally understand why organisations like the Free Software Foundation are focusing on TPMs right now. 
They're one of the main reasons why people can't upgrade to Windows 11, it's the thing people have heard about, and it's the thing that'll soon prevent them from getting security updates for their otherwise perfectly fine machines. I'm not sure the FSF has enough clout these days to make any meaningful media impact, especially in more general, non-tech media, but by choosing the TPM as their focus they're definitely choosing a viable vector. Of course, over here in the tech corner, we don't like it when people are factually inaccurate or twisting and bending the truth, and I'm glad someone as knowledgeable as Garrett stepped up to set the record straight for us tech-focused people, while everyone else can continue to ignore this matter.
Launched in 1998, the 380Z was one very fine ThinkPad. It was the last ThinkPad to come in the classic bulky and rectangular form factor. It was also one of the first to feature a huge 13.3'' TFT display, powerful 233MHz Pentium II, and whopping 160 megs of RAM. I recently stumbled upon one in perfect condition on eBay, and immediately thought it'd be a cool vintage gadget to put on the desk. I only wondered if I could still use it for some slow-paced, distraction-free coding, using reasonably modern software. Luke's web space You know where this is going, right? I evaluated a bunch of contemporary operating systems, including different variants of BSD and Linux. Usually, the experience was underwhelming in terms of performance, hardware support and stability. Well... except for NetBSD, which gave me such a perfectly smooth ride that I thought it was worth sharing. Luke's web space Yeah, of course it was going to be NetBSD (again). This old laptop, too, can run X11 just fine, with the EMWM that we discussed quite recently - in fact, bringing up X required no configuration, and a simple startx was all it needed out of the box. For web browsing, Dillo works just great, and building it took only about 20 minutes. It can even play some low-fi music streams from the internet, only stuttering when doing other processor-intensive tasks. In other words, this little machine with NetBSD turns out to be a great machine for some distraction-free programming. Look, nobody is arguing that a machine like this is all you need. However, it can perform certain specific, basic tasks - anything being better than sending it to a toxic landfill, with all the transportation waste and child labour that entails. If you have old laptops lying around, you should think twice about just sending them to "recycling" (which is Western world speak for "send to a toxic landfill manned by children in poor countries"), since it might be quite easy to turn it into something useful, still.
Rare, hard to come by, but now available on the Internet Archive: the complete book set for the Windows CE Developer's Kit from 1999. It contains all the separate books in their full glory, so if you ever wanted to write either a Windows CE application or driver for Windows CE 2.0, here's all the information you'll ever need. The Microsoft Windows CE Developer's Kit provides all the information you need to write applications for devices based on the Microsoft Windows CE operating system. Windows CE Developer's Kit The Microsoft Windows CE Programmer's Guide details the architecture of the operating system, how to write applications, how to implement synchronisation with a PC, and much more that pertains to developing applications. The Microsoft Windows CE User Interface Services Guide can be seen as an important addition to the Programmer's Guide, as it details everything related to creating a GUI and how to handle various input methods. Going a few steps deeper, we arrive at the Microsoft Windows CE Communications Guide, which, as the name implies, tells you all you need to know about infrared connections, telephony, networking and internet connections, and related matters. Finally, we arrive at the Microsoft Windows CE Device Driver Kit, which, as the name implies, is for those of us interested in writing device drivers for Windows CE, something that will surely be of great importance in the future, since Windows CE is sure to dominate our mobile life. To get started, you do need to have Microsoft Visual C++ version 6.0 and the Microsoft Windows CE Toolkit for Visual C++ version 6.0 up and running, since all code samples in the Programmer's Guide are developed with it, but I'm sure you already have this taken care of - why would you be developing for any other platforms, am I right?
LineageOS, the Debian of the custom Android ROM world, released version 22 - or, 22.1 to be more exact - today. On the verge of the new year, they managed to complete the rebase to Android 15, released in September, making this one of their fastest rebases ever. We've been hard at work since Android 15's release in September, adapting our unique features to this new version of Android. Android 15 introduced several complex changes under the hood, but due to our previous efforts adapting to Google's UI-centric adjustments in Android 12 through 14, we were able to rebase onto Android 15's code-base faster than anticipated. Additionally, this is far-and-away the easiest bringup cycle from a device perspective we have seen in years. This means that many more devices are ready on day one than we'd typically expect to have up this early in the cycle! Nolen Johnson LineageOS is also changing its versioning scheme to better match that of Google's new quarterly Android releases, and that's why this new release is 22.1: it's based on Android 15 QPR1. In other words, the 22 aligns with the major Android version number, and the .1 with the QPR it's based on. LineageOS 22.1 brings all the same new features as Android 15 and QPR1, as well as two brand new applications: Twelve, a replacement for LineageOS' aging music player application, and Camelot, a new PDF reader. The list of supported devices is pretty good for a new LineageOS release, and adds the Pixel 9 series of devices right off the bat. LineageOS 22.1 ships with the November Android security patches, and also comes with a few low-level changes, like completely new extract utilities written in Python, which massively improve extracting performance, virtIO support, and much more.
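To make the new versioning scheme concrete, here's a toy sketch - my own illustration, not LineageOS code, and the function name and offset constant are mine - of how a release number like 22.1 is derived: the major number tracks the Android release (Android 15 maps to LineageOS 22, a fixed offset of 7), and the minor number tracks the QPR it's based on.

```python
# Hypothetical sketch of LineageOS' new versioning scheme.
# Assumption: the major number is the Android major release plus a
# fixed offset of 7 (Android 15 -> LineageOS 22), and the minor
# number is the QPR the release is based on.
ANDROID_TO_LINEAGE_OFFSET = 7

def lineage_version(android_major: int, qpr: int) -> str:
    """Compose a LineageOS version string from an Android release and QPR."""
    return f"{android_major + ANDROID_TO_LINEAGE_OFFSET}.{qpr}"

# Android 15 QPR1 -> "22.1", matching this release.
```

Under this scheme, the next quarterly release based on Android 15 QPR2 would presumably become 22.2, rather than waiting a full year for a major bump.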
We've talked about Chimera Linux before - it's a unique Linux distribution that combines a BSD userland with the LLVM/Clang toolchain, and musl. Its init system is dinit, and it uses apk-tools from Alpine as its package manager. None of this has anything to do with being anti-anything; the choice of BSD's tools and userland is mostly technical in nature. Chimera Linux is available for x86-64, AArch64, RISC-V, and POWER (both little and big endian). I am unreasonably excited for Chimera Linux, for a variety of reasons - first, I love the above set of choices they made, and second, Chimera Linux's founder and lead developer, q66, is a well-known and respected name in this space. She not only founded Chimera Linux, but also used to maintain the POWER/PowerPC ports of Void Linux, which is the port of Void Linux I used on my POWER9 hardware. She apparently also contributed quite a bit to Enlightenment, and is currently employed by Igalia, through which she can work on Chimera. With the description out of the way, here's the news: Chimera Linux has officially entered beta. Today we have updated apk-tools to an rc tag. With this, the project is now entering beta phase, after around a year and a half. In general, this does not actually mean much, as the project is rolling release and updates will simply keep coming. It is more of an acknowledgement of current status, though new images will be released in the coming days. Chimera Linux's website Despite my excitement, I haven't yet tried Chimera Linux myself, as I figured its pre-beta stage wasn't meant for an idiot like me who can't contribute anything meaningful, and I'd rather not clutter the airwaves. Now that it's entered beta, I feel like the time is getting riper and riper for me to dive in, and perhaps write about it here. Since the goal of Chimera Linux is to be a general-purpose distribution, I think I'm right in the proper demographic of users. 
It helps that I'm about to set up my dual-processor POWER9 machine again, and I think I'll be going with Chimera Linux. As a final note, you may have noticed I consistently refer to it as "Chimera Linux". This is very much on purpose, as there's also something called ChimeraOS, a more standard Linux distribution aimed at gaming. To avoid confusion, I figured I'd keep the naming clear and consistent.
Here are my notes on running NetBSD 10.1 on my first personal laptop that I still keep, a 1998 i586 Toshiba Satellite Pro with 81MB of RAM and a 1GB IBM 2.5'' IDE HD. In summary, the latest NetBSD runs well on this old hardware using an IDE to CF adapter and several changes to the i386 GENERIC kernel. Joel P. I don't think the BSD world - and NetBSD in particular - gets enough recognition for supporting both weird architectures and old hardware as well as it does. This here is a 26-year-old laptop running the latest version of NetBSD, working X11 server and all, while other operating systems drop support for devices only a few years old. So many devices could be saved from toxic landfills if only more people looked beyond Windows and macOS.
IncludeOS is an includable, minimal unikernel operating system for C++ services running in the cloud and on real HW. Starting a program with #include <os> will literally include a tiny operating system into your service during link-time. IncludeOS GitHub page IncludeOS isn't exactly the only one of its kind, but I've always been slightly mystified by what, exactly, unikernels are for. The gist is, as far as I understand it, that if you build an application using a unikernel, it will determine at compile time exactly what it needs from the operating system to run, and only those operating system bits will be linked into the resulting application. This can then be booted directly by a hypervisor. The advantages are clear: you don't have to deal with an entire operating system just to run that one application or service you need to provide, and the footprint is kept to a minimum because only the exact dependencies the application needs from the operating system are linked to it during compilation. The downsides are obvious too - you're not running an operating system, so it's far less flexible, and if issues are found in the unikernel, you're going to have to recompile the application and the operating system bits inside of it just to fix them (at least, I think that's the case - don't quote me on it). IncludeOS is under heavy development, so take that under advisement if you intend to use it for something serious. The last full release dates back to 2019, but it's still under development, as indicated by the GitHub activity. I hope it'll push out a new release soon.
While we're out here raising funds to make me daily-drive HP-UX 11i v1 - we're at 59% of the goal, so I'm starting to prepare for the pain - it seems you can actually run older versions, HP-UX 10.20 and 11.00 to be specific, in a virtual machine using QEMU. QEMU is an open source computer emulation and virtualization software, first released in 2003 by Fabrice Bellard. It supports many different computer systems and includes support for many RISC architectures besides x86. PA-RISC emulation has been included in QEMU since 2018. QEMU emulates a complete computer in software without the need for specific virtualization hardware. With QEMU, a full HP Visualize B160L and C3700 workstation can be emulated to run PA-RISC operating systems like HP-UX Unix and compatible applications. Paul Weissman at OpenPA The emulation is complete enough that it can run X11 and CDE, and you can choose between emulating 32bit PA-RISC or 64bit PA-RISC. Device and peripheral support is a bit of a mixed bag, with things like USB being only partially supported, and audio not working at all, since an audio chip commonly found in PA-RISC workstations isn't supported. A number of SCSI and networking devices found on HP's workstations aren't supported either, and a few chipsets don't work at all. As far as operating system support goes, you can run HP-UX 10.20, HP-UX 11.00, Linux, and NetBSD. Newer (11i v1 and later) and older (9.07 and 9.05) versions of HP-UX don't work, and neither does NeXTSTEP 3.3. Some of these issues probably stem from missing QEMU drivers, others from a lack of testing; PA-RISC is, after all, not one of the most popular of the dead UNIX architectures, with things like SPARC and MIPS usually taking most of the spotlight. Absolutely nothing beats running operating systems on the bare metal they're designed for, but with PA-RISC hardware becoming ever harder to obtain, it makes sense for emulation efforts to pick up speed so more people can explore HP-UX. 
I'm weirdly into HP-UX, despite its reputation as a difficult platform to work with, so I personally really want actual hardware, but for most of you, getting HP-UX to run properly on QEMU is most likely the only way you will ever experience this commercial UNIX.
In late June 2024 I got asked to take over the work started by Jerry Wu creating a systemd-sysupdate plugin for Software. The goal was to allow Software to update sysupdate targets, such as base system images or system extension images, all while respecting the user's preferences such as whether to download updates on metered connections. To do so, the plugin communicates with the systemd-sysupdated daemon via its org.freedesktop.sysupdate1 D-Bus interface. I didn't know many of the things required to complete this project and it's been a lot to chew in one bite for me, hence how long it took to complete. Adrien Plazas This new plugin was one of the final pieces of moving GNOME OS - which we talked about before - from OSTree to sysupdate, which in turn is part of GNOME OS' effort to have a fully trusted boot sequence. While I'm not sure GNOME OS is something that will find a lot of uptake among the kind of people that read OSNews, I think it's a hugely important effort to create a no-nonsense, easy-to-use Linux system for normal people to embrace. The Steam Deck employs similar implementations, and it's easy to see why.
I've never been to a LAN party, not even back in the '90s and early 2000s when they were quite the common occurrence. Both my family and various friends did have multiple computers in the house, so I do have fond memories of hooking up computers through null modem cables to play Rise of the Triad, later superseded by direct Ethernet connections to play newer games. LAN parties have left lasting impressions on those that regularly attended them, but since most took place before the era of ever-present digital cameras and smartphones, photos of such events are rarer than they should be. Luckily, Australian software engineer Issung did a lot of digging and eventually struck gold: a massive collection of photos and a few videos from LAN parties that took place from 1996 to 2010 in Australia. After trying a few other timestamps and a few more web searches I sadly couldn't find anything. As a last ditch effort I made a few posts on various forums, including the long dormant Dark-Media Steam group, then I forgot about it all, until 2 months ago! Someone reached out and was able to get me into a small private Facebook group, once in I could see I had gotten more than I bargained for! I was just looking for Dark-Media photos, but found another regular LAN I had forgotten about, and photos from even more LANs from the late 90s. I was able to scrape all the photos and now upload them to archive.org where they can hopefully live forever. Issung I love browsing through these, as they bring back so many memories of the computers and dubious fashion choices of my teenage years - I used to combine differently coloured zip-off pants, and even had mohawks, spikes, and god knows what else before I started losing all my hair in my very early 20s. Anyway, the biggest change is the arrival of flat displays signalling the end of the widespread use of CRTs, and the slow disappearance of beige in favour of black. Such a joy to see the trends change in these photos. 
If anyone else is sitting on treasure troves like these, be sure to share them with the world before it's too late.
AI Shell is an interactive shell that provides a chat interface with language models. The shell provides agents that connect to different AI models and other assistance providers. Users can interact with the agents in a conversational manner. Microsoft Learn Basically, what Microsoft means with this is a split-view terminal where one of the two views is a prompt where you can ask questions to an "AI", like OpenAI or whatever. The "AI" features are not actually integrated into your shell, which instead lives in the other view and acts like a completely normal, standard shell. Instead of opening up an "AI" chatbot in a browser window or whatever, you now have it in a split view in your terminal - that's really all there is to it. I'm going to blow your mind here and say that in theory, this could be an actually useful addition to terminals and shells, as a properly vetted and configured "AI" that has been trained on properly obtained source material could indeed be a great help in determining the right terminal commands and options. Tons of people already blindly copy and paste terminal commands from websites even though they really shouldn't anyway, so it's not like this introduces anything new here in terms of dangers. Hell, tutorial writers still add -y to dnf or apt-get commands, so it can really only go up from here.
I didn't have the time to post this one before Christmas, but it's so funny and sad at the same time I don't want to keep this from you. It turns out that in the days leading up to Christmas this year, users of ASUS computers - or with ASUS motherboards, I guess - were greeted with a black bar covering about a third of their screen, decorated with a Christmas wreath. I am making this post for the sake of people like me who will have a black box show up at the bottom of their screen with a Christmas wreath labeled "christmas.exe" in task manager and think it's Windows 10/11 malware. It is not. It is from the ASUS Armoury Crate program and can be safely closed and ignored. It looks super sketchy and will hopefully save you some time diagnosing the problem. Slow-Macaroon9630 on reddit So yes, if you're using an ASUS computer and have their shovelware installed, you may have been greeted by a giant black banner caused by an executable called "christmas.exe", which sounds exactly like something shitty malware would do. The banner would disappear after a while, and the executable would vanish from the list of running processes as well. It turns out there's a similar seasonal greeting called "HappyNewYear.exe", so if you haven't done anything to address the first black bar, you might be getting a second one soon. The fact that shitty OEM shovelware does this kind of garbage on Windows is nothing new - class is not something you can accuse Windows of having - but I was surprised to find out just how deeply embedded this ASUS shovelware program called Armoury Crate really is. It doesn't just come preinstalled on ASUS computers - no, this garbage program actually has roots in your motherboard's firmware. If you merely uninstall Armoury Crate from Windows, it will automatically reinstall itself because your motherboard's firmware tells it to. I'm not joking. 
To prevent Armoury Crate from reinstalling itself, you have to reboot your PC into its UEFI, go to the Advanced Mode, go to Tool > ASUS Armoury Crate, and disable the "Download & Install ARMOURY CRATE app" option. I had no idea Windows hardware makers had sunk to this kind of low, but I'm also not surprised. If Microsoft shoves endless amounts of ads and shovelware on people's computers, why can't OEMs?
COBOL, your mother's and grandmother's programming language, is still in relatively wide use today, and with the initial batches of COBOL programmers retiring and, well, going away, there's a market out there for younger people to learn COBOL and gain some serious job security in stable, but perhaps boring market segments. One of the things you would not associate with COBOL, however, is gaming - but it turns out it can be used for that, too. CobolCraft is a Minecraft server written in, you guessed it, COBOL. It was developed using GnuCOBOL, and only works on Linux - Windows and macOS are not supported, but it can be run using Electron for developers, otherwise known as Docker. It's only compatible with the latest release of Minecraft at the time of CobolCraft's development, version 1.21.4, and a few more complex blocks with states are not yet supported because of how difficult it is to program those using COBOL. CobolCraft's creator, Fabian Meyer, explains why he started this project: Well, there are quite a lot of rumors and stigma surrounding COBOL. This intrigued me to find out more about this language, which is best done with some sort of project, in my opinion. You heard right - I had no prior COBOL experience going into this. Writing a Minecraft server was perhaps not the best idea for a first COBOL project, since COBOL is intended for business applications, not low-level data manipulation (bits and bytes) which the Minecraft protocol needs lots of. However, quitting before having a working prototype was not on the table! A lot of this functionality had to be implemented completely from scratch, but with some clever programming, data encoding and decoding is not just fully working, but also quite performant. Fabian Meyer I don't know much about programming, but I do grasp that this is a pretty crazy thing to do, and quite the achievement to get working this well, too. 
Do note that this isn't a complete implementation of the Minecraft server, with certain more complex blocks not working, and things like a lighting engine not being made yet either. This doesn't detract from the achievement, but it does mean you won't be playing regular Minecraft with this for a while yet - if ever, if this remains a fun hobby project for its creator.
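Meyer's point about the Minecraft protocol needing lots of low-level bit and byte manipulation is easy to illustrate. Most integers in the protocol travel as VarInts: seven data bits per byte, least-significant group first, with the high bit of each byte flagging that more bytes follow. Here's a sketch of that encoding in Python rather than COBOL (the function names are my own):

```python
def encode_varint(value: int) -> bytes:
    """Encode a 32-bit integer as a Minecraft-protocol VarInt:
    7 data bits per byte, least-significant group first, high bit
    set on every byte except the last."""
    value &= 0xFFFFFFFF  # wrap negatives into unsigned 32-bit
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a VarInt back into a signed 32-bit integer."""
    result = 0
    for i, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            break
    if result >= 2**31:  # reinterpret as signed
        result -= 2**32
    return result

# 300 encodes to the two bytes 0xAC 0x02; -1 needs the full five bytes.
```

In a language with native bitwise operators this is a few lines; in COBOL, which targets business data processing, even this small building block has to be reinvented, which is exactly the kind of from-scratch work Meyer describes.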
It's a Christmas miracle! The Moxie, that support robot thing for kids we talked about two weeks ago, seems to be getting a new lease on life. The start-up that makes the Moxie has announced it's going to not only release a version of the server software for self-hosting, but will also publish all of the source code as open source. We understand how unsettling and disappointing it has been to face the possibility of losing the daily comfort and support Moxie provides. Since the onset of these recent challenges, many of you have voiced heartfelt concerns and offered suggestions, and we have taken that feedback seriously. While our cloud services may become unavailable, a group of former technical team members from Embodied is working on a potential solution to allow Moxie to operate locally - without the need for ongoing cloud connectivity. This initiative involves developing a local server application ("OpenMoxie") that you can run on your own computer. Once available, this community-driven option will enable you (or technically inclined individuals) to maintain Moxie's basic functionality, develop new features, and modify her capabilities to better suit your needs - without reliance on Embodied's cloud servers. Paolo Pirjanian Having products like this be dependent on internet connectivity is not great, but as long as Silicon Valley is the way it is, that's not going to change. You can tell from their efforts that the people at Embodied do genuinely care about their product and the people that use it, because they have zero - absolutely zero - financial incentive or legal obligation to do any of this. They could've just walked away like their original communication said they were going to, but instead they listened to their customers and changed their minds. Regardless of my thoughts on requiring internet connectivity for something like this, they at least did the right thing today - and I commend them for that.
The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal to juice as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I'm stunned by the extremes tech companies are going to extract value from customers, but also by the insidious way they've gradually degraded their products. Ed Zitron This is the reality we're all living in, and it's obvious from any casual computer use, or talking to anyone who uses computers, just how absolutely dreadful using the mainstream platforms and services has become. Google Search has become useless, DuckDuckGo is being overrun with "AI"-generated slop, Windows is the operating system equivalent of this, Apple doesn't even know how to make a settings application anymore, iOS is yelling at you about all the Apple subscriptions you don't have yet, Android is adding "AI" to its damn file manager, and the web is unusable without aggressive ad blocking. And all of this is not only eating up our computers' resources, it's also actively accelerating the destruction of our planet, just so lazy people can generate terrible images where people have six fingers. I'm becoming more and more extreme in my complete and utter dismissal of the major tech companies, and I'm putting more and more effort into taking back control over the digital aspects of my life wherever possible. Not using Windows or macOS has improved the user experience of my PCs and laptops by incredible amounts, and moving from Google's Android to GrapheneOS has made my smartphone feel more like it's actually mine than ever before. 
Using technology products and services made by people who actually care and have morals and values that don't revolve around unending greed is having a hugely positive impact on my life, and I'm at the point now where I'd rather not have a smartphone or computer than be forced to use trashware like Windows, macOS, or iOS. The backlash against shitty technology companies and their abusive practices is definitely growing, and while it hasn't exploded into the mainstream just yet, I think we're only a few more shitty iOS updates and useless Android "AI" features away from a more general uprising against the major technology platforms. There's a reason laws like the DMA are so overwhelmingly popular, and I feel like this is only the beginning.
The working principle of APPEND is not complicated. It primarily serves as a bridge between old DOS applications which have no or poor support for directories, and users who really, really want to organize files and programs in multiple directories and possibly across multiple drive letters. Of course the actual APPEND implementation is anything but straightforward. Michal Necasek Another gem of an article by Michal Necasek, detailing a command I've known about almost all my life but never once knew what it was supposed to be for. The gist is that APPEND allows for files to be opened not only in the current working directory, but also in a list of other directories you specify - roughly what PATH does for executables, but for data files. This gives you a rudimentary way of working with directories, even when using programs or commands that have no clue what directories even are. Since DOS 1.x doesn't support directories, but DOS 2.x does, having a tool like this to create a bridge between the pre- and post-directory worlds can be quite useful. I've basically learned more about DOS from Necasek's work in the past few years than I learned about DOS when I was actively using it in the early '90s.
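To make the idea concrete, here's a toy sketch - in Python, obviously not period DOS code - of the search-path behaviour APPEND provides: the current directory is tried first, and only then each directory on the APPEND list. The function name and structure are mine, purely for illustration.

```python
import os

def append_open(filename, append_dirs):
    """Toy model of DOS APPEND: try the current directory first,
    then each directory on the APPEND search list in order."""
    candidates = [filename] + [os.path.join(d, filename) for d in append_dirs]
    for path in candidates:
        if os.path.isfile(path):
            # Found it: the application thinks the file is "here"
            return open(path, "rb")
    raise FileNotFoundError(filename)
```

Under DOS, `APPEND C:\DOS;C:\UTILS` sets up the equivalent search list, and directory-unaware programs then find files in those directories as if they were in the current one.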
With more and more Linux distributions - as well as the kernel itself - dropping support for more exotic, often dead architectures, it's a blessing T2 Linux exists. This unique, source-based Linux distribution focuses on making it as easy as possible to build a Linux installation tailored to your needs, and supports an absolutely insane amount of architectures and platforms. In fact, calling T2 a "distribution" does it a bit of a disservice, since it's much more than that. You may have noticed the banner at the top of OSNews, and if we somehow - unlikely! - manage to reach that goal before the two remaining new-in-box HP c8000 PA-RISC workstations on eBay are sold, my plan is indeed to run HP-UX as my only operating system for a week, because I like inflicting pain on myself. However, I also intend to use that machine to see just how far T2 Linux on PA-RISC can take me, and if it can make a machine like the c8000, which is plenty powerful with its two dual-core 1.0GHz PA-RISC processors, properly useful in 2024. T2 Linux 24.12 has just been released, and it brings with it the latest versions of the Linux kernel, gcc, LLVM/Clang, and so on. With T2 Linux, which describes itself as a "System Development Environment", it's very easy to spin up a heavily customised Linux installation fit for your purpose, targeting anything from absolutely resource-starved embedded systems to big hunks of, I don't know, SPARC or POWER metal. If you've got hardware with a processor in it, you can most likely build T2 for it. The project also provides a large number of pre-built ISOs for a whole slew of supported architectures, sometimes further divided into glibc or musl, so you can quickly get started even without having to build something yourself. It's an utterly unique project that deserves more attention than it's getting, especially since it seems to be one of the last Linux "distributions" that takes supporting weird platforms out-of-the-box seriously.
Think of it as the NetBSD of the Linux world, and I know for a fact that there's a very particular type of person to whom that really appeals.
Remember x86S, Intel's initiative to create a 64-bit-only x86 instruction set, with the goal of removing some of the bloat that the venerable architecture accumulated over the decades? Well, this initiative is now dead, and more or less replaced with the x86 Ecosystem Advisory Group, a collection of companies with a stake in keeping x86 going. Most notably, this includes Intel and AMD, but also other tech giants like Google. In the first sign of changes to come after the formation of a new industry group, Intel has confirmed to Tom's Hardware that it is no longer working on the x86S specification. The decision comes after Intel announced the formation of the x86 Ecosystem Advisory Group, which brings together Intel, AMD, Google, and numerous other industry stalwarts to define the future of the x86 instruction set. Intel originally announced its intentions to de-bloat the x86 instruction set by developing a simplified 64-bit mode-only x86S version, publishing a draft specification in May 2023, and then updating it to a 1.2 revision in June of this year. Now, the company says it has officially ended that initiative. Paul Alcorn This seems like an acknowledgement of the reality that Intel is no longer in the position it once was when it comes to steering the direction of x86. It's AMD that's doing most of the heavy-lifting for the architecture at the moment, and it's been doing that for a while now, with little sign that's going to change. I doubt Intel had enough clout left to push something as relatively drastic as x86S, and now has to rely on building consensus with other companies invested in x86. It may seem like a small thing, and I doubt many larger tech outlets will care, but this story is definitely the biggest sign yet that Intel is in a lot more trouble than people already seem to think based on Intel's products and market performance.
What we have here is a full admission by Intel that they no longer control the direction of x86, and have to rely on the rest of the industry to help them. That's absolutely wild.
NetBSD 10.1 has been released. As the version number indicates, this isn't supposed to be a major, groundbreaking release, but it still contains a ton of changes, fixes, and improvements. It's got the usual set of new and improved drivers, kernel improvements - like the ability to hotplug spares and components in a RAID - and improvements for various specific architectures, and much more. If you're using NetBSD you already know how to upgrade, and if you're not yet using NetBSD, here's the download page for the various supported architectures. There are a lot of them.
What's the European Commission to do when one of the largest corporations in the world has not only been breaking its laws continually, but also absolutely refuses to comply, uses poison pills in its malicious compliance, and badmouths you in the press through both official - and unofficial - employees? Well, you start telling that corporation exactly what it needs to do to comply, down to the most minute implementation details, and in the process take away any form of wiggle room. Steven Troughton-Smith, an absolute wizard when it comes to the inner workings of Apple's various platforms and all-round awesome person, dove into the European Commission's proposed next steps when it comes to dealing with Apple's refusal to comply with EU law - the Digital Markets Act, in particular - and it's crystal-clear that the EC is taking absolutely no prisoners. They're not only telling Apple exactly what kind of interoperability measures it must take, down to the API level, but they're also explicitly prohibiting Apple from playing games through complex contracts and nebulous terms to try and make interoperability a massive burden. As an example of just how detailed the EC is getting with Apple, here's what the company needs to do to make AirDrop interoperable: Apple shall provide a protocol specification that gives third parties all information required to integrate, access, and control the AirDrop protocol within an application or service (including as part of the operating system) running on a third-party connected physical device in order to allow these applications and services to send files to, and receive files from, an iOS device.
European Commission In addition, Apple must make any new features or changes to AirDrop available to third parties at the same time as it releases them: For future functionalities of or updates to the AirDrop feature, Apple shall make them available to third parties no later than at the time they are made available to any Apple connected physical device. European Commission These specific quotes only cover AirDrop, but similar demands are made about things like AirPlay, the easy pairing process currently reserved for Apple's own accessories, and so on. I highly suggest reading the source document, or at the very least the excellent summary thread by Steven, to get an even better idea of what the EC is demanding here. The changes must be made in the next major version of iOS, or at the very latest before the end of 2025. The EC really goes into excruciating detail about how Apple is supposed to implement these interoperability features, and leaves very little to no wiggle room for Apple shenanigans. The EC is also clearly fed up with Apple's malicious compliance and other tactics to violate the spirit of the DMA: Apple shall not impose any restrictions on the type or use case of the software application and connected physical device that can access or make use of the features listed in this Document. Apple shall not undermine effective interoperability with the 11 features set out in this Document by behaviour of a technical nature. In particular, Apple shall actively take all the necessary actions to allow effective interoperability with these features. Apple shall not impose any contractual or commercial restrictions that would be opaque, unfair, unreasonable, or discriminatory towards third parties or otherwise defeat the purpose of enabling effective interoperability. In particular, Apple shall not restrict business users, directly or indirectly, to make use of any interoperability solution in their existing apps via an automatic update.
European Commission What I find most interesting about all of this is that it could have been so easily avoided by Apple. Had Apple approached the EU and the DMA with the same kind of respect, grace, and love Apple and Tim Cook clearly reserve for totalitarian dictatorships like China, Apple could've enabled interoperability in such a way that it would still align with most of Apple's interests. They would've avoided the endless stream of negative press this fruitless "fight" with the EU is generating, and it would've barely impacted Apple's bottom line. Put it on one of those Apple microsites that capture your scrolling, boast about how amazing Apple is and how much they love interoperability, and it most likely would've been a massive PR win. Instead, under the mistaken impression that this is a business negotiation, Apple tried to cry, whine, throw temper tantrums, and just generally act like horrible spoiled brats just because someone far, far more powerful than they are told them "no" for once. Now they've effectively been placed under guardianship, and have to do exactly as the European Commission tells them to, down to the API level, without any freedom to make their own choices. The good thing is that the EC's journey to make iOS a better and more capable operating system continues. We all benefit. Well, us EU citizens, anyway.
We're grateful for our weekly sponsor, OpenSource Science B.V., an educational institution focused on Open Source software. OS-SCi is training the next generation of FOSS engineers, by using Open Source technologies and philosophy in a project learning environment. One final reminder: OS-SCi is offering OSNews readers a free / gratis online masterclass by Prof. Ir. Erik Mols on how the proprietary ecosystem is killing itself. This is a live event, on January 9, 2025 at 17:00 CET. Sign up here.
The Redox team has received a grant from NLnet to develop Redox OS Unix-style Signals, moving the bulk of signal management to userspace, and making signals more consistent with the POSIX concepts of signaling for processes and threads. It also includes Process Lifecycle and Process Management aspects. As a part of that project, we are developing tests to verify that the new functionality is in reasonable compliance with the POSIX.1-2024 standard. This report describes the state of POSIX conformance testing, specifically in the context of Signals. Ron Williams This is the kind of dry, but important matter a select few of you will fawn over. Consider it my Christmas present for you. There's also a shorter update on the dynamic linker in Redox, which also goes into some considerable detail about how it works, and what progress has been made.
What if you have an Android phone, but consider the Apple Watch superior to other smartwatches? Well, you could switch to iOS, or, you know, you could hack your way into making an Apple Watch work with Android, like Abishek Muthian did. So I decided to make Apple Watch work with my Android phone using open-source applications, interoperable protocols and 3rd party services. If you just want to use my code and techniques and not read my commentary on it then feel free to checkout my GitHub for sources. Abishek Muthian Getting notifications to work, so that notifications from the Android phone would show up on the Apple Watch, was the hardest part. Muthian had to write a Python script to read the notifications on the Android device using Termux, and then use Pushover to send them to the Apple Watch. For things like contacts and calendar, he relied on *DAV, which isn't exactly difficult to set up, so pretty much anyone who's reading this can do that. Sadly, initial setup of the watch did require the use of an iPhone, using the same SIM as is in the Android phone. This way, it's possible to set up mobile data as well as calling, and with the SIM back in the Android phone, a call will show up on both the Apple Watch and the Android device. Of course, this initial setup makes the process a bit more cumbersome than just buying a used Apple Watch off eBay or whatever, but I'm honestly surprised everything's working as well as it does. This goes to show that the Apple Watch is not nearly as "deeply integrated" with the iPhone as Apple so loves to claim, and making the Apple Watch work with Android in a more official manner certainly doesn't look to be as impossible as Apple makes it out to be when dealing with antitrust regulators. Of course, any official support would be much more involved, especially in the testing department, but it would be absolute peanuts, financially, for a company with Apple's disgusting level of wealth.
Anyway, if you want to set up an Apple Watch with Android, Muthian has put the code on GitHub.
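For a sense of what such a notification bridge looks like, here's a minimal sketch along the lines Muthian describes - not his actual code. It maps a notification, as Termux:API's `termux-notification-list` emits them (assuming its `title`/`content` JSON fields), onto the fields of Pushover's messages API; the function names are mine.

```python
import json
import urllib.parse
import urllib.request

# Pushover's real messages endpoint; token/user come from your Pushover account
PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_payload(notification, token, user):
    """Turn one Termux notification entry into a Pushover message dict."""
    return {
        "token": token,    # Pushover application token
        "user": user,      # Pushover user key
        "title": notification.get("title", ""),
        "message": notification.get("content", ""),
    }

def forward(notification, token, user):
    """POST the notification to Pushover, which relays it to the watch."""
    data = urllib.parse.urlencode(build_payload(notification, token, user)).encode()
    with urllib.request.urlopen(PUSHOVER_URL, data=data) as resp:
        return json.load(resp)
```

A loop that polls `termux-notification-list`, deduplicates entries, and calls `forward()` on new ones is essentially all the glue the phone side needs.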
Most of us are aware that IBM's OS/2 has excellent compatibility with DOS and Windows 3.x programs, to the point where OS/2 just ships with an entire installation of Windows 3.x built-in that you can run multiple instances of. In fact, to this day, ArcaOS, the current incarnation of the maintained and slightly modernised OS/2 codebase, still comes with an entire copy of Windows 3.x, making ArcaOS one of the very best ways to run DOS and Windows 3.x programs on a modern machine, without resorting to VMware or VirtualBox. Peter Hofmann took a look at one of the earlier versions of OS/2 - version 2.1 from 1993 - to see how its DOS compatibility actually works, or more specifically, the feature "DOS from drive A:". You can insert a bootable DOS floppy and then run that DOS in a new window. Since this is called "DOS from drive A:", surely this is something DOS-specific, right? Maybe only supports MS-DOS or even only PC DOS? Far from it, apparently. Peter Hofmann Hofmann wrote a little test program using nothing but BIOS system calls, meaning it doesn't use any DOS system calls. This "real mode BIOS program" can run from the boot sector, if you wanted to, so after combining his test program with a floppy disk boot record, you end up with a bootable floppy that runs the test program, for instance in QEMU. After a bit of work, the test program on the bootable floppy will work just fine using OS/2's "DOS from drive A:" feature, even though it shouldn't. What this seems to imply is that this functionality in OS/2 2.1 looks a lot like a hypervisor, or as Hofmann puts it, "basically a builtin QEMU that anybody with a 386 could use". That's pretty advanced for the time, and raises a whole bunch of questions about just how much you can do with this.
Fedora is proposing to stop building their Atomic desktop versions for PPC64LE. PowerPC 64 LE basically comes down to IBM's POWER architecture, and as far as desktop use goes, that exclusively means the POWER9 machines from Raptor Computing Systems. I reviewed their small single-socket Blackbird machine in 2021, and I also have their dual-socket Talos II workstation. I can tell you from experience that nobody who owns one of these is opting for an immutable Fedora variant, and on top of that, these machines are getting long in the tooth. Raptor passed on POWER10 because it required proprietary firmware, so we've been without new machines for years now. As such, it makes sense for Fedora to stop building Atomic desktops for this architecture. We will stop building the Fedora Atomic Desktops for the PowerPC 64 LE architecture. According to the "count me" statistics, we don't have any Atomic Desktops users on PPC64LE. Users of Atomic Desktops on PPC64LE will have to either switch back to a Fedora package mode installation or build their own images using Bootable Containers which are available for PPC64LE. Timothee Ravier I've never written much about the Talos II, surmising that most of my Blackbird review applies to the Talos II, as well. If there's interest, I can check to see what the current state of Fedora and/or other distributions on POWER9 is, and write a short review about the experience. I honestly don't know if there's much interest at this point in POWER9, but if there is, here's your chance to get your questions answered.
Microsoft's Recall feature recently made its way back to Windows Insiders after having been pulled from test builds back in June, due to security and privacy concerns. The new version of Recall encrypts the screens it captures and, by default, it has a "Filter sensitive information" setting enabled, which is supposed to prevent it from recording any app or website that is showing credit card numbers, social security numbers, or other important financial / personal info. In my tests, however, this filter only worked in some situations (on two e-commerce sites), leaving a gaping hole in the protection it promises. Avram Piltch at Tom's Hardware Recall might be one of the biggest own goals I have seen in recent technology history. In fact, it's more of a series of own goals that just keep on coming, and I honestly have no idea why Microsoft keeps making them, other than the fact that they're so high on their own "AI" supply that they just lost all touch with reality at this point. There's some serious Longhorn-esque tunnel vision here, a project during which the company also kind of forgot the outside world existed beyond the walls of Microsoft's Redmond headquarters. It's clear by now that just like many other tech companies, Microsoft is so utterly convinced it needs to shove "AI" into every corner of its products, that it no longer seems to be asking the most important question during product development: do people actually want this? The response to Windows Recall has been particularly negative, yet Microsoft keeps pushing and pushing it, making all the mistakes along the way everybody has warned them about. It's astonishing just how dedicated they are to a feature nobody seems to want, and everybody seems to warn them about. It's like we're all Kassandra. The issue in question here is exactly as dumb as you expect it to be.
The "Filter sensitive information" setting is so absurdly basic and dumb it basically only seems to work on shopping sites, not anywhere else where credit card or other sensitive information might be shown. This shortcoming is obvious to anyone who thinks about what Recall does for more than one nanosecond, but Microsoft clearly didn't take a few moments to think about this, because their response is to ask you to let them know through the Feedback Hub any time Recall fails to detect sensitive information. They're basically asking you, the consumer, to be the filter. Unpaid, of course. After the damage has already been done. Wild. If you can ditch Windows, you should. Windows is not a place of honour.
As Michel Lind mentioned back in August, we wanted to form a Special Interest Group to further the development and adoption of Btrfs in Fedora. As of yesterday, the SIG is now formed. Neal Gompa Since I've been using Fedora on all my machines for a while now, I've also been using Btrfs as my one and only file system for just as much time, without ever experiencing any issues. In fact, I recently ordered four used 4TB enterprise hard drives (used, yes, but zero SMART issues) to set up a storage pool where I can download my favourite YouTube playlists so I don't have to rely on internet connectivity and YouTube not being shit. I combined the four drives into a single 16TB Btrfs volume, and it's working flawlessly. Of course, not having any redundancy is a terrible idea, but I didn't care much since it's just downloaded YouTube videos. However, it's all working so flawlessly, and the four drives were so cheap, I'm going to order another four drives and turn the whole thing into a 16TB Btrfs volume using one of the Btrfs RAID profiles for proper redundancy, even if it "costs" me half of the 32TB of total storage. This way, I can also use it as an additional backup for more sensitive data, which is never a bad thing. The one big downside here is that all of this has to be set up and configured using the command line. While that makes sense in a server environment and I had no issues doing so, I think a product that calls itself Fedora Workstation (or, in my case, Fedora KDE, but the point stands) should have proper graphical tools for managing the file system it uses. Fedora should come with a graphical utility to set up, manage, and maintain Btrfs volumes, so you don't have to memorise a bunch of arcane commands. I know a lot of people get very upset when you even suggest something like this, but that's just elitist nonsense.
Btrfs has various incredibly useful features that should be exposed to users of all kinds, not just sysadmins and weird nerds - and graphical tools are a great way to do this. I don't know exactly what the long-term plans of the new Btrfs SIG are going to be, but I think making the useful features of Btrfs more accessible should definitely be on the list. You shouldn't need to be a CLI expert to set up resilient, redundant local storage on your machine, especially now that the interest in digital self-sufficiency is increasing.
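For the curious, the arcane commands in question are short, if not exactly obvious. Here's a sketch of the two setups described above, using standard btrfs-progs commands with placeholder device names and mount point - adjust for your own hardware before running anything:

```shell
# Pool four drives into one big volume with no data redundancy
# ("single" data profile; metadata is still mirrored across drives)
mkfs.btrfs -L media -d single -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mount /dev/sdb /mnt/media

# Later, with four more drives attached: grow the pool, then convert
# the data to RAID1 in place, trading half the raw capacity for redundancy
btrfs device add /dev/sdf /dev/sdg /dev/sdh /dev/sdi /mnt/media
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/media

# Inspect how space is allocated across profiles
btrfs filesystem usage /mnt/media
```

The nice part is that the conversion happens online, with the data in place - which is exactly the kind of capability a graphical tool could expose with a single checkbox.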
EMWM is a fork of the Motif Window Manager with fixes and enhancements. The idea behind this is to provide compatibility with current xorg extensions and applications, without changing the way the window manager looks and behaves. This includes support for multi-monitor setups through Xinerama/Xrandr, UTF-8 support with Xft fonts, and overall better compatibility with software that requires Extended Window Manager Hints. Additionally a couple of goodies are available in the separate utilities package: XmToolbox, a toolchest-like application launcher, which reads its multi-level menu structure from a simple plain-text file ~/.toolboxrc, and XmSm, a simple session manager that provides session configuration, locking and shutdown/suspend options. EMWM homepage I had never heard of EMWM, but I immediately like it. This same developer, Alexander Pampuchin, also develops XFile, a file manager for X11 which presents the file system as it actually is, instead of using a bunch of "imaginary" locations to hide the truth, if you will. On top of that, they also develop XImaging, a comprehensive image viewer for X11. All of these use the Motif widget toolkit, focus on plain X11, and run on most Linux distributions and BSDs. They need to be compiled by the user, most likely. I am convinced that there is a small but sustainable audience for a modern, up-to-date Linux distribution (although a BSD would work just as well) that, instead of offering GNOME, KDE, Xfce, or whatever, focuses on delivering a traditional, yet modernised and maintained, desktop environment and applications using not much more than X11 and Motif, eschewing more complex stuff like GTK, Qt, systemd, Wayland, and so on. I would use the hell out of a system that gives me a version of the Motif-based desktops like CDE from the '90s, but with some modern amenities, current hardware support, support for high-resolution displays, and so on.
You can certainly grab bits and bobs left and right from the web and build something like this from scratch, but not everyone has the skills and time to do so, yet I think there are enough people out there craving something like this. There's tons of maintained X11/Motif software out there - it's just all spread out, disorganised, and difficult to assemble, because it almost always means compiling it all from scratch, and most people simply don't have the time and energy for that. Package this up on a solid Debian, Fedora, or FreeBSD base, and I think you've got quite some users lining up.
After two years of intense development, the third major Linux desktop environment has released a new version: Xfce 4.20 is here. The major focus of this release cycle was getting Xfce ready for Wayland, and they've achieved quite a bit of that goal, but support for it is still experimental. Thanks to Brian and Gael, almost all Xfce components are able to run on Wayland windowing, while still keeping support for X11 windowing. This major effort was achieved by abstracting away any X11/Wayland windowing specific calls and making use of Wayland/Wlroots protocols. A whole new Xfce library, "libxfce4windowing", was introduced during that process. XWayland will not be required to run any of the ported Xfce components. Xfce development team A major gap in Xfce's Wayland support is the fact that Xfwm4 has not been ported to Wayland yet, so the team suggests using Labwc or Wayfire instead if you want to dive into using Xfce on Wayland. While there are plans to port Xfwm4 over to Wayland, this requires a major restructuring and they're not going to set any timelines or expectations for when this will be completed. Regardless, this is an excellent achievement and solid progress for Xfce on Wayland, which is pretty much a requirement for Xfce (and other desktop environments) to remain relevant going forward. Of course, while Wayland is a major focus this release, there's a lot more here, too - and that's not doing the Xfce developers justice. Xfce 4.20 comes packed with so many new features, enhancements, and bug fixes across the board that I have no idea where to start. I like the large number of changes to Thunar, like the ability to use symbolic icons in the sidebar, optimising it for small window sizes, automatically opening folders when dragging and dropping, and so much more. They've also done another pass to update any remaining icons not working well on HiDPI displays, removing any instances where you'd encounter fuzzy icons.
I can't wait to give Xfce 4.20 a go once it lands in Fedora Xfce.
Haiku is already awash with browsers to choose from, with Falkon (yes, the same one) being the primary choice for most Haiku users, since it offers the best overall experience. We've got a new addition to the team, however, as Firefox - in the form of Iceweasel, because trademark stuff and so on - has been ported to Haiku. Jules Enriquez provides some more background in a post on Mastodon: An experimental port of Firefox Iceweasel is now available on HaikuDepot! So far, most sites are working fine. YouTube video playback is fine and Discord just works, however the web browser does occasionally take itself down. Still rather usable, though! If @ActionRetro thought that Haiku was ready for daily driving with Falkon (see first screenshot), then rebranded Firefox surely has to make it even more viable by those standards! It should be noted though that just like with Falkon, some crash dialogs can be ignored (drag them to another workspace) and the web browser can still be used. Jules Enriquez It's not actually called Firefox at the moment because of the various trademark restrictions Mozilla places on the Firefox branding, which I think is fair just to make sure not every half-assed barely-working port can slap the Firefox logo and name on itself and call it a day. As noted, this port is experimental and needs more work to bring it up to snuff and eligible for using the name Firefox, but this is still an awesome achievement and a great addition to the pool of applications that are already making Haiku quite daily-drivable for some people. Speaking of which, are there any people in our audience who use Haiku as their main operating system? There's a lot of chatter out there about just how realistic of an option this has become, but I'm curious if any of you have made the jump and are leading the way for the rest of us. 
Action Retro's videos about Haiku have done a lot to spread the word, and I'm noticing more and more people from far outside the usual operating system circles talking about Haiku. Which is great, and hopefully leads to more people also picking up Haiku development, because I'm sure the team can always use fresh blood.
It was only a matter of time before Google would jump into the virtual/augmented reality fray once again with Android, after their several previous attempts failed to catch on. This time, it's called Android XR, and it's aimed at both the big clunky headsets like Apple's Vision Pro as well as basic glasses that overlay information onto the world. Google has been working on this with Samsung, apparently, and of course, this new Android variant is drenched in "AI" slop. We're working to create a vibrant ecosystem of developers and device makers for Android XR, building on the foundation that brought Android to billions. Today's release is a preview for developers, and by supporting tools like ARCore, Android Studio, Jetpack Compose, Unity, and OpenXR from the beginning, developers can easily start building apps and games for upcoming Android XR devices. For Qualcomm partners like Lynx, Sony and XREAL, we are opening a path for the development of a wide array of Android XR devices to meet the diverse needs of people and businesses. And, we are continuing to collaborate with Magic Leap on XR technology and future products with AR and AI. Shahram Izadi at Google's blog What they've shown of Android XR so far looks a lot like the kind of things Facebook and Apple are doing with their headsets, as far as user interface and interactions go. As for the developer story, Google is making it possible for regular Android applications to run on XR headsets, and for proper XR applications you'll need to use Jetpack Compose and various new additions to it, and the 3D engine Google opted for is Unity, with whom they've been collaborating on this. For now, it's just an announcement of the new platform and the availability of the development tools, but for actual devices that ship with Android XR you'll have to wait until next year.
Other than the potential for exercise, I'm personally not that interested in VR/AR, and I doubt Google's Android-based me-too will change much in that regard.
I've been dropping a lot of hints about my journey to rid myself of Google's Android on my Pixel 8 Pro lately, a quest which grew in scope until it covered everything from moving to GrapheneOS to dropping Gmail, from moving to open source "stock" Android application replacements to reconsidering my use of Google Photos, from dropping my dependency on Google Keep to setting up Home Assistant, and much, much more. You get the idea: this has turned into a very complex process where I evaluated my every remaining use of big tech, replacing them with alternatives where possible, leaving only a few cases where I'm sticking with what I was using. And yes, this whole process will turn into an article detailing my quest, because I think recent events have made removing big tech from your life a lot more important than it already was. Anyway, one of the few things I couldn't find an alternative for was Google Pay's tap-to-pay functionality in stores. I don't like using cash - I haven't held paper money in my hands in like 15 years - and I'd rather keep my bank cards, credit card, and other important documents at home instead of carrying them around and losing them (or worse). As such, I had completely embraced the tap-to-pay lifestyle, with my phone and my Pixel Watch II. Sadly, Google Pay tap-to-pay NFC payments are simply not possible on GrapheneOS (or other de-Googled ROMs, for that matter), because of Google's stringent certification requirements. Some banks do offer NFC payments through their own applications, but mine does not. I thought this is where the story ended, but as it turns out, there is actually a way to get tap-to-pay NFC payments in stores back: Garmin Pay. Garmin offers this functionality on a number of its watches, and it pretty much works wherever Google Pay or Apple Pay is accepted, too. And best of all: it works just fine on de-Googled Android ROMs.
People have been asking me to check this out and make it part of my quest, and ever the people-pleaser, I would love to oblige. Sadly, it does require owning a supported Garmin watch, which I don't have. To gauge interest in me testing this, I've set up a Ko-Fi goal of 400 you can contribute to. Obviously, this is by no means a must, but if you're interested in finding out if you can ditch big tech, but keep enjoying the convenience of tap-to-pay NFC payments - this is your chance.
With its latest release, QEMU added the Venus patches, so that virtio-gpu now supports Venus encapsulation for Vulkan. This is one more piece of the puzzle towards full Vulkan support. An outdated blog post on Collabora described in 2021 how to enable 3D acceleration of Vulkan applications in QEMU through the Venus experimental Vulkan driver for VirtIO-GPU with a local development environment. Following up on the outdated write-up, this is how it's done today. Pepper Gray A major milestone, and for the adventurous, you can get it working today. Give it a few more months, and many of the versions required will be part of your distribution's package repositories, making the process a bit easier. On a related note, Linux kernel developers are considering removing 32-bit KVM host support for all architectures that still offer it - PowerPC, MIPS, RISC-V, and x86 - because nobody is using this functionality. This support was dropped from 32-bit Arm a few years ago, and the remaining architectures mentioned above have orders of magnitude fewer users still. If nobody is using this functionality, it really makes no sense to keep it around, and as such, the calls to remove it. In other words, if your custom workflow of opening your garage door through your fridge light's flicker frequency and the alignment of the planets and custom scripts on a Raspberry Pi 2 requires this support, let the kernel developers know, or forever hold your peace.
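For a sense of what's involved, here's a minimal sketch of a Venus-enabled QEMU invocation, based on my reading of the QEMU virtio-gpu documentation - the disk image path and memory sizes are placeholders, and you'll need a recent QEMU built with Venus support, a working host Vulkan driver, and a guest with Mesa's Venus driver:

```shell
# Sketch, not a turnkey recipe: boot a guest with Venus-encapsulated Vulkan.
# venus=true on virtio-gpu-gl requires blob=true and a host memory region.
qemu-system-x86_64 \
    -M q35 -enable-kvm -m 4G \
    -device virtio-gpu-gl,hostmem=4G,blob=true,venus=true \
    -display gtk,gl=on \
    -drive file=guest.qcow2,format=qcow2   # placeholder disk image
```

Inside the guest, Vulkan applications then go through Mesa's Venus driver, which serialises Vulkan calls over virtio-gpu to the host GPU.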
CPUs start executing instructions by fetching those instruction bytes from memory and decoding them into internal operations (micro-ops). Getting data from memory and operating on it consumes power and incurs latency. Micro-op caching is a popular technique to improve on both fronts, and involves caching micro-ops that correspond to frequently executed instructions. AMD's recent CPUs have particularly large micro-op caches, or op caches for short. Zen 4's op cache can hold 6.75K micro-ops, and has the highest capacity of any op cache across the past few generations of CPUs. This huge op cache enjoys high hitrates, and gives the feeling AMD is leaning harder on micro-op caching than Intel or Arm. That begs the question of how the core would handle if its big, beautiful op cache stepped out for lunch. Chester Lam at Chips and Cheese The results of turning off the op cache were far less dramatic than one would expect, and this mostly comes down to the processor having to wait on other bottlenecks anyway, like memory, and to many tasks consisting of a mix of operation types that don't all benefit from the op cache. While the op cache definitely contributes to making Zen 4 cores faster overall, even without it, this is still an amazing core that outperforms its Intel competition. As a sidenote, this is such a fun and weird thing to do and benchmark. It doesn't serve much of a purpose, and the information gained isn't very practical, but turning off specific parts of a processor and observing the consequences does create some insight into exactly how a modern processor works. There are so many different elements that make up a modern processor now, and just gigahertz or even the number of cores barely tells even half the story. Anyway, we need more of these weird benchmarks.
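To make the basic idea concrete, here's a toy model of my own - not AMD's actual design, and the cycle costs are made-up illustrative numbers, not real Zen 4 latencies - showing why a hot loop barely touches the decoders once its micro-ops are cached:

```python
from collections import OrderedDict

# Assumed, illustrative costs: decoding is several times more expensive
# than fetching already-decoded micro-ops from the op cache.
DECODE_COST = 4   # cycles to decode an instruction into micro-ops
HIT_COST = 1      # cycles to fetch cached micro-ops

class OpCache:
    """Toy LRU cache mapping instruction addresses to decoded micro-ops."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # address -> decoded micro-ops
        self.hits = self.misses = 0

    def fetch(self, addr):
        if addr in self.entries:
            self.entries.move_to_end(addr)      # LRU: mark as recently used
            self.hits += 1
            return HIT_COST
        self.misses += 1
        self.entries[addr] = f"uops@{addr:#x}"  # decode once, cache the result
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict least recently used
        return DECODE_COST

# A hot loop re-executes the same few instructions, so after the first
# iteration nearly every fetch hits the op cache and skips the decoders.
cache = OpCache(capacity=64)
trace = [0x10, 0x14, 0x18, 0x1C] * 1000         # 4-instruction loop body
cycles = sum(cache.fetch(addr) for addr in trace)
print(cache.hits, cache.misses, cycles)          # → 3996 4 4012
```

With the cache disabled (every fetch paying DECODE_COST), the same trace would cost 16000 cycles instead of 4012 - which is exactly the kind of gap the Chips and Cheese experiment probes on real hardware, where other bottlenecks blunt the difference.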
When we announced the security flaw CVE-2024-11053 on December 11, 2024, together with the release of curl 8.11.1, we fixed a security bug that was introduced in a curl release 9039 days ago. That is close to twenty-five years. The previous record holder was CVE-2022-35252 at 8729 days. Daniel Stenberg It's really quite fascinating to see details like this about such a widespread and widely used tool like curl. The bug in question was a logic error, which led Stenberg to detail how a modern memory-safe language like Rust, used instead of C, would not have prevented this particular issue. Still, about 40% of all security issues in curl stem from not using a memory-safe language, or about 50% of all high/critical severity ones. I understand that jumping on every bandwagon and rewriting everything in a memory-safe language is a lot harder than it sounds, but I also feel like it's getting harder and harder to keep justifying using old languages like C. I really don't know why people get so incredibly upset at the cold, hard data about this. Anyway, the issue that sparked this post is fixed in curl 8.11.1.
Every now and then I load OpenPA and browse around. Its creator and maintainer, Paul Weissmann, has been very active lately updating the site with new articles, even more information, and tons of other things, and it's usually a joy to stumble upon something I haven't read yet, or just didn't know anything about. This time it's something called HP-RT, a real-time operating system developed and sold by HP for a number of its PA-RISC workstations back in the '90s. HP-RT is derived from the real-time operating system LynxOS and was built as a real-time operating system from scratch, with a native POSIX API and Unix features like protected address spaces, multiprocessing, and a standard GUI. Real-time scheduling is part of the kernel, with response times under 200 µs, later improved to sub-100 µs, for uses such as a hospital system tied to a heart monitor, or a missile tracking system. For programming, HP-RT supported dynamic shared libraries, ANSI C, Softbench (5.2), FORTRAN, ADA, C++ and PA-RISC assembly. From HP-RT 3.0, a GUI-based debugging environment (DDErt) and an Event Logging library (ELOG) were included. POSIX 1003.1, 1003.1b and POSIX 1003.4a draft 4 were supported. On the software side, HP-RT supported a fast file system, X and Motif clients, X11 SERVERrt, STREAMSrt (SVR 3.2), NFS, and others. Paul Weissmann at OpenPA I had no idea HP-RT existed, and looking at the feature list, it seems like it was actually a pretty impressive operating system and wider ecosystem back in the '90s when it was current. HP released several versions of its real-time operating system, with 1997's 3.0 and 3.01 being the final versions. Support for it ended in the early 2000s, alongside the end of the line for PA-RISC. I'd absolutely love to try it out today, but sadly, my PA-RISC workstation - an HP Visualize C3750 - is way too "new" to be supported by HP-RT, and in the wrong product category at that.
HP-RT required both a regular HP 9000 Series 700 HP-UX workstation, as well as one of HP's VME machines with a single-board module carrying the specific "rt" affix in the model number. On top of that, you obviously needed the actual HP-RT operating system, which was part of the HP-RT Development Environment. The process entails using the HP-UX machine to compile HP-RT, which was then downloaded to the VME machine. The odds of not only finding all the right parts to complete the setup, but also of getting it all working with what is undoubtedly going to be spotty documentation and next to nobody to talk to about how to do it, are very, very slim. I'm open to suggestions, of course, but considering the probable crazy rarity of the specific hardware, the price-gouging going on in the retrocomputing world, the difficulty of shipping to the Swedish Arctic, and the knowledge required, I don't think I'll be the one to get this to work and show it off. But man do I want to.
Some news is both sad and dystopian at the same time, and this is one of those cases. Moxie, a start-up selling $800 emotional support robots intended to help children, is shutting down operations since it can't find enough money, and since their robots require constant connectivity to servers to operate, all of the children's robots will cease functioning within days. They're not offering refunds, but they will send out a letter to help parents tell their children "in an age-appropriate way" that their lovable robot is going to die. If you have kids yourself, you know how easily they can sometimes get attached to the weirdest things, from fluffy stuffed animals designed to be cute, to random inanimate objects us adults would never consider to be even remotely interesting. I can definitely see how my own kids would be devastated if one of their favourite "emotional" toys were to suddenly stop working or disappear, and we don't even have anything that pretends to have a personality or that actively interacts with our kids like this robot thing does. We can talk about how it's insane that no refunds will be given, or how a company can just remotely kill a product like this without any repercussions, but most of all I'm just sad for the kids who use one and are truly attached to it, who now have to deal with their little friend going away. That's just heartbreaking, and surely a sign of things to come as more and more companies start stuffing "AI" into their toys. The only thing I can say is that we as parents should think long and hard about what kind of toys we give our children, and that we should maybe try to avoid anything tied to a cloud service that can go away at any time.
Although there's little evidence of them today, Apple made a long succession of Mac servers and servers for Macs from 1988 to 2014, and only discontinued support for the last release of macOS Server in April 2022. Its first entry into the market was a special version of the Macintosh II running Apple's own port of Unix way back in 1988. Howard Oakley These days, you can nab Xserves for pretty cheap on eBay, but since Apple doesn't properly support them anymore, they're mostly a curiosity for people who are into retro homelab stuff and the odd Apple enthusiast who doesn't know what to do with it. It always felt like Apple's head was never really in the game when it came to its servers, despite the fact that both its hardware and software were quite interesting and user friendly compared to the competition. Regardless, if my wife and I ever manage to buy our own house, the basement's definitely getting a nice homelab rack with old - mostly non-x86 Sun and HP - servers, and I think an Xserve would be a fun addition, too. Living in the Arctic means any heat they generate is useful for like 9 or so months of the year to help warm the house, and since our electricity is generated from hydropower they wouldn't be generating a massive excess of pollution, either. I have to figure out what to do with the excess heat during the few months of the year where it's warm outside, though.
Today I'm delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements. The consensus seems to be that this is a major achievement and milestone in quantum computing, and that it's come faster than everyone expected. This topic is obviously far more complicated than most people can handle, so we have to rely on the verdicts and opinions of independent experts to gain some sense of just how significant an announcement this really is. The paper is published in Nature for those few of us possessing the right amount of skill and knowledge to properly digest this information.
We're grateful for our weekly sponsor, OpenSource Science B.V., an educational institution focused on Open Source software. OS-SCi is training the next generation of FOSS engineers, by using Open Source technologies and philosophy in a project learning environment. OS-SCi is offering OSNews readers a free / gratis online masterclass by Prof. Ir. Erik Mols on how the proprietary ecosystem is killing itself. This is a live event, on January 9, 2025 at 17:00 CET. Sign up here.
It's no secret that I am very worried about the future of Firefox, and the future of Firefox on Linux in particular. I'm not going to rehash these worries here, but suffice to say that with Mozilla increasingly focusing on advertising, Firefox's negligible market share, and the increasing likelihood that the Google Search deal, which accounts for 85% of Mozilla's revenue, will come to an end, I have little faith in Firefox for Linux remaining a priority for Mozilla. On top of that, as more and more advertising nonsense, in collaboration with Facebook, makes its way into Firefox, we may soon arrive at a point where Firefox can't be shipped by Linux distributions at all anymore, due to licensing and/or ideological reasons. I've been warning the Linux community, and distributions in particular, for years now that they're going to need an alternative default browser once the inevitable day Firefox truly shits the bed is upon us. Since I'm in the middle of removing the last few remaining bits of big tech from my life, I figured I might as well put my money where my mouth is and go on a small side quest to change my browser, too. Since I use Fedora KDE on all my machines and prefer to have as many native applications as possible, I made the switch to KDE's own browser: Falkon. What is Falkon? Falkon started out as an independent project called QupZilla, but in 2017 it joined the KDE project and was renamed to Falkon. It uses QtWebEngine as its engine, which is Qt's version of the Chromium engine, with all the services that talk to Google stripped out. This effectively makes it similar to using de-Googled Chromium. The downside is that QtWebEngine does lag behind the current Chromium version; QtWebEngine 6.8.0, the current version, is Chromium 122, while Chromium 133 is current at the time of writing.
The fact that Falkon uses a variant of the Chromium engine means websites just work, and there's really nothing to worry about when it comes to compatibility. Another advantage of using QtWebEngine is that the engine is updated independently from the browser, so even if it seems Falkon isn't getting much development, the engine it uses is updated regularly as part of your distribution's and KDE's Qt upgrades. The downside, of course, is that you're using a variant of Chromium, but at least it's de-Googled and entirely invisible to the user. It's definitely not great, and it contributes to the Chromium monoculture, but I can also understand that a project like Qt isn't going to develop its own browser engine, and in turn, it makes perfect sense for KDE, as a flagship Qt product, to use it as well. It's the practical choice, and I don't blame either of them for opting for what works, and what works now - the reality is that no matter what browser you choose, you're either using a browser made by Google, or one kept afloat by Google. Pick your poison. It's not realistic for Qt or KDE to develop their own browser engine from scratch, so opting for the most popular and very well funded browser engine and stripping out all of its nasty Google bits makes the most sense. Yes, we'd all like to have more capable browser engines and thus more competition, but we have to be realistic and understand that's not going to happen while developing a browser engine is as complex as developing an entire operating system. Falkon's issues and strengths While rendering websites, compatibility, and even performance are excellent - as a normal user I don't notice any difference between running Chrome, Firefox, or Falkon on my machines - the user interface and feature set are where Falkon stumbles a bit.
There are a few things users have come to expect from their browser that Falkon simply doesn't offer yet, and those things need to be addressed if the KDE project wants Falkon to be a viable alternative to Firefox and Chrome, instead of just a languishing side project nobody uses. The biggest thing you'll miss is without a doubt support for modern extensions. Falkon does have support for the deprecated PPAPI plugin interface and its own extensions system, but there's no support for the modern extensions API Firefox, Chrome, and other browsers use. What this means for you as a user is that there are effectively no extensions available for Falkon, and that's a huge thing to suddenly have to do without. Luckily, Falkon does have adblock built-in, including support for custom block lists, so the most important extension is there, but that's it. There's a very old bug report/feature request about adding support for Firefox/Chrome extensions, which in turn points to a similar feature request for QtWebEngine to adopt support for such extensions. The gist is that for Falkon to get support for modern Firefox and Chrome extensions, it will have to go through QtWebEngine and thus the Qt project. While I personally can just about get by with using the BitWarden application (instead of the extension) and the built-in adblock, I think this is an absolute must for most people to adopt Falkon in any serious numbers. Most people who would consider switching to a different browser than Chrome or Firefox are going to need extensions support. The second major thing you'll miss is any form of synchronisation support. You won't be synchronising your bookmarks across different machines, let alone open tabs. Of course, this extends to mobile, where Falkon has no presence, so don't expect to send your open tabs from your phone to your desktop as you get home.
While I don't think this is as big of an issue as the lack of modern extensions, it's something I use a lot when working on OSNews - I find stories to link to while browsing on my phone, and then open them on my desktop to post them through the tab sharing feature of Firefox, and
Back in December 2019, Microsoft finally killed off Windows 10 Phone as it announced the end of support. The company's grand plans with Lumia and Windows Phones sadly never became the success it needed to be in order to be able to compete with the likes of Android or iOS. Thus Windows 11 Phone never became a real official thing outside of concepts. However, there is a free unofficial way that makes it possible, albeit the experience may not totally be free from flaws. Dubbed Project Renegade, the mod enables users to try Windows 11 on Qualcomm Snapdragon phones, among other devices. Sayan Sen at Neowin Windows Phone 7 and 8 were amazing, and probably my favourite mobile platform of all time. I'm still sad that the duopoly made it impossible even for Microsoft to gain a foothold, because their efforts definitely deserved it. They didn't just blindly copy Android or iOS, but came up with a truly original, unique, and in my view, superior mobile operating system, and in a fair market, they would've been rewarded for it, and Windows Phone would have a perhaps small, but profitable segment of the market. In the vein of Bernie can still win, I still have this faint belief that Microsoft hasn't completely given up on the smartphone market. Now that they're serious about Windows on ARM, they might use it as a sneaky way to get application developers on board, so that their applications are ready for the big Surface Phone a few years from now, complete with Windows Phone-inspired features and UI. I know this won't happen, but let me enjoy my non-existent future, please. I don't want to rely on big tech anymore, but I might make an exception for an up-to-date Windows Phone. I'm only human.
EXiGY rolls up all of the above experiences into a single package: make games the way they were made in the mid-90s, by dragging and dropping objects into a window, programming some behaviour into those objects, and clicking the Run button. It's like ZZT with tile graphics instead of ASCII. Want to send your little game to some friends? Click the Gift button to package all of the files up, and send your friend the .XGY file. EXiGY is about making it fun to create games again. Chris on the Exigy website I fell in love with this the second I saw it come by on Mastodon. Chris - I don't know the author's full name so I'll stick with Chris - has been working on this for the past year, and it's not out quite yet. Still, the feature list is packed, and on the linked website, they intend to post development updates so we can keep up with the goings-on. This seems like an incredibly cool project and I'd love to play around with it when Chris deems it ready for release.
Every time a new Redox monthly report comes out, I'm baffled by the fact we've apparently rounded another month. They just keep on coming and going, don't they? And I even turned 40 this past 1 December, so it hits even harder this time. I'm now as old as I remember my parents were in some of my oldest memories, and now I've got two kids of my own. Wild. Time isn't supposed to move this fast, and I strongly advise the Redox team to stop this madness. Anyway, this month also saw the release of the 4th alpha of System76's new COSMIC Linux desktop environment, and the parts of COSMIC available on Redox were updated to reflect that. This past month also saw a major milestone: the RISC-V version of Redox running in an emulator on the x86-64 version of Redox. That's quite the feat, and highlights just how capable Redox has become in such a short time. There's also the usual list of kernel, driver, and relibc improvements, as well as additional Rust programs ported to Redox. Also highlighted in this report: a video detailing how to build Redox under Windows Subsystem for Linux. This could be a great avenue for operating system developers who use Windows to get their feet wet at building Redox on their own systems.
Today, we are announcing the availability of Vanir, a new open-source security patch validation tool. Introduced at Android Bootcamp in April, Vanir gives Android platform developers the power to quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches. Vanir significantly accelerates patch validation by automating this process, allowing OEMs to ensure devices are protected with critical security updates much faster than traditional methods. This strengthens the security of the Android ecosystem, helping to keep Android users around the world safe. Google Security Blog Google makes it clear this tool can easily be adapted for other avenues too - it's not locked into only working with Android and Java/C/C++. Since it's now open source, anyone can contribute to it and make it compatible - for lack of a better term - with other platforms and programming languages as well.
Mozilla isn't just another tech company - we're a global crew of activists, technologists and builders, all working to keep the internet free, open and accessible. For over 25 years, we've championed the idea that the web should be for everyone, no matter who you are or where you're from. Now, with a brand refresh, we're looking ahead to the next 25 years (and beyond), building on our work and developing new tools to give more people the control to shape their online experiences. Lindsey Lionheart O'Brien at the Mozilla blog I have no clue about marketing and branding and what investments in those things cost, but all I could think about while reading this massive pile of marketing wank is that the name "Firefox" only occurs once. How many Firefox bugs could've been squashed with the money spent on this rebrand literally nobody is going to care about because nobody uses Firefox as it is? Is a new logo and accompanying verbal diarrhea really what's going to turn this sinking ship around? I've already made my choice, and I've left Firefox behind on all my machines, opting for an entirely different browser instead. I'm writing about that experience as we speak, so you'll have to wait a bit longer to find out what choice I made, but rest assured I know I'm not the only one who is leaving Firefox behind after two decades of loyal service, and I doubt an expensive new logo is going to change anybody's mind.