The current version of Windows on ARM contains Prism, Microsoft's emulator that allows x86-64 code to run on ARM processors. While it was already relatively decent on the recent Snapdragon X platform, it could still be very hit-or-miss with which applications it would run, and games especially seemed to be problematic. As such, Microsoft has pushed out a major update to Prism that adds support for a whole bunch of extensions to the x86 architecture. This new support in Prism is already in limited use today in the retail version of Windows 11, version 24H2, where it enables the ability to run Adobe Premiere Pro 25 on Arm. Starting with Build 27744, the support is being opened to any x64 application under emulation. You may find some games or creative apps that were blocked due to CPU requirements before will be able to run using Prism on this build of Windows. At a technical level, the virtual CPU used by x64 emulated applications through Prism will now have support for additional extensions to the x86 instruction set architecture. These extensions include AVX and AVX2, as well as BMI, FMA, F16C, and others that are not required to run Windows but have become sufficiently commonplace that some apps expect them to be present. You can see some of the new features in the output of a tool like Coreinfo64.exe. Amanda Langowski and Brandon LeBlanc on the Windows Blog Hopefully this makes running existing x86 applications that don't yet have an ARM version a more reliable affair for Windows on ARM users.
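Coreinfo64.exe is the Windows tool for this; on Linux the same feature flags show up in /proc/cpuinfo. As a rough illustration of what "checking for these extensions" amounts to, here's a small Python sketch that scans a cpuinfo-style flags line for the features Prism now exposes. The flag names follow Linux conventions (BMI appears there as bmi1/bmi2), and the sample string is hypothetical - this is not how Prism itself reports anything.

```python
# Sketch: check a /proc/cpuinfo-style "flags" line for the x86 extensions
# Prism's updated virtual CPU now exposes. Flag names follow Linux
# conventions; the sample line below is hypothetical.

WANTED = {"avx", "avx2", "bmi1", "fma", "f16c"}

def supported_extensions(cpuinfo_text: str) -> set[str]:
    """Return which of the wanted extensions appear in the flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu sse ..." -> set of individual flag names
            return WANTED & set(line.split(":", 1)[1].split())
    return set()

sample = "flags\t\t: fpu sse sse2 avx avx2 bmi1 fma f16c"
print(sorted(supported_extensions(sample)))
# → ['avx', 'avx2', 'bmi1', 'f16c', 'fma']
```

On a real Linux system you would pass in open("/proc/cpuinfo").read() instead of the sample string; native applications typically query the CPUID instruction directly, which is the same information Coreinfo64.exe reports.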
A long, long time ago, back when running BeOS as my main operating system had finally become impossible, I had a short stint running QNX as my one and only operating system. In 2004, before I joined OSNews and became its managing editor, I also wrote and published an article about QNX on OSNews, which is cringe-inducing to read over two decades later (although I was only 20 when I wrote that - I should be kind to my young self). Sadly, the included screenshots have not survived the several transitions OSNews has gone through since 2004. Anyway, back in those days, it was entirely possible to use QNX as a general purpose desktop operating system, mostly because of two things. First, the incredible Photon microGUI, an excellent and unique graphical environment that was a joy to use, and second, a small but dedicated community of enthusiasts, some of whom were QNX employees, who ported a ton of open source applications - from basic open source tools to behemoths like Thunderbird, the Mozilla Suite, and Firefox - to QNX. It even came with an easy-to-use package manager and associated GUI to install all of these applications without much hassle. Using QNX like this was a joy. It really felt like a tightly controlled, carefully crafted user experience, despite desktop use being so low on the priority list for the company that it might as well have not been on there at all. Not long after, I think a few of the people inside QNX involved with the QNX desktop community left the company, and the entire thing fizzled out when the company was acquired by Harman International. It became clear the company had lost all interest, a feeling only solidified once BlackBerry acquired the company. Somewhere in between, the company released some of its code under a not-quite-open-source license, accompanied by a rather lacklustre push to get the community interested again. This, too, fizzled out.
Well, it seems the company is trying to reverse course, and has started courting the enthusiast community once again. This time, it's called QNX Everywhere, and it involves making QNX available for non-commercial use for anyone who wants it. No, it's not open source, and yes, it still requires some hoops to jump through, but it's better than nothing. In addition, QNX also put a bunch of open source demos, applications, frameworks, and libraries on GitLab. One of the most welcome new efforts is a bootable QNX image for the Raspberry Pi 4 (and only the 4, sadly, which I don't own). It comes with a basic set of demo applications you can run from the command line, including a graphical web browser, but sadly, it does not seem to come with the Photon microGUI or any modern equivalent. I'm guessing Photon hasn't seen a ton of work since its golden days two decades ago, which might explain why it's not here. There's also a list of current open source ports, which includes chunks of toolkits like GTK and Qt, and a whole bunch of other stuff. Honestly, as cool as this is, it seems it's mostly aimed at embedded developers instead of weird people who want to use QNX as a general purpose operating system, which makes total sense from QNX's perspective. I hope the Photon microGUI will make a return at some point, and it would be awesome - though I suspect unlikely - if QNX could be released as open source, so that it would be more likely a community of enthusiasts could spring up around it. For now, without much for a non-developer like me to do with it, it's not making me run out to buy a Raspberry Pi 4 just yet.
Old-school Apple fans probably remember a time, just before the iPhone became a massive gaming platform in its own right, when Apple released a wide range of games designed for late-model clickwheel iPods. While those clickwheel-controlled titles didn't exactly set the gaming world on fire, they represent an important historical stepping stone in Apple's long journey through the game industry. Today, though, these clickwheel iPod games are on the verge of becoming lost media - impossible to buy or redownload from iTunes and protected on existing devices by incredibly strong Apple DRM. Now, the classic iPod community is engaged in a quest to preserve these games in a way that will let enthusiasts enjoy these titles on real hardware for years to come. Kyle Orland at Ars Technica A nice effort, of course, and I'm glad someone is putting time and energy into preserving these games and making them accessible to a wider audience. As is usual with Apple, these small games were heavily encumbered with DRM, being locked both to the original iTunes account that bought them and to the specific hardware identifier of the iPod they were initially synchronised to using iTunes. A clever way around this DRM exists, and it involves collectors and enthusiasts reauthorising their iTunes accounts to the same iTunes installation, and thus adding their respective iPod games to that single iTunes installation. Any other iPods can then be synced to that master account. The iPod Clickwheel Games Preservation Project takes this approach to the next level, by setting up a Windows virtual machine with iTunes installed in it, which can then be shared freely around the web for people to add the games to their collection. This is a remarkably clever method of ensuring these games remain accessible, but it obviously does require knowledge of setting up QEMU and USB passthrough.
I personally never owned an iPod - I was a MiniDisc fanatic until my Android phone took over the role of music player - so I also had no clue these games even existed. I assume most of them weren't exactly great to control with the limited input method of the iPod, but that doesn't mean there aren't huge numbers of people who have fond memories of playing these games when they were younger - and thus, they are worth preserving. We can only hope that one day, someone will create a virtual machine that can run the actual iPod operating system, called Pixo OS.
Nothing is sacred. With this update, we are introducing the ability to rewrite content in Notepad with the help of generative AI. You can rephrase sentences, adjust the tone, and modify the length of your content based on your preferences to refine your text. Dave Grochocki at the Windows Insider Blog This is the reason everything is going to shit.
Today, Microsoft announced the general availability of Windows Server IoT 2025. This new release includes several improvements, including advanced multilayer security, hybrid cloud agility, AI, performance enhancements, and more. Microsoft claims that Windows Server IoT 2025 will be able to handle the most demanding workloads, including AI and machine learning. It now has built-in support for GPU partitioning and the ability to process large datasets across distributed environments. With Live Migration and High Availability, it also offers a high-performance platform for both traditional applications and advanced AI workloads. Pradeep Viswanathan at Neowin Windows Server IoT 2025 brings the same benefits, new features, and improvements as the just-released regular Windows Server 2025. I must admit I'm a little unclear as to what Windows Server IoT has to offer over the regular edition, and reading the various Microsoft marketing materials and documents doesn't really make it any clearer for me either, since I'm not particularly well-versed in all that enterprise networking lingo.
NetBSD is an open-source, Unix-like operating system known for its portability, lightweight design, and robustness across a wide array of hardware platforms. Initially released in 1993, NetBSD was one of the first open-source operating systems based on the Berkeley Software Distribution (BSD) lineage, alongside FreeBSD and OpenBSD. NetBSD's development has been led by a collaborative community and is particularly recognized for its "clean" and well-documented codebase, a factor that has made it a popular choice among users interested in systems programming and cross-platform compatibility. Andre Machado I'm not really sure what to make of this article, since it mostly reads like an advertisement for NetBSD, but considering NetBSD is one of the lesser-talked-about variants of an operating system family that already sadly plays second fiddle to the Linux behemoth, I don't think giving it some additional attention is really hurting anybody. The article still gives a solid overview of the history and strengths of NetBSD, which makes it a good introduction. I have personally never tried NetBSD, but it's on my list of systems to try out on my PA-RISC workstation, since from what I've heard it's the only BSD which can possibly load up X11 on the Visualize FX10pro graphics card it has (OpenBSD can only boot to a console on this GPU). While I could probably coax some cobbled-together Linux installation into booting X11 on it, where's the fun in that? Do any of you lovely readers use NetBSD for anything? FreeBSD and even OpenBSD are quite well represented as general purpose operating systems in the kinds of circles we all frequent, but I rarely hear about people using NetBSD other than explicitly because it supports some outdated, arcane architecture in 2024.
Another month lies behind us, so another monthly update from Redox is upon us. The biggest piece of news this time is undoubtedly that Redox now runs on RISC-V - a major achievement. Andrey Turkin has done extensive work on RISC-V support in the kernel, toolchain and elsewhere. Thanks very much Andrey for the excellent work! Jeremy Soller has incorporated RISC-V support into the toolchain and build process, has begun some refactoring of the kernel and device drivers to better handle all the supported architectures, and has gotten the Orbital Desktop working when running in QEMU. Ribbon and Ron Williams That's not all, though. Redox on the Raspberry Pi 4 boots to the GUI login screen, but needs more work, especially on USB support, to become a fully usable target. The application store from the COSMIC desktop environment has been ported, and as part of this effort, Redox also adopted FreeDesktop standards to make package installation easier - and it just makes sense to do so, with more and more of COSMIC making its way to Redox. Of course, there's also a slew of smaller improvements to the kernel, various drivers including the ACPI driver, RedoxFS, Relibc, and a lot more. The progress Redox is making is astounding, and while that's partly because it's easier to make progress when there's a lot of low-hanging fruit, as there inevitably will be in a relatively new operating system, it's still quite an achievement. I feel very positive about the future of Redox, and I can't wait until it reaches a point where more general purpose use becomes viable.
Microsoft has confirmed the general availability of Windows Server 2025, which, as a long-term servicing channel (LTSC) release, will be supported for almost ten years. This article describes some of the newest developments in Windows Server 2025, which boasts advanced features that improve security, performance, and flexibility. With faster storage options and the ability to integrate with hybrid cloud environments, managing your infrastructure is now more streamlined. Windows Server 2025 builds on the strong foundation of its predecessor while introducing a range of innovative enhancements to adapt to your needs. What's new in Windows Server 2025 article It should come as no surprise that Windows Server 2025 comes loaded with a ton of new features and improvements. I already covered some of those, such as DTrace by default, NVMe and storage improvements, hotpatching, and more. Other new features we haven't discussed yet are a massive list of changes and improvements to Active Directory, Azure Arc as a feature-on-demand, support for Bluetooth keyboards, mice, and other peripherals, and tons of Hyper-V improvements. SMB is also seeing so many improvements it's hard to pick just a few to highlight, and software-defined networking is also touted as a major aspect of Server 2025. With SDN you can separate the network control plane from the data plane, giving administrators more flexibility in managing their network. I could keep going listing all of the changes, but you get the idea - there's a lot here. You can try Windows Server 2025 for free for 180 days, as a VM in Azure, a local virtual machine image, or installed locally through an ISO image.
Some months ago, I got really fed up with C. Like, I don't hate C. Hating programming languages is silly. But it was way too much effort to do simple things like lists/hashmaps and other simple data structures and such. I decided to try this language called Odin, which is one of these "Better C" languages. And I ended up liking it so much that I moved my game Artificial Rage from C to Odin. Since Odin has support for Raylib too (like everything really), it was very easy to move things around. Here's how it all went… or at least, the parts of it I remember. Akseli Lahtinen You programmers might've thought you escaped the wrath of Monday on OSNews, but after putting the IT administrators to work in my previous post, it's now time for you to get to work. If you have a C codebase and want to move it to something else, in this case Odin, Lahtinen's article will send you on your way. As someone who barely knows how to write HTML, it's difficult for me to say anything meaningful about the technical details, but I feel like there's a lot of useful, first-hand info here.
It's the start of the work week, so for the IT administrators among us, I have another great article by friend of the website, Stefano Marinelli. This article covers migrating a Proxmox-based setup to FreeBSD with bhyve. The load is not particularly high, and the machines have good performance. Suddenly, however, I received a notification: one of the NVMe drives died abruptly, and the server rebooted. ZFS did its job, and everything remained sufficiently secure, but since it's a leased server and already several years old, I spoke with the client and proposed getting more recent hardware and redoing the setup based on a FreeBSD host. Stefano Marinelli If you're interested in moving one of your own setups, or one of your clients' setups, from Linux to FreeBSD, this is a great place to start and get some ideas, tips, and tricks. Like I said, it's Monday, and you need to get to work.
It's been less than a week, and late Friday night we reached the fundraiser goal of 2500 (it sat at 102% when I closed it) on Ko-Fi! I'm incredibly grateful for each and every donation, big or small, and every new Patreon that joined our ranks. It's incredible how many of you are willing to support OSNews to keep it going, and it means the absolute world to me. Hopefully we'll eventually reach a point where monthly Patreon income is high enough so we can turn off ads for everyone, and be fully free from any outside dependencies. Of course, it's not just those that choose to support us financially - every reader matters, and I'm very thankful for each and every one of you, donor/Patreon or not. The weekend's almost over, so back to regular posting business tomorrow. I wish y'all an awesome Sunday evening.
Many MacOS users are probably used by now to the annoyance that comes with unsigned applications, as they require a few extra steps to launch them. This feature is called Gatekeeper and checks for an Apple Developer ID certificate. Starting with MacOS Sequoia 15, the easy bypassing of this feature with e.g. holding Control when clicking the application icon is now no longer an option, with version 15.1 disabling ways to bypass this completely. Not unsurprisingly, this change has caught especially users of open source software like OpenSCAD by surprise, as evidenced by a range of forum posts and GitHub tickets. Maya Posch at Hackaday It seems Apple has disabled the ability for users to bypass application signing entirely, which would be just the next step in the company's long-standing effort to turn macOS into iOS, with the same, or at least similar, lockdowns and restrictive policies. This would force everyone developing software for macOS to spend $99 per year in order to get their software signed, which may not be a realistic option for a lot of open source software. Before macOS 15.0, you could ctrl+right-click an unsigned application and force it to run. In macOS 15.0, Apple removed the ability to do this; instead, you had to try to open the application (which would fail), then open System Settings, go to Privacy & Security, and click the "Open Anyway" button to run the application. Stupidly convoluted, but at least it was possible to run unsigned applications. In macOS 15.1, however, even this convoluted method no longer seems to be working. When you try to launch an unsigned application in macOS 15.1, you get a dialog that reads "The application 'Finder' does not have permission to open (null)", and no button to open the application anyway appears under Privacy & Security. The wording of the dialog would seem to imply this is a bug, but Apple's lack of attention to UI detail in recent years means I wouldn't be surprised if this is intentional.
This means that the only way to run unsigned applications on macOS 15.1 is to completely disable System Integrity Protection and Gatekeeper. To do this, you have to boot into recovery mode, open the terminal, run the command sudo spctl --master-disable, and reboot. However, I do not consider this a valid option for 99.9% of macOS users, and having to disable complex protections like this through recovery mode and several reboots just to launch an application is utterly bizarre. For those of you still stuck on macOS, I can only hope this is a bug, and not a feature.
In a major shift of its release cycle, Google has revealed that Android 16 will be released in Q2 of 2025, confirming my report from late last month. Android 16 is the name of the next major release of the Android operating system, and its release in Q2 marks a significant departure from the norm. Google typically pushes out a new major release of Android in Q3 or Q4, but the company has decided to move next year's major release up by a few months so more devices will get the update sooner. Mishaal Rahman at Android Authority That's a considerable shake-up of Android's long-lasting release cadence. The change includes more than just moving up the major Android release, as Google also intends to ship more minor releases of Android throughout the year. The company has already unveiled a rough schedule for Android 16, only weeks after releasing Android 15, with the major Android 16 release coming in the second quarter of 2025, followed by a minor release in the fourth quarter of 2025. There are two reasons Google is doing this. First, this new release schedule better aligns with when new flagship Android devices are released, so that from next year onwards, they can ship with the latest version of Android of that year preinstalled, instead of last year's release. This should help bump up the number of users on the latest release. Second, this will allow Google to push out SDK releases more often, allowing for faster bug fixing. I honestly feel like most users will barely notice this change. Not only is the Android update situation still quite messy compared to its main rival iOS, the smartphone operating system market has also matured quite a bit, and the changes between releases are no longer even remotely as massive as they used to be. Other than Pixel users, I don't think most people will even realise they're on a faster release schedule.
Genode's rapid development carries on apace. Whilst Genode itself is a so-called OS framework - the computing version of a rolling chassis that can accept various engines (microkernels) and coachwork of the customer's choice - they also have an in-house PC desktop system. This flagship product, Sculpt OS, comes out twice a year, and autumn brings us the second release for the year, with what has become an almost customary big advance: Among the many usability-related topics on our road map, multi-monitor support is certainly the most anticipated feature. It motivated a holistic modernization of Genode's GUI stack over several months, encompassing drivers, the GUI multiplexer, inter-component interfaces, up to widget toolkits. Sculpt OS 24.10 combines these new foundations with a convenient user interface for controlling monitor modes, making brightness adjustments, and setting up mirrored and panoramic monitor configurations. Genode website Sculpt OS 24.10 is available as a ready-to-use system image for PC hardware, the PinePhone, and the MNT Reform laptop.
Another day, another Windows Recall problem. Microsoft is delaying the feature yet again, this time from October to December. "We are committed to delivering a secure and trusted experience with Recall. To ensure we deliver on these important updates, we're taking additional time to refine the experience before previewing it with Windows Insiders," says Brandon LeBlanc, senior product manager of Windows, in a statement to The Verge. "Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot Plus PCs by December." Tom Warren at The Verge Making Recall secure, opt-in, and uninstallable is apparently taking more time than the company originally planned. When "secure", "opt-in", and "uninstallable" are not keywords during your design and implementation process for new features, this is the ungodly mess you end up with. This could've all been prevented if Microsoft wasn't high on its own "AI" supply.
Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland's Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends. Alfonso Maruccia at Techspot Well, he's not wrong. "AI" definitely feels like a bubble at the moment, and while there are probably eventually going to be useful implementations people might actually want to actively use to produce quality content, most "AI" features today produce a stream of obviously fake diarrhea full of malformed hands, lies, and misinformation. Maybe we'll eventually work out these serious kinks, but for now, it's mostly just a gimmick providing us with an endless source of memes. Which is fun, but not exactly what we're being sold, and not something worth destroying the planet for even faster. Meanwhile, Google is going utterly bananas with its use of "AI" inside the company, with Sundar Pichai claiming 25% of code inside Google is now "AI"-generated: We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster. Sundar Pichai So much here feels wrong. First, who wants to bet those engineers care a whole lot less about the generated code than they do about code they write themselves? Second, who wants to bet that generated code is entirely undocumented? Third, who wants to bet what the additional costs will be a few years from now when the next batch of engineers tries to make sense of that undocumented generated code? Sure, Google might save a bit on engineers' salaries now, but how much extra will they have to spend to unspaghettify that diarrhea code in the future?
It will be very interesting to keep an eye on this, and check back in, say, five years, and hear from the Google engineers of the future how much of their time is spent fixing undocumented "AI"-generated code. I can't wait.
It seems the GNOME team is getting quite serious about turning GNOME OS into an end-user focused Linux distribution, similar to a project KDE is working on. GNOME OS is GNOME's development, testing, and QA distribution. It's not designed to be useful as a general-purpose system, and so it hasn't been the center of attention. However, that makes it a convenient place to experiment, and ultimately through sheer coincidence the GNOME OS team ended up developing something that follows my vision using the same technology that I was. The only real difference was intent: carbonOS was intended for mass adoption, and GNOME OS was not. In essentially every other aspect, the projects had the same roadmap: following Lennart Poettering's "Fitting Everything Together" proposal, providing a stock GNOME experience, and even using the same build system (BuildStream). Adrian Vovk The goal with GNOME OS is to showcase the best GNOME has to offer, built on top of an immutable base system, using Flatpak as the means to install applications. Basically, we're looking at something very similar to the immutable Fedora GNOME variant, but probably with even fewer modifications to stock GNOME, and perhaps a few newer defaults, like systemd-boot over GRUB. KDE also happens to be working on a very similar project, with many of the same design choices and constraints. I think this is an excellent idea, for both GNOME and KDE. This allows them to offer users a very focused, simple, and resilient way of showcasing the latest and greatest the two desktop environments have to offer, without having to rely on third-party distributions to not make silly choices or mess things up - for which GNOME and KDE developers then tend to take the heat. Systems like these will, of course, also be a great way for developers to quickly spin up the latest stock versions of GNOME and KDE to test their applications. Still, there's also a downside to having official GNOME and KDE distributions.
If users find bugs or issues in these desktop environments when running other distributions, like Fedora or Ubuntu, GNOME and KDE developers may be tempted to just shrug them off and point users to the official GNOME and KDE distributions. It works there, so obviously the cause of the bug lies with the unofficial distribution, right? This may be a tempting conclusion, but it may not be accurate at all, as the real cause could still lie with GNOME and KDE. Once such "official" GNOME and KDE Linux distributions exist, the projects run a real risk of only really caring about how well GNOME and KDE work there, while not caring as much, or even at all, how well they run everywhere else. I'm not sure how they intend to prevent this from happening, but from here, I can already see the drama erupting. I hope this is something they take into consideration. Immutable distributions are not for me, since I prefer the control regular Fedora and RPM give me, and I don't want to give that up. It also doesn't help that I really, really don't like Flatpak as it exists today, which is another major barrier to entry for someone like me, and I assume most OSNews readers. However, there are countless Linux users out there who just want to get stuff done with whatever defaults come with their operating system, and for them, this newly proposed GNOME OS and its KDE counterpart are a great choice. There's a reason Valve opted for an Arch-based immutable KDE distribution for the Steam Deck, after all.
There's been more controversy regarding Microsoft's Recall feature for Windows, with people supposedly discovering Recall was being secretly installed on Windows 11 24H2. Furthermore, trying to remove this secretly installed Recall would break Explorer, as it seemed Explorer had a dependency on Recall. Unsurprisingly, this spread like wildfire all across the web, but I didn't report on it because something about it felt off - reports were sporadic and vague, and there didn't seem to be any consistency in the various stories. Well, it turns out that it is a big misunderstanding arising from Microsoft's usual incompetence. "Ever since the Recall security fiasco in summer, all insider and production builds lack Recall completely," explains Windows watcher Albacore, in messages to The Verge. Albacore created the Amperage tool that allowed Recall to run on older Snapdragon chips. "The references we're seeing in current installs of 24H2 are related to Microsoft making it easier for system admins to remove Recall or disable it. Ironically, Microsoft going out of its way to make removal easier is being flipped into AI / spying / whatever hoaxes," says Albacore. "Microsoft has an ungodly complex and long-winded system for integrating development changes into a mainline build, parts of the optional-izing work were most likely not merged at once, and thus produce crash loops in very specific scenarios that slipped testing," explains Albacore. Tom Warren at The Verge What this story really highlights is just how little trust Microsoft has left with its very own users. Microsoft has a history of silently and secretly re-enabling features users turned off, re-installing Edge without any user interaction or consent, lots of disabled telemetry features suddenly being turned on again after an update, and so on. Over the years, this has clearly eroded any form of trust users have in Microsoft, so when a story like this hits, users just assume it's Microsoft doing shady stuff again.
Can you blame them? All of this is made worse by the absolutely dreadful messaging and handling of the Recall feature. Combine the shoddy implementation, the complete lack of security, and the severe inability to read the room about the privacy implications of a feature like Recall with the lack of trust mentioned above, and you have a very potent cocktail of misinformation entirely of Microsoft's own making. I'm not trying to excuse Microsoft here - they themselves are the only ones to blame for stories like these. I have a feeling we're going to see a lot more Recall problems.
The standard trope when talking about timezones is to rattle off falsehoods programmers believe about them. These lists are only somewhat enlightening - it's really hard to figure out what truth is just from the contours of falsehood. So here's an alternative approach. I'm gonna show you some weird timezones. In fact, the weirdest timezones. They're each about as weird as timezones are allowed to get in some way. Ulysse Carion The reason timezones are often weird is not only things like the shape of countries dictating where the actual timezones begin and end, but also politics. A lot of politics. The entirety of China runs on Beijing time, even though the country covers five geographical timezones. Several islands in the Pacific were forced by their colonisers to run on insanely offset timezones because it made exploiting them easier. Time in Europe is political, too - countries like The Netherlands, Belgium, France, and Spain should really be in the same time zone as the UK, but adopted UTC+1 because it aligns better with the rest of mainland Europe. Although anything is better than whatever the hell Dutch Time was. Then there is, of course, daylight saving time, which is a whole pointless nightmare in and of itself that should be abolished. Daylight saving rules and exceptions alone cover a ton of the oddities and difficulties with timezones, which is reason enough to get rid of it, aside from all the other possible issues, but a proposal to abolish it in the EU has sadly stalled.
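Some of these oddities are easy to see for yourself with Python's standard zoneinfo module (this sketch assumes Python 3.9+ and the IANA tz database being available on the system; the probe date is mid-January, so northern-hemisphere DST is not in play):

```python
# Probe a few famously odd offsets from the IANA tz database.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

probe = datetime(2024, 1, 15, tzinfo=timezone.utc)

zones = [
    "Asia/Shanghai",       # all of China runs on Beijing time, UTC+8
    "Asia/Urumqi",         # unofficial Xinjiang time, UTC+6
    "Pacific/Kiritimati",  # UTC+14, the earliest clocks on Earth
    "Pacific/Chatham",     # base UTC+12:45 (on +13:45 summer time in January)
    "Asia/Kathmandu",      # UTC+5:45
]

for name in zones:
    offset = probe.astimezone(ZoneInfo(name)).utcoffset()
    print(f"{name:20} {offset}")
```

Note that Chatham illustrates both oddities at once: a 45-minute base offset and a daylight saving rule on top of it, which is exactly the kind of stacked exception that makes timezone handling such a minefield.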
Speaking of Wayland, one of the most important parts of the transition is Xwayland, which makes sure legacy X applications not yet capable of running under a modern graphics stack can continue to function. Xwayland applications have had this weird visual glitch during resize operations, however, where the opposite side of the window would expand and contract while resizing. KDE developer Vlad Zahorodnii wanted to fix this, and he wrote a very detailed article explaining why, exactly, this bug happens, which takes you deep into the weeds of X and Wayland. Window resizing in X would be a glitchy mess if it weren't for the X11 protocol for synchronizing window repaints during interactive resize, which ensures that the window resize and the application repainting its window contents remain synchronised. This protocol is supported by KWin and GNOME's Mutter, so what's the problem here? Shouldn't everything just work? KWin supports the basic frame synchronization protocol, so there should be no visual glitches when resizing X11 windows in the Plasma Wayland session, right? At quick glance, yes, but we forget about the most important detail: Wayland compositors don't use XCompositeNameWindowPixmap() or xcb_composite_name_window_pixmap() to grab the contents of X11 windows; instead they rely on Xwayland attaching graphics buffers to wl_surface objects, so there is no strict order between the Wayland compositor receiving an XSync request acknowledgement and graphics buffers for the new window size. Vlad Zahorodnii Basically, the goal of the fix is to make sure these steps are also synchronised when using Xwayland, and that's exactly what Zahorodnii has achieved. This makes resizing X windows under Xwayland look normal, without weird visual glitches, which is a massive improvement to the overall experience of using a Wayland desktop with a few stray X applications.
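The handshake at the heart of all this is easier to grasp in miniature. The following toy model is illustrative Python, not actual X11 or Wayland code - all names are made up - but it shows why a compositor must wait for the client's acknowledgement before presenting a resized window:

```python
# Toy model of the X11 frame-synchronization handshake during interactive
# resize (illustrative only; the real protocol uses XSync counters and,
# under Xwayland, wl_surface buffer attachments).

class Client:
    def __init__(self):
        self.counter = 0          # sync counter the client bumps to ack
        self.size = (800, 600)

    def handle_configure(self, size, serial):
        self.size = size          # repaint at the new size...
        self.counter = serial     # ...then acknowledge the serial

class Compositor:
    def __init__(self, client):
        self.client = client
        self.serial = 0
        self.displayed = client.size

    def resize(self, size):
        self.serial += 1          # tag the configure with a serial
        self.client.handle_configure(size, self.serial)

    def present(self):
        # Only display the new size once the client has acked the serial.
        # The Xwayland glitch was, roughly, presenting buffers without
        # any such ordering guarantee.
        if self.client.counter == self.serial:
            self.displayed = self.client.size
        return self.displayed

client = Client()
comp = Compositor(client)
comp.resize((1024, 768))
print(comp.present())  # (1024, 768) once the repaint is acked
```

Drop the counter check in present() and you get exactly the artifact described above: the compositor shows a buffer whose size doesn't match the geometry it just configured.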
Thanks to this fix, which was made possible with help from Wayland developers, KWin is now one of the few compositors that correctly synchronises X windows running under Wayland. KDE has been doing an amazing job moving from X to Wayland, and I don't think there's anyone else who has managed to make the transition quite as painless. Not only do KDE developers focus on difficult bugs like this one that many others would just shrug off as acceptable jank, they also made things like the Wayland to X11 Video Bridge, a desktop-agnostic tool to allow things like screen sharing in Teams, Discord, Slack, etc. to work properly on Wayland.
The slow rise of Wayland hasn't really been slow anymore for years now, and today another major part of the Linux ecosystem is making the jump from X to Wayland. So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop. After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models. Simon Long Raspberry Pi Desktop already used Wayland on some of the newer models, through the use of Wayfire. However, it turned out Wayfire wasn't a good fit for the older Pi models, and Wayfire's development direction would move it even further away from that goal, which is obviously important to the Raspberry Pi Foundation. They eventually settled on using labwc instead, which can also be used on older Pi models. As such, all Pi models will now switch to using Wayland with the latest update to the operating system. This new update also brings vastly improved touchscreen support, a rewritten panel application that won't keep removed plugins in memory, a new display configuration utility, and more.
Do you want OSNews to continue to exist? Do you like the selection of news items I manage to scrounge up almost every day? Do you want OSNews free from corporate influence, "AI"-generated nonsense, and the kind of SEO-optimised blogspam we all despise? Consider supporting OSNews financially, so I can keep running the site as an independent entity, free from the forces that make the web shittier every day. There are several ways you can support OSNews. First, you can become a Patreon. Being an OSNews Patreon means no more ads on OSNews, access to the OSNews Matrix room, and some fancy flair on your comments. The goal is to eventually have enough Patreons supporting us to make us independent even from regular ads, which means we'll need to hit at least $1500-2000 a month. Once we achieve that, we will turn off ads for everyone. OSNews is my job, and thus my only source of income, so we can only turn off ads once community support is high enough to do so. This is obviously a long-term goal. To help us all get there, I've added a brand new, even higher Patreon tier. If being a Platinum Patreon isn't enough for you, you can now move on up and become an Antimatter Patreon for $50/month. You'll get all the same benefits as the Platinum tier, but on top of that, you can opt to have your name permanently displayed on the frontpage in our sidebar. This tier is really specifically designed for the most hardcore supporters of OSNews, and can even be used as a bit of a marketing tool for yourself. By the way, I do not know where to go after antimatter. What's rarer and more expensive than antimatter? Second, you can make an individual donation to OSNews through Ko-Fi. Recently, my wife, two kids, and I were all hit with, in order, bronchitis, flu, and then a minor cold. With all of us down and out, unable to work, our finances obviously took a bit of a hit.
My wife works in home care for the elderly, which isn't exactly a job with a fair wage, so any time we can't work it hits us hard. Individual Ko-Fi donations have proven to be lifesavers. As such, I've set up a Ko-Fi donation target of $2500, so my wife, kids, and I can build up a bit of a buffer for emergencies. Creating such a buffer will be a huge load off our backs. Third, we have official OSNews merch! Our merch store is filled with a ton of fun products for the operating system connoisseurs among us, from the basic OSNews T-shirt and mug, to the old-school ASCII-art OSNews T-shirt and sweatshirt, and finally three unique terminal T-shirts showing the terminals of MS-DOS, BeOS, and Mac OS X. Each of the terminal shirts sports the correct colour schemes, text, and fonts. The pricing has been set up in such a way that for each product sold, we receive about $8. OSNews has always been a passion project for everyone involved, and I'd like to continue that. By making sure we're independent, free from the forces that are destroying websites left, right, and centre, OSNews can keep doing what it's always done: report on things nobody else covers, without the pressure to post 45 items about every new iPhone, stupid SEO blogspam nonsense about how to plug in a USB cable or whatever, or "AI"-generated drudgery. The people making that possible are all of our Patreons, Ko-Fi donors, and merch customers. You have no idea how thankful I am for each and every one of you.
The Trinity Desktop Environment, a fork of the last release in the KDE 3.x series, has just released its latest version, R14.1.3. Despite the rather small version number change, it contains some very welcome new features. TDE has started the process of integrating the XDG Desktop Portal API, which will bring a lot of welcome integration with applications from the wider ecosystem. There's also a brand new touchpad settings module, which was something I was sorely missing when I tried out TDE a few months ago. Furthermore, there's of course a ton of bugfixes and improvements, but also things like support for tiling windows, some new theme and colour scheme options, and a lot more. Not too long ago, when KDE's Akademy 2024 took place, a really fun impromptu event happened. A number of KDE developers got together - I think in a restaurant or coffee place - and ended up organising an unplanned TDE installation party. Several photos floated around Mastodon of KDE developers using TDE, and after a few fun interactions between KDE and TDE developers on Mastodon, TDE developers ended up being invited to next year's Akademy. We'll have to wait and see if the schedules line up, but if any of this can lead to both projects benefiting from some jolly cooperation, it can only be seen as a good thing. Regardless, TDE is an excellent project with a very clear goal, and they're making steady progress all the time. It's not a fast-paced environment chasing the latest and greatest technologies, but one that instead builds upon a solid foundation, bringing it into the modern world where it makes sense. If you like KDE 3.x, TDE is going to be perfect for you.
There are many ways to judge whether an operating system has made it to the big leagues, and one of the more unpleasant ones is the availability of malware. Haiku, the increasingly capable and daily-driveable successor to BeOS, is now officially a mainstream operating system, as it just had its first piece of malware. HaikuRansomware is an experimental ransomware project designed for educational and investigative purposes. Inspired by the art of poetry and the challenge of cryptography, this malware encrypts files with a custom extension and provides a ransom note with a poetic touch. This is a proof of concept aimed to push the boundaries of how creative ransomware can be designed. HaikuRansomware's GitHub page Now this is obviously a bit of a tongue-in-cheek, experimental kind of thing, but it's still something quite unique to happen to Haiku. I'm not entirely sure how the ransomware is supposed to spread, but my guess would be through social engineering. With Haiku being a relatively small project, and one wherein every user runs as root - baron, in BeOS parlance - I'm sure anything run through social engineering can do some serious damage without many guardrails in place. Don't quote me on that, though, as Haiku may have more advanced guardrails and mitigations in place than classic BeOS did. This proof-of-concept has no ill intent, and is more intended as an art project to highlight what you can do with encryption and ransomware on Haiku today, and I definitely like the art-focused approach of the author.
As of the previous release of POSIX, the Austin Group gained more control over the specification, having it be more working group oriented, and they got to work making the POSIX specification more modern. POSIX 2024 is the first release that bears the fruits of this labor, and as such, the changes made to it are particularly interesting, as they will define the direction of the specification going forwards. This is what this article is about! Well, mostly. POSIX is composed of a couple of sections. Notably XBD (Base Definitions, which talk about things like what a file is, how regular expressions work, etc), XSH (System Interfaces, the C API that defines POSIX's internals), and XCU (which defines the shell command language, and the standard utilities available for the system). There's also XRAT, which explains the rationale of the authors, but it's less relevant for our purposes today. XBD and XRAT are both interesting as context for XSH and XCU, but those are the real meat of the specification. This article will focus on the XCU section, in particular the utilities part of that section. If you're more interested in the XSH section, there's an excellent summary page by sortix's Jonas Termansen that you can read here. im tosti The weekend isn't over yet, so here's some more light reading.
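Two of the XCU additions are easy to try right now, since GNU coreutils has shipped them for years - timeout and realpath are among the utilities newly standardized in Issue 8 (worth double-checking against the spec itself). A quick sketch from Python, assuming a Linux-style system with these binaries on the PATH:

```python
# timeout(1) and realpath(1) are among the utilities newly standardized in
# POSIX 2024's XCU volume; this assumes a system that ships them (coreutils).
import subprocess

# timeout kills the command after 1 second and, by convention, exits 124.
r = subprocess.run(["timeout", "1", "sleep", "5"])
print(r.returncode)  # 124

# realpath resolves a path to a canonical absolute path.
out = subprocess.run(["realpath", "/usr/../etc"], capture_output=True, text=True)
print(out.stdout.strip())  # /etc
```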
Old Vintage Computing Research, by the incredibly knowledgeable Cameron Kaiser, is one of the best resources on the web about genuinely obscure retrocomputing, often diving quite deep in topics nobody else covers - or even can cover, considering how rare some of the hardware Kaiser covers is. I link to Old VCR all the time, and today I've got two more great articles by Kaiser for you. First, we've got the more well-known - relatively speaking - of the two devices covered today, and that's the MIPS ThinkPad, officially known as the IBM WorkPad z50. This was a Windows CE 2.11 device powered by a NEC VR4120 MIPS processor, running at 131 MHz, released in 1999. Astute readers might note the WorkPad branding, which IBM also used for several rebranded Palm Pilots. Kaiser goes into his usual great detail covering this device, with tons of photos, and I couldn't stop reading for a second. There's so much good information in here I have no clue what to highlight, but since OSNews has OS in the name, this section makes sense to focus on: The desktop shortcuts are pre-populated in ROM along with a whole bunch of applications. The marquee set that came on H/PC Pro machines was Microsoft Pocket Office (Pocket Word, Pocket Excel, Pocket Access and Pocket PowerPoint), Pocket Outlook (Calendar, Contacts, Inbox and Tasks) and Pocket Internet Explorer, but Microsoft also included Calculator, InkWriter (not too useful on the z50 without a touch screen), Microsoft Voice Recorder, World Clock, ActiveSync (a la Palm HotSync), PC Link (direct connect, not networked), Remote Networking, Terminal (serial port and modem), Windows Explorer and, of course, Solitaire. IBM additionally licensed and included some of bSquare's software suite, including bFAX Pro for sending and receiving faxes with the softmodem, bPRINT for printing and bUSEFUL Backup Plus for system backups, along with a battery calibrator and a Rapid Access quick configuration tool.
There is also a CMD.EXE command shell, though it too is smaller and less functional than its desktop counterpart. Old Vintage Computing Research Using especially these older versions of Windows CE is a wild experience, because you can clearly tell Microsoft was trying really hard to make it look and feel like 'normal' Windows, but as anyone who used Windows CE back then can attest, it was a rather poor imitation with a ton of weird limitations and design decisions borne from the limited hardware it was designed to run on. I absolutely adore the various incarnations of Windows CE and associated graphical shells it ran - especially the PocketPC days - but there's no denying it always felt quite clunky. Moving on, the second Old VCR article I'm covering today is more difficult for me to write about, since I am too young to have any experience with the 8-bit era - save for some experience with the MSX platform as a wee child - so I have no affinity for machines like the Commodore 64 and similar machines from that era. And, well, this article just so happens to be covering something called the Commodore HHC-4. Once upon a time (and that time was Winter CES 1983), Commodore announced what was to be their one and only handheld computer, the Commodore HHC-4. It was never released and never seen again, at least not in that form. But it turns out that not only did the HHC-4 actually exist, it also wasn't manufactured by Commodore - it was a Toshiba. Like Superman had Clark Kent, the Commodore HHC-4 had a secret identity too: the Toshiba Pasopia Mini IHC-8000, the very first portable computer Toshiba ever made. And like Clark Kent was Superman with glasses, compare the real device to the Commodore marketing photo and you can see that it's the very same machine modulo a plastic palette swap. Of course there's more to the story than that.
Old Vintage Computing Research Of course, Kaiser hunted down an IHC-8000, and details his experiences with the little handheld, calculator-like machine. It turns out it's most likely using some unspecified in-house Toshiba architecture, running at a few hundred kHz, and it's apparently quite sluggish. It never made it to market in Commodore livery, most likely because of its abysmal performance. The cost of the work required to make this little machine more capable and competitive probably couldn't have been recouped at its intended list price, Kaiser argues.
Firmware, software that's intimately involved with hardware at a low level, has changed radically with each of the different processor architectures used in Macs. Howard Oakley A quick but still detailed overview of the various approaches to Mac firmware Apple has employed over the years, from the original 68k firmware and Mac OS ROMs to the modern approach specific to Apple's M-series chips.
There's a date looming on the horizon for the vast majority of Windows users. While Windows 11 has been out for a long time now, most Windows users are using Windows 10 - about 63% - while Windows 11 is used by only about 33% of Windows users. In October 2025, however, support for Windows 10 will end, leaving two-thirds of Windows users without the kind of updates they need to keep their systems secure and running smoothly. Considering Microsoft is in a lot of hot water over its security practices once again lately, this must be a major headache for the company. The core of the problem is that Windows 11 has a number of very strict hardware requirements that are largely arbitrary, and make it impossible for huge swaths of Windows 10 users to upgrade to Windows 11 even if they wanted to. And that is a problem in and of itself too: people don't seem to like Windows 11 very much, and definitely prefer to stick to Windows 10 even if they can upgrade. It's going to be quite difficult for Microsoft to convince those people to upgrade, which likely won't happen until these people buy a new machine, which in turn is something that just isn't necessary as often as it used to be. That first group of users - the ones who want to upgrade, but can't - do have unofficial options, a collection of hacks to jank Windows 11 into installing on unsupported hardware. This comes with a number of warnings from Microsoft, so you may wonder how much of a valid option this really is. Ars Technica has been running Windows 11 on some unsupported machines for a while, and concludes that while it's problem-free in day-to-day use, there's a big caveat you won't notice until it's time for a feature update. These won't install without going through the same hacks you needed to use when you first installed Windows 11 and manually downloading the update in question.
This essentially means you'll need to repeat the steps for doing a new unsupported Windows 11 install every time you want to upgrade. As we detail in our guide, that's relatively simple if your PC has Secure Boot and a TPM but doesn't have a supported processor. Make a simple registry tweak, download the Installation Assistant or an ISO file to run Setup from, and the Windows 11 installer will let you off with a warning and then proceed normally, leaving your files and apps in place. Without Secure Boot or a TPM, though, installing these upgrades in place is more difficult. Trying to run an upgrade install from within Windows just means the system will yell at you about the things your PC is missing. Booting from a USB drive that has been doctored to overlook the requirements will help you do a clean install, but it will delete all your existing files and apps. Andrew Cunningham at Ars Technica The only way around this that may work is yet another hack, which tricks the update into thinking it's installing Windows Server, which seems to have less strict requirements. This way, you may be able to perform an upgrade from one Windows 11 version to the next without losing all your data and requiring a fresh installation. It's one hell of a hack that no sane person should have to resort to, but it looks like it might be an inevitability for many. October 2025 is going to be a slaughter for Windows users, and as such, I wouldn't be surprised to see Microsoft postponing this date considerably to give the two-thirds of Windows users more time to move to Windows 11 through their regular hardware replacement cycles. I simply can't imagine Microsoft leaving the vast majority of its Windows users completely unprotected. Spare a thought for our Windows 10-using friends. They're going to need it.
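For reference, the "simple registry tweak" Cunningham mentions for the Secure-Boot-and-TPM-but-unsupported-CPU case is the one Microsoft itself has documented. As a .reg fragment it looks like this (apply at your own risk, and verify against Microsoft's current guidance first):

```reg
Windows Registry Editor Version 5.00

; Tells Windows Setup to allow an in-place upgrade despite an unsupported
; CPU or an older TPM (Secure Boot and at least TPM 1.2 still required).
[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```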
If you love exploit mitigations, you may have heard of a new system call named mseal landing into the Linux kernel's 6.10 release, providing a protection called "memory sealing". Beyond notes from the authors, very little information about this mitigation exists. In this blog post, we'll explain what this syscall is, including how it's different from prior memory protection schemes and how it works in the kernel to protect virtual memory. We'll also describe the particular exploit scenarios that mseal helps stop in Linux userspace, such as stopping malicious permissions tampering and preventing memory unmapping attacks. Alan Cao The goal of mseal is to, well, literally seal a part of memory and protect its contents from being tampered with. It makes regions of memory immutable so that while a program is running, its memory contents cannot be modified by malicious actors. This article goes into great detail about this new feature, explains how it works, and what it means for security in the Linux kernel. Excellent light reading for the weekend.
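To get a feel for the syscall, here's a minimal ctypes sketch. Treat the details as assumptions to verify: 462 is the x86-64 syscall number (check asm/unistd.h on other architectures), and on kernels older than 6.10 the call simply comes back with ENOSYS:

```python
# Sketch of calling mseal(2) via ctypes on Linux 6.10+ (syscall 462 on
# x86-64; number and availability are assumptions to verify locally).
import ctypes, errno, mmap as mmapmod

libc = ctypes.CDLL(None, use_errno=True)
SYS_mseal = 462  # x86-64; differs per architecture

def try_mseal():
    size = mmapmod.PAGESIZE
    # Map an anonymous page, then ask the kernel to seal it: after a
    # successful mseal, mprotect/munmap/mremap on the region fail with EPERM,
    # which is exactly the tampering the article describes mseal preventing.
    buf = mmapmod.mmap(-1, size)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    ret = libc.syscall(SYS_mseal, ctypes.c_void_p(addr),
                       ctypes.c_size_t(size), ctypes.c_ulong(0))
    if ret == 0:
        return "sealed"
    if ctypes.get_errno() == errno.ENOSYS:
        return "kernel too old for mseal"
    return "mseal failed"

print(try_mseal())
```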
One-third of payments to contractors training AI systems used by companies such as Amazon, Meta and Microsoft have not been paid on time after the Australian company Appen moved to a new worker management platform. Appen employs 1 million contractors who speak more than 500 languages and are based in 200 countries. They work to label photographs, text, audio and other data to improve AI systems used by the large tech companies and have been referred to as "ghost workers" - the unseen human labour involved in training systems people use every day. Josh Taylor at The Guardian It's crazy that if you peel back the layers on top of a lot of tools and features sold to us as "artificial intelligence", you'll quite often find underpaid workers doing the labour technology companies are telling us is done by computers running machine learning algorithms. The fact that so many of them are either deeply underpaid or, as in this case, not even paid at all, while companies like Google, Apple, Microsoft, and OpenAI are raking in ungodly amounts of money, is deeply disturbing. It's deeply immoral on so many levels, and just adds to the uncomfortable feeling people have with "AI". Again I'd like to reiterate that I'm not intrinsically opposed to the current crop of artificial intelligence tools - I just want these mega corporations to respect the rights of artists, and not use their works without permission to earn immense amounts of money. On top of that, I don't think it should be legal for them to lie about how their tools really work under the hood, and I want the workers who really do the work claimed to be done by "AI" to be properly paid. Is any of that really too much to ask? Fix these issues, and I'll stop putting quotation marks around "AI".
Windows 11, version 24H2 represents significant improvements to the already robust update foundation of Windows. With the latest version, you get reduced installation time, restart time, and central processing unit (CPU) usage for Windows monthly updates. Additionally, enhancements to the handling of feature updates further reduce download sizes for most endpoints by extending conditional downloads to include Microsoft Edge. Let's take a closer look at these advancements. Steve DiAcetis at the Windows IT Pro Blog Now this is the kind of stuff we want to see in new Windows releases. Updating Windows feels like a slow, archaic, and resource-intensive process, whereas on, say, my Fedora machines it's such an effortless, lightweight process I barely even notice it's happening. This is an area where Windows can make some huge strides that materially affect people - Windows updates are a meme - and it's great to see Microsoft working on this instead of shoving more ads onto Windows users' desktops. In this case, Microsoft managed to reduce installation time, make reboots faster, and lower CPU and RAM usage through a variety of measures roughly falling into one of three groups: improved parallel processing, faster and optimised reading of update manifests, and more optimal use of available memory. We're looking at some considerable improvements here, such as a 45% reduction in installation time, 15-25% less CPU usage, and more. Excellent work. On a related note, at the Qualcomm Snapdragon Summit, Microsoft also unveiled a number of audio improvements for Windows on ARM that will eventually also make their way to Windows on x86. I'm not exactly an expert on audio, but from what I understand the Windows audio stack is robust and capable, and what Microsoft announced today will improve the stack even further.
For instance, support for MIDI 2.0 is coming to Windows, with backwards compatibility for MIDI 1.0 devices and APIs, and Microsoft worked together with Yamaha and Qualcomm to develop a new USB Audio Class 2 Driver. In the company's blog post, Microsoft explains that the current USB Audio Class 2 driver in Windows is geared towards consumer audio applications, and doesn't fulfill the needs of professional audio engineers. This current driver does not support the standard professional software has standardised on - ASIO - forcing people to download custom, third-party kernel drivers to get this functionality. That's not great for anybody, and as such they're working on a new driver. The new driver will support the devices that our current USB Audio Class 2 driver supports, but will increase support for high-IO-count interfaces with an option for low-latency for musician scenarios. It will have an ASIO interface so all the existing DAWs on Windows can use it, and it will support the interface being used by Windows and the DAW application at the same time, like a few ASIO drivers do today. And, of course, it will handle power management events on the new CPUs. Pete Brown at the Dev Blogs The code for this driver will be published as open source on GitHub, so that anyone still opting to make a specialised driver can use Microsoft's code to see how things are done. That's a great move, and one that I think we'll be seeing more often from Microsoft. This is great news for audio professionals using Windows.
The processor in the Game Boy Advance, the ARM7TDMI, has a weird characteristic where the carry flag is set to a "meaningless value" after a multiplication operation. What this means is that software cannot and should not rely on the value of the carry flag after multiplication executes. It can be set to anything. Any value. 0, 1, a horse, whatever. This has been a source of memes in the emulator development community for a few years - people would frequently joke about how the implementation of the carry flag may as well be cpu.flags.c = rand() & 1;. And they had a point - the carry flag seemed to defy all patterns; nobody understood why it behaves the way it does. But the one thing we did know was that the carry flag seemed to be deterministic. That is, under the same set of inputs to a multiply instruction, the flag would be set to the same value. This was big news, because it meant that understanding the carry flag could give us key insight into how this CPU performs multiplication. And just to get this out of the way, the carry flag's behavior after multiplication isn't an important detail to emulate at all. Software doesn't rely on it. And if software did rely on it, then screw the developers who wrote that software. But the carry flag is a meme, and it's a really tough puzzle, and that was motivation enough for me to give it a go. Little did I know it'd take 3 years of on and off work. bean machine Please don't make me understand any of this.
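Some context that makes the puzzle at least partially tractable: the ARM7TDMI's multiplier is documented (in the data sheet's cycle counts) to consume the multiplier operand 8 bits per cycle, terminating early once the remaining bits are all zeros or all ones. The carry weirdness is leftover state from wherever the hardware happened to stop. A small sketch of that early-termination rule - my paraphrase of the data sheet, not code from the linked article:

```python
# Cycle count of an ARM7TDMI MUL, per the early-termination rule: the
# multiplier operand is processed 8 bits per cycle, and the multiplier
# stops once the bits that remain are all zeros or all ones (i.e. plain
# sign extension contributes nothing further).
def mul_cycles(multiplier: int) -> int:
    m = multiplier & 0xFFFFFFFF
    for cycles, shift in enumerate((8, 16, 24), start=1):
        rest = m >> shift
        if rest == 0 or rest == (0xFFFFFFFF >> shift):
            return cycles
    return 4

print(mul_cycles(0x00000005))  # 1: top 24 bits all zero
print(mul_cycles(0xFFFFFFFA))  # 1: top 24 bits all one (small negative)
print(mul_cycles(0x00123456))  # 3
print(mul_cycles(0x12345678))  # 4
```

Same inputs, same stopping point - which is consistent with the observation above that the "meaningless" carry flag is nonetheless deterministic.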
When I think about bhyve Live Migration, it's something I encounter almost daily in my consulting calls. VMware's struggles with Broadcom's licensing issues have been a frequent topic, even as we approach the end of 2024. It's surprising that many customers still feel uncertain about how to navigate this mess. While VMware has been a mainstay in enterprise environments for years, these ongoing issues make customers nervous. And they should be - it's hard to rely on something when even the licensing situation feels volatile. Now, as much as I'm a die-hard FreeBSD fan, I have to admit that FreeBSD still falls short when it comes to virtualization - at least from an enterprise perspective. In these environments, it's not just about running a VM; it's about having the flexibility and capabilities to manage workloads without interruption. Years ago, open-source solutions like KVM (e.g., Proxmox) and Xen (e.g., XCP-ng) introduced features like live migration, where you can move VMs between hosts with zero downtime. Even more recently, solutions like SUSE Harvester (utilizing KubeVirt for running VMs) have shown that this is now an essential part of any virtualization ecosystem. gyptazy FreeBSD has bhyve, but the part where it falls short, according to gyptazy, is the tool's lack of live migration. While competitors and alternatives allow for virtual machines to be migrated without downtime, bhyve users still need to shut down their VMs, interrupt all connections, and thus experience a period of downtime before everything is back up and running again. This is simply not acceptable in most enterprise environments, and as such, bhyve is not an option for most users of that type. Luckily for enterprise FreeBSD users, things are improving. Live migration of bhyve virtual machines is being worked on, and basic live migration is now supported, but with limitations. 
For instance, only virtual machines with a maximum of 3 GB of memory could be migrated live, but that limit has been raised in recent years to 13-14 GB, which is a lot more palatable. There are also still some open issues, memory corruption among them. Still, it's a massive feat to have live migration at all, and it seems to be improving every year. The linked article goes into much greater detail about where things stand, so if you're interested in keeping up with the latest progress regarding bhyve's live migration capabilities, it's a great place to start.
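For those wondering how live migration pulls off the zero-downtime trick at all, the standard approach is iterative pre-copy. This toy Python model is purely illustrative - it is not bhyve's implementation - but it shows why memory that keeps getting dirtied is the hard part:

```python
# Toy model of pre-copy live migration: memory is copied in rounds while
# the guest keeps running (and re-dirtying pages); only when the dirty set
# is small does the VM pause briefly for the final stop-and-copy.
import random

def migrate(src, dirty_rate=8, threshold=4, max_rounds=30):
    dst = {}
    dirty = set(src)                      # round 1: every page is "dirty"
    for _ in range(max_rounds):
        if len(dirty) <= threshold:
            break
        for page in dirty:                # copy while the guest runs...
            dst[page] = src[page]
        # ...meanwhile the guest dirties a few pages again
        dirty = {random.choice(list(src)) for _ in range(dirty_rate)}
        for page in dirty:
            src[page] += 1
    # stop-and-copy: pause the guest, transfer the last few dirty pages
    for page in dirty:
        dst[page] = src[page]
    return dst

memory = {page: 0 for page in range(64)}
dst = migrate(memory)
print(dst == memory)  # True: destination matches the guest's final state
```

The higher the guest's dirty rate relative to copy bandwidth, the longer convergence takes - which is one intuition for why implementations start out with tight memory-size limits like the one above.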
At the Snapdragon Summit today, Qualcomm is officially announcing the Snapdragon 8 Elite, its flagship SoC for smartphones. The Snapdragon 8 Elite is a major upgrade from its predecessor, with improvements across the board. Qualcomm is also changing its naming scheme for its flagship SoCs from Snapdragon 8 Gen X to Snapdragon 8 Elite. Pradeep Viswanathan at Neowin It's wild - but not entirely unexpected - how we always seem to end up in a situation in technology where crucial components, such as the operating system or processor, are made by one, or at most two, companies. While there are a few other smartphone system-on-a-chip vendors, they're mostly relegated to low-end devices, and can't compete on the high end, where the money is, at all. It's sadness. Speaking of our mobile SoC overlords, they seem to be in a bit of a pickle when it comes to their core business of, well, selling SoCs. In short, Qualcomm bought Nuvia to use its technology to build the current crop of Snapdragon X Elite and Pro laptop chips. According to ARM, Qualcomm does not have an ARM license to do so, and as such, a flurry of lawsuits between the two companies followed. ARM is now cancelling certain Qualcomm ARM licenses, arguing specifically that its laptop Snapdragon X chips should be destroyed. What we're looking at here is two industry giants engaged in very public, and very expensive, contract negotiations, using the legal system as their arbiter. This will eventually fizzle out into a new agreement between the two companies with renewed terms and conditions - and flows of money - but until that dust has settled, be prepared for an endless flurry of doomerist news items about this story. As for us normal people? We don't have to worry one bit about this legal nonsense. It's not like we have any choice in smartphone chips anyway.
I commented on Lobsters that /tmp is usually a bad idea, which caused some surprise. I suppose /tmp security bugs were common in the 1990s when I was learning Unix, but they are pretty rare now so I can see why less grizzled hackers might not be familiar with the problems. I guess that's some kind of success, but sadly the fixes have left behind a lot of scar tissue because they didn't address the underlying problem: /tmp should not exist. Tony Finch Not only is this an excellent, cohesive, and convincing argument against the existence of /tmp, it also contains some nice historical context as to why things are the way they are. Even without the arguments against /tmp, though, it just seems entirely more logical, cleaner, and more sensible to have /tmp directories per user, in per-user locations. While I never would've been able to explain the problem as eloquently as Finch does, it just feels wrong to have every user resort to the exact same directory for temporary files, like a complex confluence of bad decisions you just know is going to cause problems, even if you don't quite understand the intricate interplay.
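To make the hazard concrete, here's a minimal Python sketch of the difference between the classic shared-/tmp pattern, the mitigations bolted on over the years, and the per-user approach Finch advocates. The `myapp` file names are made-up examples, and the fallback when `XDG_RUNTIME_DIR` isn't set is an assumption for illustration:

```python
import os
import tempfile

# The classic hazard: a predictable name in the shared, world-writable /tmp.
# Any local user can pre-create or symlink /tmp/myapp.pid before we do,
# redirecting our write somewhere we didn't intend.
unsafe_path = "/tmp/myapp.pid"  # predictable and shared: avoid

# The scar-tissue mitigation: create the file with O_EXCL and a random name,
# which tempfile.mkstemp does for us.
fd, safe_path = tempfile.mkstemp(prefix="myapp-")
os.close(fd)
os.unlink(safe_path)

# The underlying fix: put temporary files in a directory that is per-user to
# begin with, such as XDG_RUNTIME_DIR (typically /run/user/<uid>, owned by
# and private to the current user). Fall back to a fresh private directory
# if it isn't set.
runtime = os.environ.get("XDG_RUNTIME_DIR")
private_dir = runtime if runtime and os.path.isdir(runtime) else tempfile.mkdtemp()
fd, private_path = tempfile.mkstemp(dir=private_dir, prefix="myapp-")
os.close(fd)
os.unlink(private_path)
```

With a per-user directory, the entire class of "someone else got to the name first" races disappears, because no other user can create entries there at all.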
Apple announced a trio of major new hearing health features for the AirPods Pro 2 in September, including clinical-grade hearing aid functionality, a hearing test, and more robust hearing protection. All three will roll out next week with the release of iOS 18.1, and they could mark a watershed moment for hearing health awareness. Apple is about to instantly turn the world's most popular earbuds into an over-the-counter hearing aid. Chris Welch at The Verge Rightfully so, most of us here have a lot of issues with the major technology companies and the way they do business, but every now and then, even they accidentally stumble into doing something good for the world. AirPods are already a success story, and gaining access to hearing aid-level features at their price point is an absolute game changer for a lot of people with hearing issues - and for a lot of people who don't even yet know they have hearing issues in the first place. If you have people in your life with hearing issues, or whom you suspect may have hearing issues, gifting them AirPods this Christmas season may just be a perfect gift. Yes, I too think hearing aids should be a thing nobody has to pay for and which should just be part of your country's universal healthcare coverage - assuming you have such a thing - but this is not a bad option as a replacement.
System76, purveyor of Linux computers, distributions, and now also desktop environments, has just unveiled its latest top-end workstation, but this time, it's not an x86 machine. They've been working together with Ampere to build a workstation based around Ampere's Altra ARM processors: the Thelio Astra. Phoronix, fine purveyor of Linux-focused benchmarks, was lucky enough to benchmark one, and has more information on the new workstation. System76 designed the Thelio Astra in collaboration with Ampere Computing. The System76 Thelio Astra makes use of Ampere Altra processors up to the Ampere Altra Max 128-core ARMv8 processor that in turn supports 8-channel DDR4 ECC memory. The Thelio Astra can be configured with up to 512GB of system memory, choice of Ampere Altra processors, up to NVIDIA RTX 6000 Ada Generation graphics, dual 10 Gigabit Ethernet, and up to 16TB of PCIe 4.0 NVMe SSD storage. System76 designed the Thelio Astra ARM64 workstation to be complemented by NVIDIA graphics given the pervasiveness of NVIDIA GPUs/accelerators for artificial intelligence and machine learning workloads. The Astra is contained within System76's custom-designed, in-house-manufactured Thelio chassis. Pricing on the System76 Thelio Astra will start out at $3,299 USD with the 64-core Ampere Altra Q64-22 processor, 2 x 32GB of ECC DDR4-3200 memory, 500GB NVMe SSD, and NVIDIA A400 graphics card. Michael Larabel This pricing is actually remarkably favourable considering the hardware you're getting. System76 and its employees have been dropping hints for a while now that they were working on an ARM variant of their Thelio workstation, and knowing some of the prices others are asking, I definitely expected the base price to hit $5000, so this is a pleasant surprise. With the Altra processors getting a tiny bit long in the tooth, you do notice some oddities here, specifically the DDR4 RAM instead of the more modern DDR5, as well as the lack of PCIe 5.0.
The problem is that while the Altra has a successor in the AmpereOne processor, its availability is quite limited, and most of them probably end up in datacentres and expensive servers for big tech companies. This newer variant does come with DDR5 and PCIe 5.0 support, but doesn't yet have a lower core count version, so even if it were readily available it might simply push the price too far up. Regardless, the Altra is still a ridiculously powerful processor, and at anywhere between 64 and 128 cores, it's got power to spare. The Thelio Astra will be available come 12 November, and while I would perform a considerable number of eyebrow-raising acts to get my hands on one, it's unlikely System76 will ship one over for a review. Edit: here's an excellent and detailed reply to our Mastodon account from an owner of an Ampere Altra workstation, highlighting some of the challenges related to your choice of GPU. Required reading if you're interested in a machine like this.
It's no secret that a default Windows installation is... hefty. In more ways than one, Windows is a bit on the obese side of the spectrum, from taking up a lot of disk space, to demanding steep system requirements (artificial or not), to coming with a lot of stuff preinstalled not everyone wants to have to deal with. As such, there's a huge cottage industry of applications, scripts, modified installers, custom ISOs, and more, that try to slim Windows down to a more manageable size. As it turns out, even Microsoft itself wants in on this action. The company that develops and sells Windows also provides a Windows debloat script. Over on GitHub, Microsoft maintains a repository of scripts to simplify setting up Windows as a development environment, and amid the collection of scripts we find RemoveDefaultApps.ps1, a PowerShell script to "uninstall unnecessary applications that come with Windows out of the box". The script is about two years old, and as such it includes a few applications no longer part of Windows, but looking through the list is a sad reminder of the kind of junk Windows comes with, most notably mobile casino games for children like Bubble Witch and March of Empires, but also other nonsense like the Mixed Reality Portal or Duolingo. It also removes something called ActiproSoftwareLLC, which is apparently a set of third-party, non-Microsoft UI controls for WPF? Which comes preinstalled with Windows sometimes? What is even happening over there? The entire set of scripts makes use of Chocolatey wrapped in Boxstarter, "which is a wrapper for Chocolatey and includes features like managing reboots for you", because of course, the people at Microsoft working on Windows can't be bothered to fix application management and required reboots themselves. Silly me, expecting Microsoft's Windows developers to address these shortcomings internally instead of using third-party tools.
The repository seems to be mostly defunct, but the fact it even exists in the first place is such a damning indictment of the state of Windows. People keep telling us Windows is fine, but if even Microsoft itself needs to resort to scripts and third-party tools to make it usable, I find it hard to take claims of Windows being fine seriously in any way, shape, or form.
In early 2022 I got several Sun SPARC servers for free off of a FreeCycle ad: I was recently called out for not providing any sort of update on those devices... so here we go! Sidneys1.com Some information on booting old-style SPARC machines, as well as pretty pictures. Nice palate-cleanser if you've had to deal with something unpleasant this weekend. This world would be a better place if we all had our own Sun machines to play with when we get sad.
I don't think most people realize how Firefox and Safari depend on Google for more than just revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it's a hard image format to implement!), much of Safari's cryptography (from BoringSSL), Firefox's 2D renderer (Skia)... the list goes on. Much of Firefox's security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with Control Flow Integrity) is directly inspired by Chromium's architecture. Rohan "Seirdy" Kumar Definitely an interesting angle on the browser debate I hadn't really stopped to think about before. The argument is that while Chromium's dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let's face it, it's not like they're in a great position to do so. I'm not saying I buy the argument, but it's an argument nonetheless. I honestly wouldn't mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine Chromium being run by an advertising company, we all know where their focus lies, and it ain't on us as users. I'm still firmly on the side of less Chromium, please.
Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it's been spotted automatically turning off the popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense - Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it's not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers. Michelle Ehrhardt at LifeHacker Here's the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will follow suit in dropping Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users' trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-hole, and enjoy some of the best ad blocking you're ever going to get. It's definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you're feeling generous, set up Pi-holes for your parents, friends, and relatives. It's worth it to make their browsing experience faster, safer, and more pleasant. And once again I'd like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules.
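For the curious, the reason network-level blocking is so hard to fight is that it works as a DNS sinkhole: the resolver answers queries for blocklisted domains with an unroutable address, so the ad request dies before any browser - or browser extension policy - is involved. Here's a rough conceptual sketch of that decision logic in Python (the domains and the upstream resolver are made-up placeholders, not Pi-hole's actual implementation):

```python
# A DNS sinkhole keeps a set of blocked hostnames (often loaded from
# hosts-format blocklists) and answers queries for them with 0.0.0.0,
# while forwarding everything else to a real upstream resolver.

BLOCKLIST_HOSTS = """
# hosts-format blocklist (these domains are made-up examples)
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
"""

def parse_blocklist(text):
    """Return the set of hostnames named in a hosts-format blocklist."""
    blocked = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        parts = line.split()
        if len(parts) >= 2:          # "<address> <hostname> [hostname...]"
            blocked.update(parts[1:])
    return blocked

def resolve(hostname, blocked, upstream):
    """Sinkhole blocked names; delegate everything else upstream."""
    if hostname in blocked:
        return "0.0.0.0"
    return upstream(hostname)

blocked = parse_blocklist(BLOCKLIST_HOSTS)
fake_upstream = lambda h: "93.184.216.34"  # stand-in for a real resolver
print(resolve("ads.example.com", blocked, fake_upstream))  # 0.0.0.0
print(resolve("osnews.com", blocked, fake_upstream))       # 93.184.216.34
```

Because every device on the network uses the sinkhole as its resolver, this blocks ads in apps and smart TVs too, not just in the browser - which is exactly why it's so much harder for advertisers to route around.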
It's not like display ads are particularly profitable anyway, so I'd much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room - think IRC, but more modern, and fully end-to-end encrypted - is now up and running and accessible to all OSNews Patreons as well.
Something odd happened to Qualcomm's Snapdragon Dev Kit, an $899 mini PC powered by Windows 11 and the company's latest Snapdragon X Elite processor. Qualcomm decided to abruptly discontinue the product, refund all orders (including for those with units on hand), and cease its support, claiming the device "has not met our usual standards of excellence." Taras Buria at Neowin The launch of the Snapdragon X Pro and Elite chips seems to have mostly progressed well, but there have been a few hiccups for those of us who want ARM but aren't interested in Windows and/or laptops. There's this story, which is just odd all around, with an announced, sold, and even shipped product suddenly taken off the market, which I think at this point was the only non-laptop device with an X Elite or Pro chip. If you are interested in developing for Qualcomm's new platform, but don't want a laptop, you're out of luck for now. Another note is that the SoC SKU in the Dev Kit was clocked a tiny bit higher than the laptop SKUs, which perhaps plays a role in its cancellation. The bigger hiccup is the problematic Linux bring-up, which is posing many more problems and is taking a lot longer than Qualcomm very publicly promised it would take. For now, if you want to run Linux on a Snapdragon X Elite or Pro device, you're going to need a custom version of your distribution of choice, tailored to a specific laptop model, using a custom kernel. It's an absolute mess and basically means that at this point in time, months and months after release, buying one of these to run Linux on them is a bad idea. Quite a few important bits will arrive with Linux 6.12 to supposedly greatly improve the experience, but seeing is believing. Qualcomm made a lot of grandiose promises about Linux support, and they simply haven't delivered.
I want to take advantage of Go's concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD: Single Instruction, Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However (as far as I can tell), Go's compiler does not utilize SIMD, and I could not find a general purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform's instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format? Jacob Ray Pehringer Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don't fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan 9 the operating system, which I find a mildly entertaining ruse to subject you to.
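If the term SIMD is new to you: the idea is that the CPU packs several values into one wide register and applies a single instruction to all of its "lanes" simultaneously. Real SIMD only happens in hardware (AVX2, for example, adds eight 32-bit integers per 256-bit register in one instruction), but the lane-wise semantics can be sketched conceptually in plain Python - this is purely an illustration of the model, not how any SIMD package is implemented:

```python
# Conceptual model of SIMD: one "instruction" operates on a fixed number
# of lanes at once, and long arrays are processed one register-width
# chunk at a time.

LANES = 8  # e.g. eight 32-bit integers fit in one 256-bit AVX2 register

def simd_add(a, b):
    """One 'instruction': add two 8-lane vectors element-wise,
    with 32-bit wraparound like the hardware would do."""
    assert len(a) == len(b) == LANES
    return [(x + y) & 0xFFFFFFFF for x, y in zip(a, b)]

def vector_add(xs, ys):
    """Add two equal-length arrays one register-width chunk at a time.
    (Assumes the length is a multiple of LANES; real code would also
    handle the leftover 'tail' elements with scalar instructions.)"""
    out = []
    for i in range(0, len(xs), LANES):
        out.extend(simd_add(xs[i:i + LANES], ys[i:i + LANES]))
    return out

xs = list(range(16))
ys = [10] * 16
print(vector_add(xs, ys))
```

The speedup comes from the loop in `vector_add` running LANES times fewer iterations, with each iteration doing the same work in a single hardware instruction - which is exactly the kind of loop a thin SIMD abstraction layer would target.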
We've pulled together all kinds of resources to create a comprehensive guide to installing and upgrading to Windows 11. This includes advice and some step-by-step instructions for turning on officially required features like your TPM and Secure Boot, as well as official and unofficial ways to skirt the system-requirement checks on "unsupported" PCs, because Microsoft is not your parent and therefore cannot tell you what to do. There are some changes in the 24H2 update that will keep you from running it on every ancient system that could run Windows 10, and there are new hardware requirements for some of the operating system's new generative AI features. We've updated our guide with everything you need to know. Andrew Cunningham at Ars Technica In the before time, the things you needed to do to make Windows somewhat usable mostly came down to installing applications replicating features other operating systems had been enjoying for decades, but as time went on and Windows 10 came out, users now also had to deal with disabling a ton of telemetry, deleting preinstalled adware, and dodging the various dark patterns around Edge, and more. You have to wonder if it was all worth it, but alas, Windows 10 at least looked like Windows, if you squinted. With Windows 11, Microsoft really ramped up the steps users have to take to make it usable. There's all of the above, but now you also have to deal with an ever-increasing number of ads, even more upsells and Edge dark patterns, even more data gathering, and the various hacks you have to employ to install it on perfectly fine and capable hardware. With Windows 10's support ending next year, a lot of users are in a rough spot, since they can't install Windows 11 without resorting to hacks, and they can't keep using Windows 10 if they want to keep getting updates. And here comes 24H2, which makes it all even worse.
Not only have various avenues to make Windows 11 installable on capable hardware been closed, it also piles on a whole bunch of "AI" garbage, and accompanying upsells and dark patterns, that Windows users are going to have to deal with. Who doesn't want Copilot regurgitating nonsense in their operating system's search tool, or have Paint strongly suggest it will "improve" your quick doodle to illustrate something to a friend with that unique AI Style™ we all love and enjoy so much? Stay strong out there, Windows folks. Maybe it'll get better. We're rooting for you.
If you read my previous article on DOS memory models, you may have dismissed everything I wrote as "legacy cruft from the 1990s that nobody cares about any longer". After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, on the way, the amount of memory that these computers can leverage has grown orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory while today's 64-bit machines can theoretically access 16EB. All of this growth has been in service of ever-growing programs. But... even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms? Let's try to answer those questions to find some very surprising answers. But first, some theory. Julio Merino It's not quite weekend yet, but I'm still calling this some light reading for the weekend.
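The quoted figures are easy to verify: address space is 2 raised to the number of address bits, in bytes. A quick arithmetic check, plus a peek at the pointer-size cost Merino's article explores (the `struct` line simply reports whatever the machine running it uses):

```python
import struct

# Address space = 2**(address bits) bytes.
MIB = 2 ** 20        # one mebibyte
EIB = 2 ** 60        # one exbibyte

space_8086 = 2 ** 20     # the 8086's 20-bit address bus
space_64bit = 2 ** 64    # a full (theoretical) 64-bit address space

print(space_8086 // MIB)   # 1  -> the 8086's 1MB
print(space_64bit // EIB)  # 16 -> the article's 16EB

# The flip side: every pointer in a 64-bit program is twice the size of a
# 32-bit one, bloating pointer-heavy data structures and cache footprints.
pointer_bytes = struct.calcsize("P")  # pointer size on this machine
print(pointer_bytes)
```

That doubling of pointer size is the crux of the "is 64-bit always a win?" question: more addressable memory, but fewer pointers per cache line.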
Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts. In Android 15, some of the most notable involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google's own docs and blogs and various news sites' reports. Kevin Purdy at Ars Technica It's a welcome collection of changes and features to better align Android's theft and personal privacy protections with how thieves steal phones in this day and age. I'm not sure I understand all of them, though - the Private Space, where you can drop applications to lock them behind an additional pin code, confuses me, since everyone can see it's there. I assumed Private Space would also give people in vulnerable positions - victims of abuse, journalists, dissidents, etc. - the option to truly hide parts of their life to protect their safety, but it doesn't seem to work that way. Android 15 will also use "AI" to recognise when a device is yanked out of your hands and lock it instantly, which is a great use case for "AI" that actually benefits people. Of course, it will be even more useful once thieves are aware this feature exists, so that they won't even try to steal your phone in the first place, but since this is Android, it'll be a while before Android 15 makes its way to enough users for it to matter.
Earlier this year we talked about Huawei's HarmonyOS NEXT, which is most likely the only serious competitor to Android and iOS in the world. HarmonyOS started out as a mere Android skin, but over time Huawei invested heavily into the platform to expand it into a full-blown, custom operating system with a custom programming language, and it seems the company is finally ready to take the plunge and release HarmonyOS NEXT into the wild. It's indicated that HarmonyOS made up 17% of China's smartphone market in Q1 of 2024. That's a significant amount of potential devices breaking off from Android in a market dominated by either it or iOS. HarmonyOS NEXT is set to begin rolling out to Huawei devices next week. The OS will first come to the Mate 60, Mate X5, and MatePad Pro on October 15. Andrew Romero at 9To5Google Huawei has been hard at work making sure there's no 'application gap' for people using HarmonyOS NEXT, claiming it has 10,000 applications ready to go that cover "99.9%" of their users' use cases. That's quite impressive, but of course, we'll have to wait and see if the numbers line up with the reality on the ground for Chinese consumers. Here in the west HarmonyOS NEXT is unlikely to gain any serious traction, but that doesn't mean I would mind taking a look at the platform if at all possible. It's honestly not surprising the most serious attempt at creating a third mobile ecosystem is coming from China, because here in the west the market is so grossly rusted shut we're going to be stuck with Android and iOS until the day I die.
Engineers at Google started work on a new Terminal app for Android a couple of weeks ago. This Terminal app is part of the Android Virtualization Framework (AVF) and contains a WebView that connects to a Linux virtual machine via a local IP address, allowing you to run Linux commands from the Android host. Initially, you had to manually enable this Terminal app using a shell command and then configure the Linux VM yourself. However, in recent days, Google began work on integrating the Terminal app into Android as well as turning it into an all-in-one app for running a Linux distro in a VM. Mishaal Rahman at Android Authority There already are a variety of ways to do this today, but having it as a supported feature implemented by Google is very welcome. This is also going to greatly increase the number of spammy articles and lazy YouTube videos telling you how to "run Ubuntu on your phone", which I'm not particularly looking forward to.
Next up in my backlog of news to cover: the US Department of Justice's proposed remedies for Google's monopolistic abuse. Now that Judge Amit Mehta has found Google is a monopolist, lawyers for the Department of Justice have begun proposing solutions to correct the company's illegal behavior and restore competition to the market for search engines. In a new 32-page filing (included below), they said they are considering both behavioral and structural remedies. That covers everything from applying a consent decree to keep an eye on the company's behavior to forcing it to sell off parts of its business, such as Chrome, Android, or Google Play. Richard Lawler at The Verge While I think it would be a great idea to break Google up, such an action taken in a vacuum seems to be rather pointless. Say Google is forced to spin off Android into a separate company - how is that relatively small Android, Inc. going to compete with the behemoth that is Apple and its iOS, to which such restrictions do not apply? How is Chrome Ltd. going to survive Microsoft's continued attempts at forcing Edge down our collective throats? Being a dedicated browser maker is working out great for Firefox, right? This is the problem with piecemeal, retroactive measures to try and "correct" a market position that you have known for years is being abused - sure, this would knock Google down a peg, but other, even larger megacorporations like Apple or Microsoft will be the ones to benefit most, not any possible new companies or startups. This is exactly why a market-wide, equally-applied set of rules and regulations, like the European Union's Digital Markets Act, is a far better and more sustainable approach. Unless similar remedies are applied to Google's massive competitors, these Google-specific remedies will most likely only make things worse, not better, for the American consumer.
Internet Archive's "The Wayback Machine" has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. "Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!," reads a JavaScript alert shown on the compromised archive.org site. Lawrence Abrams at Bleeping Computer To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take down the site while strengthening everything up. It seems the attackers have no real motivation, other than the fact they can, but it's interesting, shall we say, that the Internet Archive has been under legal assault by big publishers for years now, too. I highly doubt the two are related in any way, but it's an interesting note nonetheless. I'm still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this kind of feels like throwing Molotov cocktails at a local library - there's literally not a single reason to do so, and the only people you're going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this is just an asshole, no ifs or buts about it.