Feed osnews OSnews


Link https://www.osnews.com/
Feed http://www.osnews.com/files/recent.xml
Updated 2025-07-19 01:02
OpenAI beta tests SearchGPT search engine
Normally I'm not that interested in reporting on news coming from OpenAI, but today is a little different - the company launched SearchGPT, a search engine that's supposed to rival Google, but at the same time, they're also kind of not launching a search engine that's supposed to rival Google. What? We're testing SearchGPT, a prototype of new search features designed to combine the strength of our AI models with information from the web to give you fast and timely answers with clear and relevant sources. We're launching to a small group of users and publishers to get feedback. While this prototype is temporary, we plan to integrate the best of these features directly into ChatGPT in the future. If you're interested in trying the prototype, sign up for the waitlist. OpenAI website Basically, before adding a more traditional web search-like feature set to ChatGPT, the company is first breaking it out into a separate, temporary product that users can test, before parts of it are integrated into OpenAI's main ChatGPT product. It's an interesting approach, and with just how stupidly popular and hyped ChatGPT is, I'm sure they won't have any issues assembling a large enough pool of testers. OpenAI claims SearchGPT will be different from, say, Google or AltaVista, by employing a conversation-style interface with real-time results from the web. Sources for search results will be clearly marked - good - and additional sources will be presented in a sidebar. True to the ChatGPT-style user interface, you can keep "talking" after hitting a result to refine your search further. I may perhaps betray my still relatively modest age, but do people really want to "talk" to a machine to search the web? Any time I've ever used one of these chatbot-style user interfaces - including ChatGPT - I find them cumbersome and frustrating, like they're just adding an obtuse layer between me and the computer, and that I'd rather just be instructing the computer directly.
Why try and verbally massage a stupid autocomplete into finding a link to an article I remember from a few days ago, instead of just typing in a few quick keywords? I am more than willing to concede I'm just out of touch with what people really want, so maybe this really is the future of search. I hope I can just always disable nonsense like this and just throw keywords at the problem.
Two threads, one core: how simultaneous multithreading works under the hood
Simultaneous multithreading (SMT) is a feature that lets a processor handle instructions from two different threads at the same time. But have you ever wondered how this actually works? How does the processor keep track of two threads and manage its resources between them? In this article, we're going to break it all down. Understanding the nuts and bolts of SMT will help you decide if it's a good fit for your production servers. Sometimes, SMT can turbocharge your system's performance, but in other cases, it might actually slow things down. Knowing the details will help you make the best choice. Abhinav Upadhyay Some light reading for the (almost) weekend.
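As a small illustration of what SMT looks like from the software side: on Linux, each logical CPU lists its core-mates in sysfs under /sys/devices/system/cpu/cpuN/topology/thread_siblings_list, and a few lines of Python can tell you whether SMT is active. This is a minimal sketch; the sysfs paths are real, but the sample values below are assumed (a hypothetical 4-core/8-thread chip) so the sketch runs anywhere.

```python
# Minimal sketch: detect SMT sibling groups from Linux sysfs topology data.
# On a real system you would read these strings from
# /sys/devices/system/cpu/cpu*/topology/thread_siblings_list; here we use
# sample values (an assumption) so the sketch is self-contained.

def parse_siblings(s):
    """Expand a sysfs CPU list like '0,4' or '0-3' into a sorted tuple of CPU ids."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.extend(range(lo, hi + 1))
        else:
            cpus.append(int(part))
    return tuple(sorted(cpus))

# Hypothetical 4-core/8-thread layout: logical CPU n shares a core with n+4.
sample = {cpu: f"{cpu % 4},{cpu % 4 + 4}" for cpu in range(8)}
cores = {parse_siblings(s) for s in sample.values()}
smt_enabled = any(len(core) > 1 for core in cores)
print(cores)        # four physical cores, each with two SMT siblings
print(smt_enabled)  # True on this sample layout
```

If every sibling group has length one, SMT is either unsupported or disabled in firmware, which is exactly the knob you'd flip after deciding it hurts your workload.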
Intel: Raptor Lake faults excessive voltage from microcode, fix coming in August
What started last year as a handful of reports about instability with Intel's Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability impediment in decades, Intel has been under increasing pressure to figure out the root cause of the issue and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel's latest saga is about to reach its end, as today the company has announced that they've found the cause of the issue, and will be rolling out a microcode fix next month to resolve it. Ryan Smith at AnandTech It turns out the root cause of the problem is "elevated operating voltages", caused by a buggy algorithm in Intel's own microcode. As such, it's at least fixable through a microcode update, which Intel says it will ship sometime mid-August. AnandTech, my one true source for proper reporting on things like this, is not entirely satisfied, though, as they state microcode is often used to just cover up the real root cause that's located much deeper inside the processor, and as such, Intel's explanation doesn't actually tell us very much at all. Quite coincidentally, Intel also experienced a manufacturing flaw with a small batch of very early Raptor Lake processors. An "oxidation manufacturing flaw" found its way into a small number of early Raptor Lake processors, but the company claims it was caught early and shouldn't be an issue any more. Of course, for anyone experiencing issues with their expensive Intel processors, this will linger in the back of their minds, too. Not exactly a flawless launch for Intel, but it seems its main competitor, AMD, is also experiencing issues, as the company has delayed the launch of its new Ryzen 9000 chips due to quality issues.
I'm not at all qualified to make any relevant statements about this, but with the recent launch of the Snapdragon X Elite and Plus chips, these issues couldn't come at a worse time for Intel and AMD.
FreeBSD as a platform for your future technology
Choosing an operating system for new technology can be crucial for the success of any project. Years down the road, this decision will continue to inform the speed and efficiency of development. But should you build the infrastructure yourself or rely on a proven system? When faced with this decision, many companies have chosen, and continue to choose, FreeBSD. Few operating systems offer the immediate high performance and security of FreeBSD, areas where new technologies typically struggle. Having a stable and secure development platform reduces upfront costs and development time. The combination of stability, security, and high performance has led to the adoption of FreeBSD in a wide range of applications and industries. This is true for new startups and larger established companies such as Sony, Netflix, and Nintendo. FreeBSD continues to be a dependable ecosystem and an industry-leading platform. FreeBSD Foundation A FreeBSD marketing document highlighting FreeBSD's strengths is, of course, hardly a surprise, but considering it's fighting what you could generously call an uphill battle against the dominance of Linux, it's still interesting to see what, exactly, FreeBSD highlights as its strengths. It should come as no surprise that its licensing model - the simple BSD license - is mentioned first and foremost, since it's a less cumbersome license to deal with than something like the GPL. It's a philosophical debate we won't be concluding any time soon, but the point still stands. FreeBSD also highlights that it's apparently quite easy to upstream changes to FreeBSD, making sure that changes benefit everyone who uses FreeBSD. While I can't vouch for this, it does seem reasonable to assume that it's easier to deal with the integrated, one-stop shop that is FreeBSD, compared to the hodge-podge of hundreds and thousands of groups whose software together makes up a Linux system.
Like I said, this is a marketing document, so do keep that in mind, but I still found it interesting.
You can contribute to KDE with non-C++ code
Not everything made by KDE uses C++. This is probably obvious to some people, but it's worth mentioning nevertheless. And I don't mean this as just "well duh, KDE uses QtQuick, which is written with C++ and QML". I also don't mean this as "well duh, Qt has a lot of bindings to other languages". I mean explicitly "KDE has tools written primarily in certain languages and specialized formats". Thiago Sueto If you ever wanted to contribute to KDE but weren't sure if your preferred programming language or tools were relevant, this is a great blog post detailing how you can contribute if you are familiar with any of the following: Python, Ruby, Perl, Containerfile/Docker/Podman, HTML/SCSS/JavaScript, WebAssembly, Flatpak/Snap, CMake, Java, and Rust. A complex, large project like KDE needs people with a wide variety of skills, so it's definitely not just C++. An excellent place to start.
New Samsung phones block sideloading by default
The assault on a user's freedom to install whatever they want on what is supposed to be their phone continues. This time, it's Samsung adding an additional blocker to users installing applications from outside the Play Store and its own mostly useless Galaxy Store. Technically, Android already blocks sideloading by default at an operating system level. The permission that's needed to silently install new apps without prompting the user, INSTALL_PACKAGES, can only be granted to preinstalled app stores like the Google Play Store, and it's granted automatically to apps that request it. The permission that most third-party app stores end up using, REQUEST_INSTALL_PACKAGES, has to be granted explicitly by the user. Even then, Android will prompt the user every time an app with this permission tries to install a new app. Samsung's Auto Blocker feature takes things a bit further. The feature, first introduced in One UI 6.0, fully blocks the installation of apps from unauthorized sources, even if those sources were granted the REQUEST_INSTALL_PACKAGES permission. Mishaal Rahman I'm not entirely sure why Samsung felt the need to add an additional, Samsung-specific blocking mechanism, but at least for now, you can turn it off in the Settings application. This means that in order to install an application from outside of the Play Store and the Galaxy Store on brand new Samsung phones - the ones shipping with One UI 6.1.1 - you need to both grant the regular Android permission to do so and turn off this nag feature. Having two variants of every application on your Samsung phone wasn't enough, apparently.
Google won’t be deprecating third-party cookies from Chrome after all
This story just never ever ends. After delays, changes in plans, and more delays, we now have more changed plans. After years of stalling, Google has now announced it is, in fact, not going to deprecate third-party cookies in Chrome by default. In light of this, we are proposing an updated approach that elevates user choice. Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they'd be able to adjust that choice at any time. We're discussing this new path with regulators, and will engage with the industry as we roll this out. Anthony Chavez Google remains unclear about what, exactly, users will be able to choose between. The consensus seems to be that users will be able to choose between retaining third-party cookies and turning them off, but that's based on a statement by the British Competition and Markets Authority, and not on a statement from Google itself. It seems reasonable to assume the CMA knows what it's talking about, but with a company like Google you never know what's going to happen tomorrow, let alone a few months from now. While both Safari and Firefox made this move ages ago, it's taking Google and Chrome a lot longer to deal with this issue, because Google needs to find different ways of tracking you that don't use third-party cookies. Google's own testing with Privacy Sandbox, Chrome's sarcastically-named alternative to third-party cookies, shows that it seems to perform reasonably well, which should definitely raise some alarm bells about just how private it really is. Regardless, I doubt this saga will be over any time soon.
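For context on what technically makes a cookie "third-party": it's a cookie set for a domain other than the site you're visiting, and for a browser to send it in cross-site requests at all, it has to carry the SameSite=None and Secure attributes. A quick sketch using Python's standard http.cookies module; the tracker_id name and the ads.example.com domain are made-up illustrations, not anything from Google's announcement.

```python
from http.cookies import SimpleCookie

# Build the kind of Set-Cookie header a third-party tracker relies on.
# SameSite=None is what allows cross-site sending, and browsers require
# it to be paired with Secure (HTTPS only).
cookie = SimpleCookie()
cookie["tracker_id"] = "abc123"                 # hypothetical tracking id
cookie["tracker_id"]["domain"] = "ads.example.com"
cookie["tracker_id"]["path"] = "/"
cookie["tracker_id"]["secure"] = True
cookie["tracker_id"]["samesite"] = "None"

header = cookie["tracker_id"].OutputString()
print("Set-Cookie:", header)
```

Deprecating third-party cookies essentially means the browser stops honoring cookies like this in cross-site contexts, regardless of what the server asks for.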
No, Southwest Airlines is not still using Windows 3.1
A story that's been persistently making the rounds since the CrowdStrike event is that while several airline companies were affected in one way or another, Southwest Airlines escaped the mayhem because they were still using Windows 3.1. It's a great story that fits the current zeitgeist about technology and its role in society, underlining that what is claimed to be technological progress is nothing but trouble, and that it's better to stick with the old. At the same time, anybody who dislikes Southwest Airlines can point and laugh at the bumbling idiots working there for still using Windows 3.1. It's like a perfect storm of technology news click- and ragebait. Too bad the whole story is nonsense. But how could that be? It's widely reported by reputable news websites all over the world, shared on social media like a strain of the common cold, and nobody seems to question it or doubt the veracity of the story. It seems that Southwest Airlines running on an operating system from 1992 is a perfectly believable story to just about everyone, so nobody is questioning it or wondering if it's actually true. Well, I did, and no, it's not true. Let's start with the actual source of the claim that Southwest Airlines was unaffected by CrowdStrike because they're still using Windows 3.11 for large parts of their primary systems. This claim is easily traced back to its origin - a tweet by someone called Artem Russakovskii, stating that the reason Southwest is not affected is because "they still run on Windows 3.1". This tweet formed the basis for virtually all of the stories, but it contains no sources, no links, no background information, nothing. It was literally just this one line. It turned out to be a troll tweet. A reply to the tweet by Russakovskii a day later made that very clear: "To be clear, I was trolling last night, but it turned out to be true. Some Southwest systems apparently do run Windows 3.1. lol."
However, that linked article doesn't cite any sources either, so we're right back where we started. After quite a bit of digging - that is, clicking a few links and like 3 minutes of searching online - following the various references and links back to their sources, I managed to find where all these stories actually come from, arriving at the root claim that spawned all the others. It's from an article by The Dallas Morning News, titled "What's the problem with Southwest Airlines scheduling system?" At the end of last year, Southwest Airlines' scheduling system had a major meltdown, leading to a lot of cancelled flights and stranded travelers just around the Christmas holidays. Of course, the media wanted to know what caused it, and that's where this Dallas Morning News article comes from. In it, we find the paragraphs that started the story that Southwest Airlines is still using Windows 3.1 (and Windows 95!): Southwest uses internally built and maintained systems called SkySolver and Crew Web Access for pilots and flight attendants. They can sign on to those systems to pick flights and then make changes when flights are canceled or delayed or when there is an illness. "Southwest has generated systems internally themselves instead of using more standard programs that others have used," Montgomery said. "Some systems even look historic like they were designed on Windows 95." SkySolver and Crew Web Access are both available as mobile apps, but those systems often break down during even mild weather events, and employees end up making phone calls to Southwest's crew scheduling help desk to find better routes. During periods of heavy operational trouble, the system gets bogged down with too much demand. Kyle Arnold at The Dallas Morning News That's it. That's where all these stories can trace their origin to.
These few paragraphs do not say that Southwest is still using ancient Windows versions; they just state that the systems the company developed internally, SkySolver and Crew Web Access, "look historic like they were designed on Windows 95". The fact that they are also available as mobile applications should further make it clear that no, these applications are not running on Windows 3.1 or Windows 95. Southwest pilots and cabin crews are definitely not carrying around pocket laptops from the '90s. These paragraphs were then misread, misunderstood, and mangled in a game of social media and bad reporting telephone, and here we are. The fact that nobody seems to have taken the time to click through a few links to find the supposed source of these claims, instead focusing on cashing in on the clicks and rage these stories would elicit, is a rather damning indictment of the state of online (tech) media. Many of the websites reporting on these stories are part of giant media conglomerates, have a massive number of paid staff, and they're being outdone by a dude in the Arctic with a small Patreon, minimal journalism training, and some common sense. This story wasn't hard to debunk - a few clicks and a few minutes of online searching is all it took. Ask yourself - why do these massive news websites not even perform the bare minimum?
A brief history of Dell UNIX
"Dell UNIX? I didn't know there was such a thing." A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery cafe. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn't reveal useful history of Dell UNIX, so here's my version, a summary of the three major development releases. Charles H. Sauer I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old - 2008 - there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox specifically. What was Dell UNIX? In the late '80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, included a DOS virtual machine called Merge to run, well, DOS applications, and offered compatibility with Microsoft Xenix. It might seem strange to us today, but Microsoft's Xenix was incredibly popular at the time, and compatibility with it was a big deal. The Olympic project turned out to be too ambitious on the hardware front so it got cancelled, but the Dell UNIX project continued to be developed. The next release, Dell System V Release 4, was a massive release, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more.
It also contained something Windows wouldn't be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful, it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release. Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86. Charles H. Sauer Sadly, this would also prove to be the last release of Dell UNIX. After a few more point releases, the brass at Dell had realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled since it was costing them more than it was earning them. As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I'm definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.
OpenBSD workstation for the people
This is an attempt at building an OpenBSD desktop that could be used by newcomers or by people that don't care about tinkering with computers and just want a working daily driver for general tasks. Somebody will obviously need to know a bit of UNIX but we'll try to limit it to the minimum. Joel Carnat An excellent, to-the-point, no-nonsense guide about turning a default OpenBSD installation into a desktop operating system running Xfce. You definitely don't need intimate, arcane knowledge of OpenBSD to follow along with this one.
OpenBSD gets hardware accelerated video decoding/encoding
Only yesterday, I mentioned one of the main reasons I decided to switch back to Fedora from OpenBSD were performance issues - and one of them was definitely the lack of hardware acceleration for video decoding/encoding. The lack of such technology means that decoding/encoding video is done using the processor, which is far less efficient than letting your GPU do it - which results in performance issues like stuttering and tearing, as well as a drastic reduction in battery life. Well, that's changed now. Thanks to the work of, well, many, a major commit has added hardware accelerated video decoding/encoding to OpenBSD. Hardware accelerated video decode/encode (VA-API) support is beginning to land in #OpenBSD -current. libva has been integrated into xenocara with the Intel userland drivers in the ports tree. AMD requires Mesa support, hence the inclusion in base. A number of ports will be adjusted to enable VA-API support over time, as they are tested. Bryan Steele This is great news, and a major improvement for OpenBSD and the community. Apparently, performance in Firefox is excellent, and with simply watching video on YouTube being something a lot of people do with their computers - especially laptops - anyone using OpenBSD is going to benefit immensely from this work.
1989 networking: NetWare 386
NetWare 386 or 3.0 was a very limited release, with very few copies sold before it was superseded by newer versions. As such, it was considered lost to time, since it was only sold to large corporations - for a massive, almost $8000 price tag - who obviously didn't care about software preservation. There are no original disks left, but a recent "warez" release has made the software available once again. As always, pirates save the day.
Managing Classic Mac OS resources in ResEdit
The Macintosh was intended to be different in many ways. One of them was its file system, which was designed for each file to consist of two forks, one a regular data fork as in normal file systems, the other a structured database of resources, the resource fork. Resources came to be used to store a lot of standard structured data, such as the specifications for and contents of alerts and dialogs, menus, collections of text strings, keyboard definitions and layouts, icons, windows, fonts, and chunks of code to be used by apps. You could extend the types of resource supported by means of a template, itself stored as a resource, so developers could define new resource types appropriate to their own apps. Howard Oakley And using ResEdit, a tool developed by Apple, you could manipulate the various resources to your heart's content. I never used the classic Mac OS when it was current, and only play with it as a retro platform every now and then, so I never used ResEdit when it was the cool thing to do. Looking back, though, and learning more about it, it seems like just another awesome capability that Apple lost along the way towards modern Apple. Perhaps I should load up one of my old Macs and see with my own eyes what I can do with ResEdit.
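To give a feel for how approachable the on-disk format is: a resource fork opens with a 16-byte header of four big-endian 32-bit fields - offset to the resource data, offset to the resource map, length of the data, and length of the map. A minimal Python sketch, parsing a synthetic blob rather than a real fork:

```python
import struct

# Minimal sketch: read the 16-byte header of a classic Mac OS resource fork.
# The header is four big-endian 32-bit fields: data offset, map offset,
# data length, map length. The bytes below are a synthetic example built
# with struct.pack, not a real resource fork.

def parse_resource_fork_header(blob):
    data_off, map_off, data_len, map_len = struct.unpack(">IIII", blob[:16])
    return {"data_offset": data_off, "map_offset": map_off,
            "data_length": data_len, "map_length": map_len}

# Synthetic fork: data starts at the conventional 256-byte offset,
# followed by 100 bytes of resource data and a 50-byte map.
fake_fork = struct.pack(">IIII", 256, 356, 100, 50) + b"\x00" * 390
hdr = parse_resource_fork_header(fake_fork)
print(hdr["data_offset"], hdr["map_offset"])  # 256 356
```

The map that follows is what ResEdit actually walks: a list of resource types, and for each type a list of IDs, names, and offsets into the data area.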
Google URL Shortener links will no longer be available
In 2018, we announced the deprecation and transition of Google URL Shortener because of the changes we've seen in how people find content on the internet, and the number of new popular URL shortening services that emerged in that time. This meant that we no longer accepted new URLs to shorten but that we would continue serving existing URLs. Today, the time has come to turn off the serving portion of Google URL Shortener. Please read on below to understand more about how this will impact you if you're using Google URL Shortener. Sumit Chandel and Eldhose Mathokkil Babu It should cost Google nothing to keep this running for as long as Google exists, and yet, this, too, has to be killed off and buried in the Google Graveyard. We'll be running into non-resolving Google URL Shortener links for decades to come, both on large, popular websites as well as on obscure forums and small websites. You'll find a solution to some obscure problem a decade from now, but the links you need will be useless, and you'll rightfully curse Google for being so utterly petty. Relying on anything Google that isn't directly serving its main business - ads - is a recipe for disaster, and will cause headaches down the line. Things like Gmail, YouTube, and Android are most likely fine, but anything consumer-focused is really a lottery.
Why I like NetBSD, or why portability matters
All that to say, I find that NetBSD's philosophy aligns with mine. The OS is small and cozy, and compared to many minimal Linux distributions, I found it faster to set up. Supported hardware is automatically picked up; for my ThinkPad T480s almost everything (except the trackpad issue I solved above) worked out of the box, and it comes with a minimal window manager and display manager to get you started. It is simple and minimal but with sane defaults. It is a hackable system that teaches you a ton. What more could you want? Marc Coquand I spent quite some time using OpenBSD earlier this year, and I absolutely, positively loved it. I can't quite put into words just how nice OpenBSD felt, how graspable the configuration files and commands were, how good and detailed the documentation, and how welcoming and warm the community was over on Mastodon, with even well-known OpenBSD developers taking time out of their day to help me out with dumb newbie questions. The only reason I eventually went back to Fedora on my workstation was performance. OpenBSD as a desktop operating system has some performance issues, from a slow file system to user interface stutter to problematic Firefox performance, that really started to grind my gears while trying to get work done. Some of these issues stem from OpenBSD not being primarily focused on desktop use, and some of them simply stem from lack of manpower or popularity. Regardless, nobody in the OpenBSD community was at all surprised or offended by me going back to Fedora. NetBSD seems to share a lot of the same qualities as OpenBSD, but, as the linked article notes, with a focus on different things. Like I said yesterday, I'm looking into building and testing a system entirely focused on tiled terminal emulators and TUI applications, and I've been pondering if OpenBSD or NetBSD would be a perfect starting point for that experiment.
Introduction to NanoBSD
This document provides information about the NanoBSD tools, which can be used to create FreeBSD system images for embedded applications, suitable for use on a USB key, memory card or other mass storage media. It can be used to build specialized install images, designed for easy installation and maintenance of systems commonly called "computer appliances". Computer appliances have their hardware and software bundled in the product, which means all applications are pre-installed. The appliance is plugged into an existing network and can begin working (almost) immediately. FreeBSD documentation Some of the primary features of NanoBSD are exactly what you'd expect out of a tool like this, such as the system being entirely read-only at runtime, so you don't have to worry about shutdowns or data loss, and of course, the entire creation process of NanoBSD images using a simple shell script with any arbitrary set of requirements. For the rest, it remains a FreeBSD system, so ports and packages work just as you'd expect, and assuming your specific settings for the NanoBSD image didn't remove it, anything that works in FreeBSD works in a NanoBSD image, too. The documentation is, as is often the case in the BSD world, excellent, and very easy to follow, even for someone not at all specialised in things like this. Reading through it, I'm pretty sure even I could create a customised NanoBSD image and run it, since it very much looks like you're just creating a custom installation script, adding just the things you need. I don't have a use for something like this, but I'm not sure how well-known NanoBSD is, and I feel like there's definitely some among you who would appreciate this.
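For a sense of what that creation script's input looks like, here's a minimal sketch of a NanoBSD configuration file. The NANO_* variables, CONF_* blocks, and the customize_cmd/cust_* hook mechanism come from the NanoBSD documentation; the appliance name and the specific choices below are illustrative assumptions, not a tested build.

```shell
# myappliance.conf - minimal sketch of a NanoBSD configuration (assumed example).
# Sourced by nanobsd.sh; it is plain sh syntax.

NANO_NAME=myappliance        # name used for build directories and the image
NANO_SRC=/usr/src            # FreeBSD source tree to build from
NANO_KERNEL=GENERIC          # kernel configuration to build
NANO_IMAGES=2                # two system partitions, for safe in-place upgrades

# src.conf-style knobs to trim the build (illustrative choice).
CONF_BUILD='
WITHOUT_TESTS=YES
'
CONF_INSTALL='
WITHOUT_EXAMPLES=YES
'

# Customization hooks run against the image's world before packing.
cust_tmp_mfs () (
    # Keep /tmp in memory, since the root filesystem is read-only at runtime.
    echo "md /tmp mfs rw,-s32m 2 0" >> ${NANO_WORLDDIR}/etc/fstab
)
customize_cmd cust_tmp_mfs
customize_cmd cust_allow_ssh_root   # stock helper shipped with nanobsd.sh
```

You would then build the image with something like `sh /usr/src/tools/tools/nanobsd/nanobsd.sh -c myappliance.conf` and write the result to your target media.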
CrowdStrike issue is causing massive computer outages worldwide
Well, this sure is something to wake up to: a massive worldwide outage of computer systems due to a problem with CrowdStrike software. Payment systems, airlines, hospitals, governments, TV stations - pretty much anything or anyone using computers could be dealing with bluescreens, bootloops, and similar issues today. Open-heart surgeries had to be stopped mid-surgery, planes can't take off, people can't board trains, shoppers can't pay for their groceries, and much, much more, all over the world. The problem is caused by CrowdStrike, a sort-of enterprise AV/monitoring software that uses a Windows NT kernel driver to monitor everything people do on corporate machines and logs it for... security purposes, I guess? I've never worked in a corporate setting so I have no experience with software like this. From what I hear, software like this is deeply loathed by workers the world over, as it gets in the way and slows systems down. And, as can happen with a kernel driver, a bug can cause massive worldwide outages, which is costing people billions in damages and may even have killed people. There is a workaround, posted by CrowdStrike, but it's a solution for individually fixing affected machines, and I've seen responses like "great, how do I apply this to 70k endpoints?", indicating that this may not be a practical solution for many affected customers. Then there's the issue that this may require a BitLocker password, which not everyone has on hand either. To add insult to injury, CrowdStrike's advisory about the issue is locked behind a login wall. A shitshow all around. Do note that while the focus is on Windows, Linux machines can run CrowdStrike software too, and I've heard from Linux kernel engineers who happen to also administer large numbers of Linux servers that they're seeing a huge spike in Linux kernel panics... caused by CrowdStrike, which is installed on a lot more Linux servers than you might think.
So while Windows is currently the focus of the story, the problems are far more widespread than just Windows. I'm sure we're going to see some major consequences here, and my - misplaced, I'm sure - hope is that this will make people think twice about one, using these invasive anti-worker monitoring tools, and two, employing kernel drivers for this nonsense.
NVIDIA transitions fully towards open-source GPU Linux kernel modules
It's a bit of a Linux news day today - it happens - but this one is good news we can all be happy about. After earning a bad reputation for mishandling its Linux graphics drivers for years, almost decades, NVIDIA has been turning the ship around these past two years, and today they made a major announcement: from here on out, the open source NVIDIA kernel modules will be the default for all recent NVIDIA cards. We're now at a point where transitioning fully to the open-source GPU kernel modules is the right move, and we're making that change in the upcoming R560 driver release. Rob Armstrong, Kevin Mittman and Fred Oh There are some caveats regarding which generations, exactly, should be using the open source modules for optimal performance. For NVIDIA's most cutting edge generations, Grace Hopper and Blackwell, you actually must use the open source modules, since the proprietary ones are not even supported. For GPUs from the Turing, Ampere, Ada Lovelace, or Hopper architectures, NVIDIA recommends the open source modules, but the proprietary ones are compatible as well. Anything older than that is restricted to the proprietary modules, as they're not supported by the open source modules. This is a huge milestone, and NVIDIA becoming a better team player in the Linux world is a big deal for those of us with NVIDIA GPUs - it's already paying dividends in vastly improved Wayland support, which up until very recently was a huge problem. Do note, though, that this only covers the kernel module; the userspace parts of the NVIDIA driver are still closed-source, and there's no indication that's going to change.
Linux patch to disable Snapdragon X Elite GPU by default
Not too long ago it seemed like Linux support for the new ARM laptops running the Snapdragon X Pro and Elite processors was going to be pretty good - Qualcomm seemed to really be stepping up its game, and detailed in a blog post exactly what they were doing to make Linux a first-tier operating system on their new, fancy laptop chips. Now that the devices are in people's hands, though, it seems all is not so rosy in this new Qualcomm garden. A recent Linux kernel DeviceTree patch outright disables the GPU on the Snapdragon X Elite, and the issue is, as usual, vendor nonsense, as it needs something called a ZAP shader to be useful. The ZAP shader is needed as by default the GPU will power on in a specialized "secure" mode and needs to be zapped out of it. With OEM key signing of the GPU ZAP shader it sounds like the Snapdragon X laptop GPU support will be even messier than typically encountered for laptop graphics. Michael Larabel This is exactly the kind of nonsense you don't want to be dealing with, whether you're a user, developer, or OEM, so I hope this gets sorted out sooner rather than later. Qualcomm's commitments and blog posts about ensuring Linux is a first-tier platform are meaningless if the company can't even get the GPU to work properly. These enablement problems should've been handled well before the devices entered circulation, so this is very disheartening to see. So, for now, hold off on X Elite laptops if you're a Linux user.
Ly: a TUI display manager
Ly is a lightweight TUI (ncurses-like) display manager for Linux and BSD. Ly GitHub page That's it. That's the description. I've been wanting to take a stab at running a full CLI/TUI environment for a while, see just how far I can get in my computing life (excluding games) running nothing but a few tiled terminal emulators running various TUI apps for email, Mastodon, browsing, and so on. I'm not sure I'd be particularly happy with it - I'm a GUI user through and through - but lately I've seen quite a few really capable and just pleasantly usable TUI applications come by, and they've made me wonder. It'd make a great article too.
Unified kernel image
UKIs can run on UEFI systems and simplify the distribution of small kernel images. For example, they simplify network booting with iPXE. UKIs make rootfs and kernels composable, making it possible to derive a rootfs for multiple kernel versions with one file for each pair. A Unified Kernel Image (UKI) is a combination of a UEFI boot stub program, a Linux kernel image, an initramfs, and further resources in a single UEFI PE file (device tree, cpu code, splash screen, secure boot sig/key, ...). This file can either be directly invoked by the UEFI firmware or through a boot loader. Hugues If you're still a bit unfamiliar with unified kernel images, this post contains a ton of detailed practical information. Unified kernel images might become a staple for forward-looking Linux distributions, and I know for a fact that my distribution of choice, Fedora, has been working on it for a while now. The goal is to eventually simplify the boot process as a whole, and make better use of the advanced capabilities UEFI gives us over the old, limited, 1980s BIOS model. Like I said a few posts ago, I really don't want to be using traditional bootloaders anymore. UEFI is explicitly designed to just boot operating systems on its own, and modern PCs just don't need bootloaders anymore. They're points of failure users shouldn't be dealing with anymore in 2024, and I'm glad to see the Linux world is seriously moving towards obviating the need for them.
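For the practically minded: on distributions shipping a recent systemd, a UKI can be assembled with systemd's ukify tool. A minimal sketch - the kernel version, paths, and kernel command line below are illustrative examples, not anything prescribed by the linked post:

```shell
# Build a single bootable PE file from stub + kernel + initramfs + cmdline.
# All paths and the cmdline are examples; adjust for your own system.
ukify build \
  --linux=/boot/vmlinuz-6.9.0 \
  --initrd=/boot/initramfs-6.9.0.img \
  --cmdline="root=UUID=abcd-1234 rw quiet" \
  --output=/boot/efi/EFI/Linux/linux-6.9.0.efi
```

The resulting `.efi` file can be registered directly with the firmware (e.g. via efibootmgr) and booted without any separate boot loader, which is exactly the bootloader-less setup argued for above.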
Inside an IBM/Motorola mainframe controller chip from 1981
In this article, I look inside a chip in the IBM 3274 Control Unit. But before I discuss the chip, I need to give some background on mainframes. Ken Shirriff Whenever we talk about mainframes, I am obligated to link to the story of an 18 year old buying a mainframe, while still living with his parents. One of the greatest presentations of all time.
Safari already contains ad tracking technology, and they’re now adding it to Safari’s Private Browsing mode, too
We've been talking a lot about sleazy ways in which the online advertising industry is conspiring with browser makers - who also happen to be in the online advertising industry - to weaken privacy features so they can still track you and the ads they serve you, but "with privacy". They're trying really hard to make it seem as if they're doing us a huge favour by making tracking slightly more private, and browser makers are falling over themselves to convince us that allowing some user and ad tracking is the only way to stop the kind of total everything, everywhere, all at once tracking we have now. We've got Google and Chrome pushing something called Privacy Sandbox, and we've got Mozilla and Facebook pushing something called Privacy-Preserving Attribution, both of which are designed to give the advertising industry slightly more private tracking in the desperate hope they won't still be doing a lot more tracking on the side. Safari users, meanwhile, have been feeling pretty good about all of this in the knowledge Apple cares about privacy, so surely Safari won't be doing any of this. You know where this is going, right? Today, the WebKit project published a lengthy blog post detailing all the various additional measures it's taking to make its Private Browsing mode more, well, private, and a lot of them are great moves, very welcome, and ensure that private browsing on Safari is a little bit more private than it is on Chrome, as the blog post gleefully points out. However, not long into the blog post, the other shoe drops. We also expanded Web AdAttributionKit (formerly Private Click Measurement) as a replacement for tracking parameters in URL to help developers understand the performance of their marketing campaigns even under Private Browsing. 
John Wilander, Charlie Wolfe, Matthew Finkel, Wenson Hsieh, and Keith Holleman A little further down, they go into more detail: Web AdAttributionKit (formerly Private Click Measurement) is a way for advertisers, websites, and apps to implement ad attribution and click measurement in a privacy-preserving way. You can read more about it here. Alongside the new suite of enhanced privacy protections in Private Browsing, Safari also brings a version of Web AdAttributionKit to Private Browsing. This allows click measurement and attribution to continue working in a privacy-preserving manner. John Wilander, Charlie Wolfe, Matthew Finkel, Wenson Hsieh, and Keith Holleman So not only does Safari already include the kind of tracking technology everyone is - rightfully - attacking Mozilla over for adding it to Firefox, Apple and the Safari team are actually taking it a step further and making this ad tracking technology available in private browsing mode. The technology is limited a bit more in Private Browsing mode, but its intent is preserved: to track you and the ads you see online. I would hazard a guess that when you enable a browser's private browsing or incognito mode, you assume that means zero tracking. We already know that Chrome's Incognito mode leaks data like a sieve with bullet holes in it, and now it seems Safari's Private Browsing mode, too, is going to allow advertisers to track you and the ads you see - blog post full of fancy privacy features be damned. Do you know those "Around the web" chumboxes? Even if you're unfamiliar with the term, you've most definitely seen these things all over the web, and really hate them. A major player in the chumbox business is a company called Taboola, a name that's quite despised and reviled online. Popular Apple blogger John Gruber called Taboola a "slumlord" and the lowest common denominator clickbait property. Do you want to know which major technology company just signed a massive deal with Taboola? 
Ad tech giant Taboola has struck a deal with Apple to power native advertising within the Apple News and Apple Stocks apps, Taboola founder and CEO Adam Singolda told Axios. Sara Fischer at Axios Apple needs to find new markets to keep growing, and clearly, pestering its users with upsells and subscriptions to its services isn't enough. The online advertising industry is massive - just look at Google's and Facebook's financial disclosures - and Apple seems to be interested in taking a bigger slice of that fat pie. And as Google and now Mozilla are finding out, a browser that blocks ads and ad tracking kind of gets in the way of that. Anyone who can make and sell plug-and-play Pi-Hole devices even normal people can use is going to make a killing.
I told you so: Mozilla working with Facebook to weaken Firefox’ privacy and anti-tracking features
I've long been warning about the dangers of relying on just one browser as the bulwark against the onslaught of Chrome, Chrome skins, and Safari. With Firefox' user numbers rapidly declining, now stuck at a mere 2% or so - and even less on mobile - and regulatory pressure possibly ending the Google-Mozilla deal which makes up roughly 80% of Mozilla's income, I've been warning that Mozilla will most likely have to start making Firefox worse to gain more temporary revenue. As the situation possibly grows even more dire, Firefox for Linux would be the first on the chopping block. I've received quite a bit of backlash over expressing these worries, but over the course of the last year or so we've been seeing my fears slowly become reality before our very eyes, culminating in Mozilla recently acquiring an online advertising analytics company. Over the last few days, things have become even worse: with the release of Firefox 128, the enshitification of Firefox has now well and truly begun. Less than a month after acquiring the AdTech company Anonym, Mozilla has added special software co-authored by Meta and built for the advertising industry directly to the latest release of Firefox, in an experimental trial you have to opt out of manually. This "Privacy-Preserving Attribution" (PPA) API adds another tool to the arsenal of tracking features that advertisers can use, which is thwarted by traditional content blocking extensions. Jonah Aragon If you have already upgraded to Firefox 128, you have automatically been opted into using this new API, and for now, you can still opt-out by going to Settings > Privacy & Security > Website Advertising Preferences, and remove the checkmark "Allow websites to perform privacy-preserving ad measurement". You were opted in without your consent, without any widespread announcement, and if it wasn't for so many Firefox users being on edge about Mozilla's recent behaviour, it might not have been snuffed out this quickly. 
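For those who prefer configuration files over settings menus, the same opt-out can be pinned in the profile's user.js so it survives upgrades. A sketch, assuming the preference name `dom.private-attribution.submission.enabled` is what that checkbox toggles (verify in about:config before relying on it), and using a hypothetical profile directory:

```shell
# Persist the PPA opt-out by appending the pref to user.js.
# The profile directory name is an example; find yours in about:profiles.
profile="$HOME/.mozilla/firefox/abcd1234.default-release"
mkdir -p "$profile"
echo 'user_pref("dom.private-attribution.submission.enabled", false);' \
  >> "$profile/user.js"
# Show what was written.
cat "$profile/user.js"
```

Firefox reads user.js at startup and applies each `user_pref` line on top of the profile's saved preferences, so this re-asserts the opt-out every launch.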
Over on GitHub, there's a more in-depth description of this new API, and the first few words are something you never want to hear from an organisation that claims to fight tracking and protect your privacy: "Mozilla is working with Meta". I'm not surprised by this at all - like I, perhaps gleefully, pointed out, I've been warning about this eventuality for a long time - but I've noted that on the wider internet, a lot of people were very much unpleasantly surprised, feeling almost betrayed by this, the latest in a series of dubious moves by Mozilla. It's not even just the fact they're "working with Meta", which is entirely disqualifying in and of itself, but also the fact there's zero transparency or accountability about this new API towards Firefox' users. Sure, we're all technologically inclined and follow technology news closely, but the vast majority of people don't, and there's bound to be countless people who perhaps only recently moved to Firefox from Chrome for privacy reasons, only to be stabbed in the back by Mozilla partnering up with Facebook, of all companies, if they even find out about this at all. It's right out of Facebook's playbook to secretly experiment on users. This is what I wrote a year ago: I'm genuinely worried about the state of browsers on Linux, and the future of Firefox on Linux in particular. I think it's highly irresponsible of the various prominent players in the desktop Linux community, from GNOME to KDE, from Ubuntu to Fedora, to seemingly have absolutely zero contingency plans for when Firefox enshittifies or dies, despite everything we know about the current state of the browser market, the state of Mozilla's finances, and the future prospects of both. Desktop Linux has a Firefox problem, but nobody seems willing to acknowledge it. 
Thom Holwerda It seems my warnings are turning into reality one by one, and if, at this point, you're still not worried about where you're going to go after Firefox starts integrating even more Facebook technologies or Firefox for Linux gets ever more resources pulled away from it until it eventually gets cancelled, you're blind.
The AMD Zen 5 microarchitecture: powering Ryzen AI 300 series for mobile and Ryzen 9000 for desktop
Built around the new Zen 5 CPU microarchitecture with some fundamental improvements to both graphics and AI performance, the Ryzen AI 300 series, code-named Strix Point, is set to deliver improvements in several areas. The Ryzen AI 300 series looks set to add another footnote in the march towards the AI PC with its mobile SoC featuring a new XDNA 2 NPU, from which AMD promises 50 TOPS of performance. AMD has also upgraded the integrated graphics with the RDNA 3.5, which is designed to replace the last generation of RDNA 3 mobile graphics, for better performance in games than we've seen before. Further to this, during AMD's recent Tech Day last week, AMD disclosed some of the technical details regarding Zen 5, which also covers a number of key elements under the hood on both the Ryzen AI 300 and the Ryzen 9000 series. On paper, the Zen 5 architecture looks quite a big step up compared to Zen 4, with the key component driving Zen 5 forward through higher instructions per cycle than its predecessor, which is something AMD has managed to do consistently from Zen to Zen 2, Zen 3, Zen 4, and now Zen 5. Gavin Bonshor at AnandTech Not the review and deep analysis quite yet, but a first thorough look at what Zen 5 is going to bring us, straight from AnandTech.
Fusion OS: writing an OS in Nim
I decided to document my journey of writing an OS in Nim. Why Nim? It's one of the few languages that allow low-level systems programming with deterministic memory management (garbage collector is optional) with destructors and move semantics. It's also statically typed, which provides greater type safety. It also supports inline assembly, which is a must for OS development. Other options include C, C++, Rust, and Zig. They're great languages, but I chose Nim for its simplicity, elegance, and performance. Fusion OS documentation website I love it when a hobby operating system project not only uses a less common programming language, but the author also details the entire development process in great detail. It's not a UNIX-like, and the goals are a single 64 bit address space, capability-based security model, and a lot more. It's targeting UEFI machines, and the code is, of course, open source and available on GitHub.
Google can totally explain why Chromium browsers quietly tell only its websites about your CPU, GPU usage
It's time for Google being Google, this time by using an undocumented API to track resource usage when using Chrome. When visiting a *.google.com domain, the Google site can use the API to query the real-time CPU, GPU, and memory usage of your browser, as well as info about the processor you're using, so that whatever service is being provided - such as video-conferencing with Google Meet - could, for instance, be optimized and tweaked so that it doesn't overly tax your computer. The functionality is implemented as an API provided by an extension baked into Chromium - the browser brains primarily developed by Google and used in Chrome, Edge, Opera, Brave, and others. Brandon Vigliarolo at The Register The original goal of the API was to give Google's various video chat services - I've lost count - the ability to optimise themselves based on the available system resources. Crucially, though, this API is only available to Google's domains, and other, competing services cannot make use of it. This is in clear violation of the European Union's Digital Markets Act, and with Chrome being by far the most popular browser in the world, and thus a clear gatekeeper, the European Commission really should have something to say about this. For its part, Google told The Register it claims to comply with the DMA, so we might see a change to this API soon. Aside from optimising video chat performance, the API, which is baked into a non-removable extension, also tracks performance issues and crashes and reports these back to Google. This second use, too, is at its core not a bad thing - especially if users are given the option to opt out of such crash analytics. Still, it seems odd to use an undocumented API for something like this, but I'm not a developer so what do I know. Mind you, other Chromium-based browsers also report this data back to Google, which is wild when you think about it. 
Normally I would suggest people switch to Firefox, but I've got some choice words for Firefox and Mozilla, too, later today.
Pretty pictures, bootable floppy disks, and the first Canon Cat demo?
About a month ago, Cameron Kaiser first introduced us to the Canon Cat, a computer designed by Jeff Raskin, but abandoned within six months by Canon, who had no idea what to do with it. In his second article on the Cat, Kaiser dives much deeper into the software and operating system of the Cat, even going so far as to become the first person to write software for it. One of the most surprising aspects of the Cat is that it's collaborative; other users can call into your Cat using a landline and edit the same document you're working on remotely. Selecting text has other functions too. When I say everything goes in the workspace, I do mean everything. The Cat is designed to be collabourative: you can hook up your Cat to a phone line, or at least you could when landlines were more ubiquitous, and someone could call in and literally type into your document remotely. If you dialed up a service, you would type into the document and mark and send text to the remote system, and the remote system's response would also become part of your document. (That goes for the RS-232 port as well, by the way. In fact, we'll deliberately exploit this capability for the projects in this article.) Cameron Kaiser You can also do calculations right into the text, going so far as allowing the user to define variables and reuse those variables throughout the text to perform various equations and other mathematical operations. If you go back and change the value of a variable, all other equations using those variables are updated as well. That's quite nifty, especially considering the age of the Cat, and since the Cat is fixed width, you can effectively create spreadsheets this way, too. There's really far too much to cover here, and I strongly suggest you head on over and read the entire thing.
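The recalculation behaviour is easy to mimic outside the Cat. A toy shell sketch of the idea - change a variable, re-evaluate, and every dependent result updates (the Cat, of course, does this inline in your document rather than in a shell):

```shell
# Dependent results recomputed from named variables, Cat-style.
recalc() { total=$((price * qty)); echo "total=$total"; }
price=5; qty=4
recalc        # prints total=20
price=7       # go back and change the variable...
recalc        # ...and the dependent result updates: prints total=28
```

The Cat's trick is that the re-evaluation is automatic and in place: edit the variable in the text, and every equation referencing it is updated where it sits.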
Microsoft quietly updates official lightweight Windows 11 Validation OS ISOs for 24H2
Microsoft has again quietly updated its Validation OS ISOs. In case you are not familiar with it, Validation OS is an official lightweight variant of Windows and it is designed for hardware vendors to test, validate and repair hardware defects. Sayan Sen at Neowin I had no idea this variant of Windows existed, but it kind of makes sense when you think about it. OEMs or other companies making devices that run or work with Windows may need to test, reboot, test, reboot, and so on, endlessly, and having a lightweight and fast version of Windows that doesn't load any junk you don't need - or just loads straight into your company's hardware testing application - is incredibly valuable. According to Microsoft, the Windows Validation OS boots to a command line that allows you to run Win32 applications. This has made me wonder if I can use it for the one thing I am forced to use Windows for: playing League of Legends (I cobbled together a spare parts machine solely for this purpose). My guess is that either the Validation OS will lack certain components or frameworks League of Legends requires, or is so different from regular Windows that it will trip Riot Games' rootkit, or both. Still, I'm curious. I might load this up on a spare hard drive and see what's possible.
GitHub is starting to feel like legacy software
The corporate branding, the new "AI-powered developer platform" slogan, makes it clear that what I think of as "GitHub" - the traditional website, what are to me the core features - simply isn't Microsoft's priority at this point in time. I know many talented people at GitHub who care, but the company's priorities just don't seem to value what I value about the service. This isn't an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn't navigating the website for me, replacing my need to use the website as it exists today. I've had tools hit this phase of decline and turn it around, but I'm not optimistic. It's still plenty usable now, and probably will be for some years to come, but I'll want to know what other options I have now rather than when things get worse than this. Misty De Meo Apparently, GitHub is in the middle of a long, drawn-out process where it's rewriting its frontend using React. De Meo was trying to use a particular feature of GitHub - the blame view, which also works through the command line but is apparently much harder to parse there - and realised the browser search feature just couldn't find the line of code they absolutely knew for sure was there. After scrolling for a while, the browser search feature suddenly found the line of code. I'd heard rumblings that GitHub's in the middle of shipping a frontend rewrite in React, and I realized this must be it. The problem wasn't that the line I wanted wasn't on the page - it's that the whole document wasn't being rendered at once, so my browser's built-in search bar just couldn't find it. On a hunch, I tried disabling JavaScript entirely in the browser, and suddenly it started working again. GitHub is able to send a fully server-side rendered version of the page, which actually works like it should, but doesn't do so unless JavaScript is completely unavailable. 
Misty De Meo Seems like a classic case of people being told to develop something in too little time, with the wrong tools, while management is breathing down their necks and pulling engineers away to work on buzzwords like "AI".
Windows NT 4.0 ported to run on certain Apple PowerPC Macs
The most fascinating time for Windows NT was its first few years on the market, when the brand new operating system supported a wide variety of architectures, from the default x86, all the way down to stuff like Alpha, MIPS, and exotic things like Intel i860, and even weirder stuff like Clipper (even a SPARC port was planned, but never released). One of the more conventional architectures that saw a Windows NT port - one that was actually released to the public, no less - was PowerPC. The last version of Windows NT to support exotic architectures was 4.0, with Windows 2000 only supporting x86, dropping everything else, including PowerPC (although Windows 2000 for Alpha reached RC1 status). The PowerPC version of Windows NT only supported IBM and Motorola systems using the PowerPC Reference Platform, and never the vastly more popular PowerPC systems from Apple. Well, it's 2024, and that just changed: Windows NT 4.0 can now be installed and run on certain Apple New World Power Macintosh systems. This repository currently contains the source code for the ARC firmware and its loader, targeting New World Power Macintosh systems using the Gossamer architecture (that is, MPC106 "Grackle" memory controller and PCI host, and "Heathrow" or "Paddington" super-I/O chip on the PCI bus). NT4 only, currently. NT 3.51 may become compatible if HAL and drivers get ported to it. NT 3.5 will never be compatible, as it only supports PowerPC 601. (The additional suspend/hibernation features in NT 3.51 PMZ could be made compatible in theory but in practise would require all of the additional drivers for that to be reimplemented.) maciNTosh GitHub page This is absolutely wild, and one of the most interesting projects I've seen in a long, long time. The deeply experimental nature of this effort does mean that NT 4.0 is definitely not stable on any of the currently supported machines, and the number of drivers implemented is the absolute bare minimum to run NT 4.0 on these systems. 
It does, however, support dual-booting both NT 4.0 and Mac OS 8, 9, and X, which would be quite something to set up. I'm definitely going to keep an eye on eBay for a supported machine, because running NT on anything other than x86 has always been a bit of a weird fascination for me. Sadly, period-correct PowerPC machines that support NT are extremely rare and thus insanely expensive, and will often require board-level repairs that I can't perform. Getting a more recent Yikes PowerMac G4 should be easy, since those just materialise out of thin air randomly in the world. I'm incredibly excited about this.
Package AmigaOS software for Linux and Windows with AxRuntime
This solution lets developers compile their Amiga API-based applications as Linux binaries. Once the features are implemented, tested and optimized using the runtime on Linux or Windows, developers re-compile their applications for their Amiga-like system of choice and perform final quality checking. Applications created with AxRuntime can be distributed to Linux or Windows communities, giving developers a much broader user base and a possibility to invite developers from outside general Amiga community to contribute to the application. AxRuntime website I had never considered this as an option, but with AmigaOS 3.x basically being frozen in time, it's a relatively easy target for an effort such as this. It won't surprise you to learn that AxRuntime is using code from AROS, which itself is fully compatible with AmigaOS 3.1. This should technically mean that any AmigaOS application that runs on AROS should be able to be made to run using this runtime, which is great news for Amiga developers. Why? Well, the cold, harsh truth is that the number of Amiga users is probably still dwindling as the sands of time cause people to, well, die, and the influx of new users, who also happen to possess the skillset to develop AmigaOS software, must be a very, very slow trickle, at best. This runtime will allow AmigaOS developers to package their software to run on Linux and Windows machines, getting a lot more eyes on the software in the process. Amiga devices are not exactly cheap or easy to come by, so this is a great alternative.
Google is ending support for Lacros, the experimental version of Chrome for ChromeOS
Back in August 2023, we previewed our work on an experimental version of Chrome browser for ChromeOS named Lacros. The original intention was to allow Chrome browser on Chromebooks to swiftly get the latest feature and security updates without needing a full OS update. As we refocus our efforts on achieving similar objectives with ChromeOS embracing portions of the Android stack, we have decided to end support for this experiment. We believe this will be a more effective way to help accelerate the pace of innovation on Chromebook. ChromeOS Beta Tester Community To refresh your memory, Lacros was an attempt by Google to decouple the Chrome browser from ChromeOS itself, so that the browser could be updated independently from ChromeOS as a whole. This would obviously bring quite a few benefits with it, from faster and easier updates, to the ability to keep updating the Chrome browser after device support has ended. This was always an experimental feature, so the end of this experiment really won't be affecting many people. The interesting part is the reference to the recent announcement that ChromeOS' Linux kernel and various subsystems will be replaced by their Android counterparts. I'm not entirely sure what this means for the Chrome browser on ChromeOS, since it seems unlikely that they're going to be using the Android version of Chrome on ChromeOS. It's generally impossible to read the tea leaves when it comes to whatever Google does, so I'm not even going to try.
Ubuntu security updates are a confusing mess
I've read this article several times now, and I'm still not entirely sure how to properly summarise the main points without leaving important details out. If you really boil it down to the very bare essentials, which packages get updates on which Ubuntu release is a confusing mess that most normal users will never be able to understand, potentially leaving them vulnerable to security flaws that have already been widely patched and are available on Ubuntu - just not your specific Ubuntu version, your specific customer type, or the specific package type in question. So, in the case of McPhail here, they needed a patched version of tomcat9 for Ubuntu 22.04. This patched version was available for Ubuntu 18.04 users because not only is 18.04 an LTS release - meaning five years of support - Canonical also offers a commercial Extended Security Maintenance (ESM) subscription for 18.04, so if you're paying for that, you get the patched tomcat9. On Ubuntu 20.04, another LTS release, the patched version of tomcat9 is available for everyone, but for the version McPhail is running, the newer LTS release 22.04, it's only available for Ubuntu Pro subscribers (24.04 is not affected, so not relevant for this discussion). Intuitively, this doesn't make any sense. The main cause of the weird discrepancy between 20.04 and 22.04 is that Canonical's LTS support only covers the packages in main (about 10% of the total amount of packages), whereas tomcat9 lives in universe (90% of packages). LTS packages in universe are only supported on a "best effort" basis, and one of the ways a patched universe package can be made available to non-paying LTS users is if it is inherited from Debian, which happens to be the case for tomcat9 in 20.04, while in 22.04, it's considered part of an Ubuntu Pro subscription. So, there's a fixed package, but 22.04 LTS users, who may expect LTS to truly mean LTS, don't get the patched version that exists and is ready to go without issues. Wild. 
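To make the mess concrete, here is the tomcat9 example boiled down to a tiny lookup table - purely a restatement of the article's findings for one universe package, not Canonical's actual policy engine:

```shell
# Who gets the patched tomcat9 (a universe package) on each LTS release,
# per the article. Illustrative only.
tomcat9_fix() {
  case "$1" in
    18.04) echo "ESM subscribers only" ;;
    20.04) echo "everyone (fix inherited from Debian)" ;;
    22.04) echo "Ubuntu Pro subscribers only" ;;
    24.04) echo "not affected" ;;
    *)     echo "best effort" ;;
  esac
}
tomcat9_fix 22.04   # prints: Ubuntu Pro subscribers only
```

That a newer LTS release gets a strictly worse answer than an older one for the same package is exactly the kind of rule no user should be expected to internalise.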
This is incredibly confusing, and would make me run for the Debian hills before my next reboot. I understand maintaining packages is a difficult, thankless task, but the nebulousness here is entirely of Canonical's own making, and it's without a doubt leaving users vulnerable who fully expect to be safe and all patched up because they're using an LTS release.
Qualcomm’s Oryon core: a long time in the making
In 2019, a startup called Nuvia came out of stealth mode. Nuvia was notable because its leadership included several notable chip architects, including one who used to work for Apple. Apple chips like the M1 drew recognition for landing in the same performance neighborhood as AMD and Intel's offerings while offering better power efficiency. Nuvia had similar goals, aiming to create a power efficient core that could surpass designs from AMD, Apple, Arm, and Intel. Qualcomm acquired Nuvia in 2021, bringing its staff into Qualcomm's internal CPU efforts. Bringing on Nuvia staff rejuvenated Qualcomm's internal CPU efforts, which led to the Oryon core in Snapdragon X Elite. Oryon arrives nearly five years after Nuvia hit the news, and almost eight years after Qualcomm last released a smartphone SoC with internally designed cores. For people following Nuvia's developments, it has been a long wait. Chips and Cheese Now that the Snapdragon X Elite and Pro chips are finally making their way to consumers, we're also finally starting to see proper deep-dives into the brand new hardware. Considering this will set the standard for ARM laptops for a long time to come - including easy availability of powerful ARM Linux laptops - I really want to know every single quirk or performance statistic we can find.
Iconography of the X Window System: the boot stipple
For the uninitiated, what are we looking at? Could it be the Moire Error from Doom? Well, no. You are looking at (part of) the boot up screen for the X Window System, specifically the pattern it uses as the background of the root window. This pattern is technically called a stipple. What you're seeing is pretty important and came to symbolize a lot for me as a computer practitioner. Matt T. Proud The X bootup pattern is definitely burnt onto my retina, as it probably is for a lot of late '90s and early 2000s Linux users. Setting up X correctly, and more importantly, not breaking it later, was almost an art at the time, so any time you loaded up your PC and this pattern didn't greet you, you'd get this sinister feeling in the pit of your stomach. There was now a very real chance you were going to have to debug your X configuration file, and nobody - absolutely nobody - liked doing that, and if you did, you're lying. Matt T. Proud dove into the history of the X stipple, and discovered it's been part of X since pretty much the very beginning; even more esoteric X implementations, like the ones used by Solaris or the various commercial versions, have the stipple. He also discovered several other variants of the stipple included in X, so there is a chance your memory might be just a tiny bit different. The stipple eventually disappeared around 2008, as part of the various efforts to modernise, sanitise, and speed up the Linux boot process on desktops. On modern distributions still using X, you won't encounter it anymore by default, but in true X fashion, the code is still there, and you can easily bring it back using a flag specifically designed for it, -retro, which you can pass via startx or your X init file.
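If you want to try this yourself on a distribution that still ships X, the flag is passed through startx to the X server (this assumes your session actually goes through startx, which is rare on modern desktop setups):

```shell
# Everything after "--" goes to the X server itself rather than to
# the client session, so this starts X with the classic root-window
# stipple restored.
startx -- -retro
```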
There's a ton more information in Proud's excellent article, but this one paragraph made me smile: I will remark that in spite of my job being a software engineer, I had never spent a lot of time looking at the source code for the X Server (XFree86 or X.Org) before. It's really nuts to see that a lot of the architecture from X10R3 and X11R1 still persists in the code today, which is a statement that can be said in deep admiration for legacy code but also disturbance from the power of old decisions. Without having looked at the internals of any Wayland implementation, I can sympathize sight unseen with the sentiments that some developers have toward the X Window System: the code is a dead end. I say that with the utmost respect to the X Window System as a technology and an ecosystem. I'll keep using X, and I will be really sad when it's no longer possible for me to do so for one reason or another, as I'm extremely attached to its quirks. But it's clear the future is limited. Matt T. Proud We all have great - and not so great - memories of X, but I am really, really happy I no longer have to use it.
Palestinians say Microsoft unfairly closing their accounts
Palestinians living abroad have accused Microsoft of closing their email accounts without warning - cutting them off from crucial online services. They say it has left them unable to access bank accounts and job offers - and stopped them using Skype, which Microsoft owns, to contact relatives in war-torn Gaza. Microsoft says they violated its terms of service - a claim they dispute. Mohamed Shalaby and Joe Tidy at the BBC Checking up on your family members to see if they survived another day of an ongoing genocide doesn't seem like something that should be violating any terms of any services, but that's just me.
“Majority of websites and mobile apps use dark patterns”
A global internet sweep that examined the websites and mobile apps of 642 traders has found that 75.7% of them employed at least one dark pattern, and 66.8% of them employed two or more dark patterns. Dark patterns are defined as practices commonly found in online user interfaces and that steer, deceive, coerce, or manipulate consumers into making choices that often are not in their best interests. International Consumer Protection and Enforcement Network Dark patterns are everywhere, and it's virtually impossible to browse the web, use certain types of services, or install mobile applications without having to dodge and roll just to avoid all kinds of nonsense being thrown at you. It's often not even ads that make the web unusable - it's all the dark patterns tricking you into viewing ads, entering into a subscription, enabling notifications, or sharing your email address that are the real problem. This is why one of the absolute primary demands I have for the next version of OSNews is zero dark patterns. I don't want any dialogs begging you to enable ads, no modal windows demanding you sign up for a newsletter, no popups asking you to enable notifications, and so on - none of that stuff. My golden standard is "your computer, your rules", and that includes your right to use ad blockers or anything else to change the appearance or functioning of our website on your computer. It'd be great if dark patterns became illegal somehow, but it would be incredibly difficult to write any legislation that would properly cover these practices.
AmigaKit launches a new Amiga that’s not an Amiga at all
I try to keep tabs on a huge number of operating system projects out there - for obvious reasons - but long ago I learned that when it comes to the world of Amiga, it's best to maintain distance and let any important news find its way out of the Amiga bubble, lest one lose their sanity. Keeping up with the Amiga world requires following every nook and cranny of various forums and websites with different allegiances to different (shell) companies, with often barely coherent screeching and arguments literally nobody cares about. It's a mess is what I'm trying to say. Anyway, it seems one of the many small companies still somehow making a living in the Amiga world, AmigaKit, has recently released a new device, the A600GS. It's a retrogaming-oriented Amiga computer, but it does come with something called AmiBench, which is apparently a weird hybrid between bits of Amiga OS 4 and AROS, so it does also support running a proper desktop and associated applications, but only AmigaOS 3.x applications (I think? It's a bit unclear). It has HDMI at up to 1080p, and even Wi-Fi and Bluetooth support, which is pretty neat. Wait, Wi-Fi and Bluetooth support? What are we really dealing with here? Once again the information is hard to find because AmigaKit is incredibly stingy with specifications - I had to read goddamn YouTube comments to get some hints - but it seems to be a custom board with an Orange Pi Zero 3 stuck on top doing most of the work. In other words, the meat of this thing is just an emulator, which in and of itself isn't a bad thing, it's just weird to me that they're not upfront and direct about this. While this answers some questions, it also raises a whole bunch more. If this is running on low-end Allwinner ARM hardware from 2022, how is this AmiBench desktop environment (or operating system?) a fork of OS4 with AROS code in it? AmigaOS 4 is PowerPC-only, which may explain why AmigaKit only mentions AmigaOS 3.x and 68K compatibility, and not AmigaOS 4 compatibility.
And what's AROS doing in there? I mean, this is an interesting product in the sense that it's a relatively cheap turnkey solution for classic Amiga enthusiasts, but a new Amiga this is definitely not. At about 130, this is not a bad deal, but other than hardcore fans of the classic 68K Amiga, I don't see many people being interested in this. The Apollo Standalone V4+ piques my interest way more, but at 700-800, it's also a lot more expensive; at least they're much clearer about what the Apollo is, what software it's running, and that they're giving back their work to AROS.
“I fixed a 6-year-old .deb installation bug in Ubuntu MATE and Xubuntu”
I love a good bug hunting story, and this one is right up there as a great one. Way back in 2018, Doug Brown discovered that after installing Ubuntu MATE 18.04, if he launched Firefox from the icon in the default panel arrangement to install Chrome from the official Chrome website, the process was broken. After downloading the .deb and double-clicking it, GDebi would appear, but after clicking "Install", nothing happened. What was supposed to happen is that after clicking "Install", an authentication dialog should appear where you enter your root password, courtesy of gksu. However, this dialog did not appear, and without thinking too much of it, Brown shrugged and just installed the downloaded Chrome .deb through the terminal, which worked just fine. While he didn't look any deeper into the cause of the issue, he did note that as the years and new Ubuntu releases progressed, the bug would still be there, all the way up to the most recent release. Finally, 2.5 years ago, he decided to dive into the bug. It turned out there were lots of reports about this issue, but nobody had stepped up to fix it. Workarounds were made available through wrapper scripts, and deeper investigations into the cause revealed helpful information. The actual error message was a doozy: "Refusing to render service to dead parents", which is quite metal and a little disturbing. In summary, the problem was that GDebi was using execv() to replace itself with an instance of pkexec, which was intended to bring up an authentication dialog and then allow GDebi to run as a superuser. pkexec didn't like this arrangement, because it wants to have a parent process other than init. Alkis mentioned that you could recreate the problematic scenario in a terminal window by running gdebi-gtk with setsid to run it in a new session. Doug Brown Backing up a few steps, if the name "gksu" rings a bell for you, you might have already figured out where the problem most likely originated from.
Right around that time, 2018, Ubuntu switched to using PolicyKit instead, and gksu was removed from Ubuntu. GDebi was patched to work with PolicyKit instead, and this was what introduced the actual bug. Sadly, despite having a clear idea of the origin of the bug, as well as where to look to actually fix it, nobody picked it up. It sat there for years, causing problems for users, without a fix in sight. Brown was motivated enough to fix it and submitted a patch, but after receiving word it would be looked at within a few days, he heard nothing back for years, not helped by the fact that GDebi has long been unmaintained. It wasn't until very recently that he decided to go back again, and this time, after filling out additional information required for a patch for an unmaintained package, it was picked up, and will become available in the next Ubuntu release (and will most likely be backported, too). Brown further explains why it took so long for the bug to be definitively fixed. Not only is GDebi unmaintained, the bug also only manifested itself when launching Firefox from the panel icon - it did not manifest when launching Firefox from the MATE menu, so a lot of people never experienced it. On top of that, as we all sadly know, Ubuntu replaced the Firefox .deb package with the snap version, which also doesn't trigger the bug. It's a long story for sure, but a very interesting one. It shows how sometimes, the stars just align to make sure a bug does not get fixed, even if everyone involved knows how to fix it, and even if fixes have been submitted. Sometimes, things just compound to cause a bug to fall through the cracks.
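The mechanism at the heart of the bug can be sketched in a few lines of Python. This is purely an illustration of how execv() behaves - replacing the calling process image in place, keeping the same PID and parent - and not GDebi's actual code; GDebi did the same swap with pkexec, which then refused to run because it insists on a live parent process other than init.

```python
import os

# Fork a child, then have the child replace itself with another
# program via execv(), the same way GDebi replaced itself with
# pkexec. No new process is created by execv(): the PID and the
# parent relationship carry over to the new program image.
pid = os.fork()
if pid == 0:
    # Child: swap our process image for /bin/echo. execv() never
    # returns on success.
    os.execv("/bin/echo", ["echo", "same pid, new program image"])
else:
    # Parent: reap the child and record its exit status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)
```

The key point is that from the kernel's perspective the exec'd program inherits the exact process context GDebi had, which is what pkexec's parent-process sanity check tripped over.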
Google extends Linux kernel support to keep Android devices secure for longer
Android, like many other operating systems, uses the open-source Linux kernel. There are several different types of Linux kernel releases, but the type that's most important to Android is the long-term support (LTS) one, as they're updated regularly with important bug fixes and security patches. Starting in 2017, the support lifetime of LTS releases of Linux was extended from two years to six years, but early last year, this extension was reversed. Fortunately, Google has announced that moving forward, they'll support their own LTS kernel releases for four years. Here's why that's important for the security of Android devices. Mishaal Rahman at Android Authority I fully support the Linux kernel maintainers dropping the LTS window from six to two years. The only places where such old kernels were being used were embedded devices and things like smartphones that vendors refused to update to newer Android releases, and it makes no sense for kernel maintainers to be worrying about that sort of stuff. If an OEM wants to keep using such outdated kernels, the burden should be on that OEM to support that kernel, or to update affected devices to a newer, supported kernel. It seems Google, probably wisely, realised that most OEMs weren't going to properly upgrade their devices and the kernels that run on them, and as such, the search giant decided to simply create their own LTS releases instead, which will be supported for four years. Google already maintains various Android-specific Linux kernel branches anyway, so it fits right into their existing development model for the Android Linux kernel. Some of the more popular OEMs, like Google itself or Samsung, have promised longer support life cycles for new Android versions on their devices, so even with this new Android-specific LTS policy, there's still going to be cases where an OEM will have to perform a kernel upgrade where they didn't have to before with the six-year LTS policy.
I wonder if this is going to impact any support promises made in recent years.
Mozilla opts to extend Windows 7/8/8.1 support
Among them, Byron Jourdan, Senior Director, Product Management at Mozilla, under the Reddit username ComprehensiveDoor643, revealed that Mozilla plans to support Firefox on Windows 7 for longer. When asked separately whether that also included Windows 8 and 8.1, Jourdan added that it was certainly the plan, though how long the extended support would last was still undecided. Sayan Sen at Neowin Excellent move by Mozilla. I doubt there are that many new features and frameworks in Windows 10 or 11 that are absolutely essential to Firefox working properly, so assuming it can gracefully disable any possible Windows 10/11-exclusive features, it should be entirely possible to use Firefox as an up-to-date, secure, and capable browser on Windows 7/8.x. Windows 7 and 8.x users still make up about 2.7% of Windows users worldwide, and with Windows' popularity, that probably still translates to millions and millions of people. Making sure these people have access to a safe and secure browser is a huge boon, and I'm very happy Mozilla is going to keep supporting these platforms as best they can, at least for now. For those of us who already consider Windows 7, especially, a retrocomputing platform - I sure do - this is also great news, as any retro box or VM we load up with it will also get a modern browser. Just excellent news all around.
No more boot loader: please use the kernel instead
Most people are familiar with GRUB, a powerful, flexible, fully-featured bootloader that is used on multiple architectures (x86_64, aarch64, ppc64le OpenFirmware). Although GRUB is quite versatile and capable, its features create complexity that is difficult to maintain, and that both duplicate and lag behind the Linux kernel while also creating numerous security holes. On the other hand, the Linux kernel, which has a large developer base, benefits from fast feature development, quick responses to vulnerabilities and greater overall scrutiny. We (Red Hat boot loader engineering) will present our solution to this problem, which is to use the Linux kernel as its own bootloader. Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line contain everything they need to reach the final boot target. All necessary drivers, filesystem support, and networking are already built in and code duplication is avoided. Marta Lewandowska I'm not a fan of GRUB. It's too much of a single point of failure, and since I'm not going to be dual-booting anything anyway I'd much rather use something that isn't as complex as GRUB. Systemd-boot is an option, but switching over from GRUB to systemd-boot, while possible on my distribution of choice, Fedora, is not officially supported and there's no guarantee it will keep working from one release to the next. The proposed solution here seems like another option, and it may even be a better option - I'll leave that to the experts to discuss. It seems to me that the ideal we should be striving for is to have booting the operating system become the sole responsibility of the UEFI firmware, which usually already contains the ability to load any operating system that supports UEFI without explicitly installing a bootloader. It'd be great if you could set your UEFI firmware to just always load its boot menu, instead of hiding it behind a function key or whatever.
We made UEFI more capable to address the various problems and limitations inherent in BIOS. Why are we still forcing UEFI to pretend it has the same limitations?
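For a concrete sense of what "packed into a unified kernel image" means, here's a sketch using systemd's ukify tool, which bundles the kernel, initramfs, and command line into a single UEFI-bootable binary. All paths and the command line are placeholders, and this assumes a recent systemd with ukify available:

```shell
# Hypothetical example: build a UKI that the UEFI firmware can boot
# directly, with no separate bootloader. Substitute your own kernel,
# initramfs, root device, and output path.
ukify build \
  --linux=/boot/vmlinuz-6.9.7 \
  --initrd=/boot/initramfs-6.9.7.img \
  --cmdline="root=/dev/sda2 ro quiet" \
  --output=/boot/efi/EFI/Linux/linux-6.9.7.efi
```

The resulting .efi file is a normal UEFI executable, so the firmware's own boot menu (or an entry created with efibootmgr) can launch it directly.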
Design and build the next version of OSNews
Despite being live since 1997, OSNews has had fairly few redesigns in the grand scheme of things. If my memory serves me correctly, we've had a grand total of 6 designs, and we're currently on version 6, introduced about 5 years ago because of unpleasant reasons. It's now 2024, and for a variety of reasons, we're looking to work towards version 7 of our almost 30-year-old website, and we need help. I have a very clear idea of what I want OSNews 7 to be like - including mockups. The general goals are making the site visually simpler, reducing our dependency on WordPress extensions, and reducing the complexity of our theme and website elements to make it a bit easier for someone like me to change small things without breaking anything. Oh, and a dark mode that works. Note that we're not looking to change backends or anything like that - WordPress will stay. If you have the WordPress, design, and developer skills to make something like this a reality, and in the process shape the visual identity of one of the oldest continuously running technology news websites in the world, send me an email.
Getting the most out of TWM, X11’s default window manager
Graham's TWM page has been around for like two decades or so and still isn't even remotely as old as TWM itself, and in 2021 they published an updated version with even more information, tips, and tricks for TWM. The Tab Window Manager finds its origins in the late 1980s, and has been the default window manager for the X Window System for a long time now, too. Yet, few people know it exists - how many people even know X has a default window manager? - and even fewer people know you can actually style it, too. OK, so TWM is fairly easy to configure but a lot of people, upon seeing the default config, scream "Ugh, that's awful!" and head off to the ports tree or their distro sources in search of the latest and greatest uber desktop environment. There are some hardcore TWM fans and minimalists however who stick around and get to liking the basic feel of TWM. Then they start to mod it and create their own custom desktop. All part of the fun in Unix :). Graham's TWM page I'll admit I have never used TWM properly, and didn't know it could be themed at all. I feel very compelled to spend some time with it now, because I've always liked the by-now classic design where the right-click desktop menu serves as the central location for all your interactions with the system. There are quite a few more advanced, up-to-date forks of TWM as well, but the idea of sticking to the actual default X window manager has a certain charm. I'm almost too afraid to ask, because the answer on OSNews to these sorts of questions is almost always "yes" - do we have any TWM users in the audience? I'm extremely curious to find out if TWM actually has a reason to exist at this point, or if, in 2024, it's just junk code in the X.org source repository, because I'm looking at some of these screenshots and I feel a very strong urge to give it a serious go.
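For a taste of what styling TWM looks like, here's a minimal ~/.twmrc fragment. The colors and menu entries are arbitrary examples, not anything from Graham's page; TWM reads this file at startup, and f.menu binds a menu to a mouse button on the root window:

```
# Hypothetical ~/.twmrc fragment: restyle window borders and
# titlebars, and bind a simple root-window menu to the left button.
Color
{
    BorderColor "slategrey"
    TitleForeground "white"
    TitleBackground "maroon"
}

Button1 = : root : f.menu "apps"

menu "apps"
{
    "Apps"      f.title
    "XTerm"     f.exec "exec xterm &"
    "Restart"   f.restart
    "Exit"      f.quit
}
```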
A brief summary of click-to-raise and drag-and-drop interaction on X11 and Wayland
The goal is to be able to drag an icon from a background window without immediately raising that window and obscuring the drop target window when using the click-to-focus mode. This is a barebones description of what needs to happen. It assumes familiarity with code, protocols, etc. as needed. Quod Video The article describes how to get there using both X and Wayland, and it's clear there's still quite a bit of work to do. At least on my KDE Wayland setups, the way it works now is that when I click to drag an icon from a lower Dolphin window to a higher one, the lower window is raised, and then, once I hover over the destination window for a moment, that window is raised back above it. Of course, this only works if the destination window remains at least partially visible, which might not always be the case. For usability's sake, there needs to be an option to start a drag operation from one window to the next without altering the Z-order.
Android 15 could include a desktop mode — but what for?
If there was ever a "will they, won't they?" love story in mobile computing, it's definitely Google's on and off again relationship with Android's desktop mode. There have been countless hints, efforts, and code pertaining to the mythical desktop mode for Android, but so far, Google has never flipped the switch and made it available. It's 2024, Android 15 development is in full swing, and it seems Google and Android's desktop mode are dating again. This past spring, Google added DisplayPort support to the Pixel 8 and Pixel 8 Pro in a Feature Drop update, allowing for easy wired connections to external monitors. Then, tinkering in Android 14 QPR3 Beta 2.1, Mishaal Rahman was able to get a new desktop interface up and running, complete with Android apps running in resizeable floating windows. It's not confirmed that Android 15 will ship with a built-in desktop mode, but the bones are there. It does make me wonder, though: why? What would a desktop interface add to Android? Taylor Kerns at Android Police I'm actually fairly convinced Android could, indeed, serve as an excellent desktop operating system, but without any official backing by Google, it's always been a massive hack to use Android with a mouse and keyboard. It's not so much the hardware support - it's all there - but rather the software support, and the clunky way common Android UI tasks feel when performing them with a mouse. I've installed "Android desktop distributions" countless times, and the third-party hacks they use, like clunky taskbars and custom menus and so on, make for a horrid user experience. Samsung DeX seems to be the only somewhat successful attempt at adding a desktop mode to Android, but it can't be installed on any regular PC or laptop, and requires cumbersome cabling or expensive docks, making it more of a curiosity than a true desktop mode in the sense most of us are thinking of.
This feature needs to come from Google itself, and it needs to be something third parties can use in their ROMs and x86 builds so we can truly use Android on a desktop. I don't believe that's going to happen, though. It's clear Google is more interested in pushing Chrome OS for desktop and laptop use, and it seems more likely that any desktop mode that gets added to Android is going to be similar in nature to DeX - something you can only use by hooking up your phone to a display and configuring wireless input devices. Cool, but not exactly something that will turn Android into a desktop contender.
Breaking: comment editing is back
I've just confirmed with, well, myself, that comment editing on OSNews finally works again. We're finally free. Our trying times are behind us, and we can begin to rebuild. Stay safe out there, and be kind to each other.
Google is bringing Fuchsia OS to Android devices, but not in the way you’d think
To evolve Fuchsia beyond smart home devices, Google has been working on projects such as Starnix to run unmodified Linux binaries on Fuchsia devices. In addition, since late April of this year, Google has been working on a new project called "microfuchsia" that aims to make Fuchsia bootable on existing devices via virtualization. Microfuchsia, according to Google, is a Fuchsia OS build that targets virtual machines and is designed to be bootable in virtualization solutions such as QEMU and pKVM. Mishaal Rahman at Android Authority The goal here, according to Mishaal Rahman, might be to use this new microfuchsia thing to replace the stripped-down Android version that's currently being used inside Android's pKVM to run certain secured workloads. Relevant patches have been submitted to both the Fuchsia and Android side of things for this very purpose. At this point, it really seems that Google's grand ambitions with Fuchsia simply didn't survive the massive employee culling, with leadership probably reasoning that Android and Chrome OS are good enough, and that replacing them with something homegrown and possibly more suited - speculation, of course - simply isn't worth the investment in both time and money. It probably makes sense from a financial standpoint, but it's still sad.
Apple bows to Russian censorship once more, removes VPN apps from Russian App Store
A few weeks ago, I broke the news that Mozilla had removed several anti-censorship Firefox extensions from its store in Russia, and a few days later I also broke the news they reversed course on their decision and reinstated the extensions. Perhaps not worthy of a beauty prize, as a Dutch saying goes, but at least the turnaround time was short, and they did the right thing in the end. Well, let's see how Apple is going to deal with the exact same situation. Novaya Gazeta Europe reports that bowing under pressure from the same Russian censors that targeted Mozilla, the company has removed a whole slew of VPN applications used by Russians to evade the stringent totalitarian censorship laws in the warmongering nation. Apple has removed several apps offering virtual private network (VPN) services from the Russian AppStore, following a request from Roskomnadzor, Russia's media regulator, independent news outlet Mediazona reported on Thursday. The VPN services removed by Apple include leading services such as ProtonVPN, Red Shield VPN, NordVPN and Le VPN. Those living in Russia will no longer be able to download the services, while users who already have them on their phones can continue using them, but will be unable to update them. Novaya Gazeta Europe Apple has a long history of falling in line with the demands from dictators and totalitarian regimes, and Russia is no stranger to telling Apple what to do. Earlier this year, Apple was ordered to remove an application developed by the team of the murdered opposition figure Alexey Navalny, and of course, Apple rolled over and complied. Much like Apple's grotesque suck-up behaviour in China, this stands in stark contrast to Apple's whining, complaining, and tantrums in the European Union. It seems Apple finds it more comfortable operating under dictators than in democracies.