OSnews

Link https://www.osnews.com/
Feed http://www.osnews.com/files/recent.xml
Updated 2025-06-30 21:31
Linux scores a surprising gaming victory against Windows 11
The conversation around gaming on Linux has changed significantly during the last several years. It's a success story engineered by passionate developers working on the Linux kernel and the open-source graphics stack (and certainly bolstered by the Steam Deck). Many of them are employed by Valve and Red Hat. Many are enthusiasts volunteering their time to ensure Linux gaming continues to improve. Don't worry, this isn't going to be a history lesson, but it's an appropriate way to introduce yet another performance victory Linux is claiming over Windows. I recently spent some time with the Framework 13 laptop, evaluating it with the new Intel Core Ultra 7 processor and the AMD Ryzen 7 7840U. It felt like the perfect opportunity to test how a handful of games ran on Windows 11 and Fedora 40. I was genuinely surprised by the results! Jason Evangelho I'm not surprised by these results. At all. I've been running exclusively Linux on my gaming PC for years now, and gaming has pretty much been a solved issue on Linux for a while now. I used to check ProtonDB before buying games on Steam without a native Linux version, but I haven't done that in a long time, since stuff usually just works. In quite a few cases, we've even seen Windows games perform better on Linux through Proton than they do on Windows. An example that still makes me chuckle is that when Elden Ring was just released, it had consistent stutter issues on Windows that didn't exist on Linux, because Valve's Proton did a better job at caching textures. And now that the Steam Deck has been out for a while, people just expect Linux support from developers, and if it's not there on launch, Steam reviews will reflect that. It's been years since I bought a game that I've had to refund on Steam because it didn't work properly on Linux. The one exception remains games that employ Windows rootkits for their anticheat functionality, such as League of Legends, which recently stopped working on Linux because the company behind the game added a rootkit to their anticheat tool. Those are definitely an exception, though, and honestly, you shouldn't be running a rootkit on your computer anyway, Windows or not. For my League of Legends needs, I just grabbed some random spare parts and built a dedicated, throwaway Windows box that obviously has zero of my data on it, and pretty much just runs that one stupid game I've sadly been playing for like 14 years. We all have our guilty pleasures. Don't kink-shame. Anyway, if only a few years ago you had told me or anyone else that gaming on Linux would be a non-story, a solved problem, and that most PC games just work on Linux without any issues, you'd be laughed out of the room. Times sure have changed due to the dedication and hard work of both the community and various companies like Valve.
Apple to let EU users set new defaults for multiple apps, delete App Store, Photos, Messages and more
Apple is making additional changes to its app ecosystem in the European Union to comply with the terms of the Digital Markets Act. The default browser selection experience that's already in place will be updated, Apple will allow EU users to set defaults for more types of apps, and core iOS apps like Messages and the App Store will also be deletable. iPhone owners in the EU can already set different defaults for the browser, mail app, app marketplace, and contactless payments, but Apple is going to allow users to select new defaults for phone calls, messaging, password managers, keyboards, call spam filters, navigation, and translation. That means, for example, that EU users will be able to choose an app like WhatsApp instead of Messages to be their default texting app, or a mapping app like Waze to be the default instead of Apple Maps. Juli Clover at MacRumors It's clear by now that Apple's malicious DMA compliance attempts have proven to be an abject failure. Apple continuously needs to backtrack and give in more and more to the European Commission, without the Commission even having to really do anything at all. Slowly but surely, Apple is complying with the DMA, all while its toddler tantrums have done serious damage to the company's standing and reputation without having any of the desired effects for Apple. Whoever set out this toddler DMA strategy at Apple should probably be fired for incompetence. This latest round of additional changes to comply with the DMA is a very welcome one, and further solidifies the EU version of iOS as the best version. Not only do iOS users in the European Union get different browser engines, they can also remove a larger number of default applications, set more default applications, replace more Apple services with third-party ones, and so on. Thanks to the DMA, iOS is finally becoming more of a real operating system, instead of a set of shackles designed primarily to lock users in. It's only a matter of time before laws similar to the DMA spread to the rest of the world, and I honestly don't think the United States is going to lag behind. Corruption in the US is widespread, but there's only so much money can do, even in US politics.
DOS’s last stand on a modern ThinkPad: X13 Gen 1 with Intel i5-10310U
When one thinks of modern technologies like Thunderbolt, 2.5 Gigabit Ethernet and modern CPUs, one would associate them with modern operating systems. How about DOS? It might seem impossible, however, I did an experiment on a relatively modern 2020 ThinkPad and found that it can still run MS-DOS 6.22. MS-DOS 6.22 is the last standalone version of DOS released by Microsoft in June 1994. This makes it 30 years old today. I shall share the steps and challenges in locating a modern laptop capable of doing so and making the 30-year-old OS work on it with audio and networking functions. This is likely among the final generation of laptops able to run DOS natively. Yeo Kheng Meng I was unaware that the legacy boot mode through a UEFI Compatibility Support Module (CSM) was being phased out on Intel systems (I can't find anything definitive on what AMD is planning to do with CSM). This will definitely be an end-of-the-line kind of thing for people interested in running old, outdated operating systems on modern hardware, as doing so would require proper EFI support. I'm not actually salty about this at all by the way - there's no place in modern PCs for something designed in 1981. We have ATX for that. Anyway, it turns out MS-DOS 6.22 actually runs pretty well on this 2020 ThinkPad X13 Gen 1. Of course you have to enable CSM, and disable secure boot and kernel DMA protection, but once that's done, you can just install MS-DOS 6.22 like it's 1994. Thanks to SBEMU, you can use modern sound cards in pure DOS mode, and due to various backwards compatibility affordances in network chipsets, you can even use some of those - even through Thunderbolt, which is just PCI over a cable, after all (more or less). Running MS-DOS on a modern laptop may not allow you to get the most out of your modern hardware, but at least you can run DOS games very well, as the benchmarks Yeo Kheng Meng ran show.
AMD says Microsoft’s next big Windows 11 update will improve Zen 5 CPU performance
AMD says Microsoft's upcoming Windows 11 version 24H2 update will improve performance for its new Zen 5 CPUs. The Ryzen 9000 series launched earlier this month, and failed to live up to AMD's performance promises in most reviews. After rumors of a Windows bug, AMD has revealed that AMD-specific branch prediction code will be optimized in Windows 11 version 24H2, which is expected to ship next month. Tom Warren at The Verge It's wild how seemingly small things can have a major impact on the launch of a new processor (or GPU) line these days. The main culprit behind the disappointing benchmarks upon launch of the Ryzen 9000 series turned out to be the 9000's new branch prediction method, the code for which is not yet available in Windows. However, AMD ran their tests in "Admin mode", which yielded results as if such code was actually present. AMD has said the branch prediction code needed to unlock the full potential of Ryzen 9000 chips in Windows and yield benchmark results comparable to AMD's own internal tests and PR promises will be released next month as part of Windows 11, version 24H2 in preview through the Windows Insider Program (Release Preview Channel - Build 26100) or by downloading the ISO here. AMD claims this update will benefit users of previous Zen 4 and Zen 3 processors as well, but to a lesser degree. No word on whether this issue affects Linux users in any way.
Microsoft update breaks GRUB on dual-boot systems
Ah, secure boot, the bane of many running anything other than Windows. While it's already been found to be utterly useless by now, it's still a requirement for Windows 11, and ever since it became part of PCs about a decade or so ago, it's been causing headaches for people who don't use Windows. Yesterday, Microsoft released a patch for a two-year-old vulnerability in the GRUB bootloader, and while the company claimed it would only be installed on single-boot Windows machines, that clearly wasn't the case as right after its release, people dual-booting Linux and Windows found their Linux installations unbootable. Tuesday's update left dual-boot devices - meaning those configured to run both Windows and Linux - no longer able to boot into the latter when Secure Boot was enforced. When users tried to load Linux, they received the message: "Verifying shim SBAT data failed: Security Policy Violation. Something has gone seriously wrong: SBAT self-check failed: Security Policy Violation." Almost immediately support and discussion forums lit up with reports of the failure. Dan Goodin at Ars Technica The fix is both easy and hilarious: disable secure boot, and you're good to go. You can also get a bit more technical and remove the SBAT installed by this update, but while that will allow you to keep booting with secure boot enabled, it will leave you vulnerable to the issue the SBAT was supposed to fix. The efficacy of secure boot in home environments is debatable, at best, and while I'm not going to advise anyone to just turn it off and forget about it, I think most OSNews readers can make an informed decision about secure boot by themselves. If you're using corporate machines managed by your employer's IT department, you obviously need to refer to them. Microsoft itself has not yet commented on this issue, and is not responding to questions from press outlets, so we're currently in the dark about how such a game-breaking update got out in the wild. Regardless, this once again shows just how annoying secure boot is. In many cases, the boot problems people trying out Linux run into are caused by secure boot, but of course, the blame is placed squarely on Linux, and not on secure boot itself being a hot mess.
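For those who would rather keep secure boot enabled and remove the offending SBAT entry instead, the workaround that circulated at the time boiled down to resetting shim's SBAT policy from a booted Linux system and rebooting. The sketch below wraps that in Python purely for illustration; it assumes root privileges and a mokutil/shim version that actually supports the --set-sbat-policy option, so treat it as an assumption to check against your distribution's own guidance rather than a definitive fix.

```python
# Hedged sketch of the circulated SBAT workaround, assuming mokutil supports
# --set-sbat-policy (newer shim releases). Run as root from Linux, typically
# with secure boot temporarily disabled so the machine can boot at all.
import subprocess

def reset_sbat_policy() -> None:
    # Queue deletion of the locally applied SBAT revocations; shim applies
    # the change on the next boot.
    subprocess.run(["mokutil", "--set-sbat-policy", "delete"], check=True)
    print("SBAT policy reset queued; reboot to apply.")

if __name__ == "__main__":
    reset_sbat_policy()
```

As noted above, doing this leaves you exposed to the GRUB vulnerability the revocation was meant to address, so it's a trade-off, not a clean fix.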
Automating ZFS snapshots for peace of mind
One feature I couldn't live without anymore is snapshots. As system administrators, we often find ourselves in situations where we've made a mistake, need to revert to a previous state, or need access to a log that has been rotated and disappeared. Since I started using ZFS, all of this has become incredibly simple, and I feel much more at ease when making any modifications. However, since I don't always remember to create a manual snapshot before starting to work, I use an automatic snapshot system. For this type of snapshot, I use the excellent zfs-autobackup tool - which I also use for backups. The goal is to have a single, flexible, and configurable tool without having to learn different syntaxes. Stefano Marinelli I'm always a little sad about the fact that the kind of advanced features modern file systems like ZFS, btrfs, and others offer are so inaccessible to mere desktop users like myself. While I understand they're primarily designed for server use, they're still making their way to desktops - my Fedora installations all default to btrfs - and I'd love to be able to make use of their advanced features straight from within KDE (or GNOME or whatever it is you use). Of course, that's neither here nor there for the article at hand, which will be quite useful for people administering FreeBSD and/or Linux systems, and who would like to get the most out of ZFS by automating some of its functionality.
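For readers who haven't played with ZFS snapshots yet, here's a minimal sketch of the kind of automation being described. It is not zfs-autobackup, just a bare-bones illustration that shells out to the standard zfs commands; the dataset name tank/data, the snapshot prefix, and the retention count are placeholders.

```python
# Minimal automatic-snapshot sketch (not zfs-autobackup): take a timestamped
# snapshot of one dataset and prune the oldest automatic snapshots beyond a
# retention limit. DATASET, PREFIX, and KEEP are placeholder values.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"
PREFIX = "auto"
KEEP = 24  # number of automatic snapshots to retain

def zfs(*args: str) -> str:
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

def take_snapshot() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    zfs("snapshot", f"{DATASET}@{PREFIX}-{stamp}")

def prune_snapshots() -> None:
    # List this dataset's snapshots, oldest first, and keep only the newest KEEP.
    names = zfs("list", "-t", "snapshot", "-H", "-o", "name",
                "-s", "creation", "-d", "1", DATASET).splitlines()
    ours = [n for n in names if n.split("@", 1)[1].startswith(PREFIX + "-")]
    for old in ours[:-KEEP]:
        zfs("destroy", old)

if __name__ == "__main__":
    take_snapshot()
    prune_snapshots()
```

A real setup would run something like this from cron or a systemd timer; zfs-autobackup layers policy, holds, and replication on top of the same primitives.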
So you want to build an embedded Linux system?
This article is targeted at embedded engineers who are familiar with microcontrollers but not with microprocessors or Linux, so I wanted to put together something with a quick primer on why you'd want to run embedded Linux, a broad overview of what's involved in designing around application processors, and then a dive into some specific parts you should check out - and others you should avoid - for entry-level embedded Linux systems. Jay Carlson Quite the detailed guide about embedded Linux.
FreeBSD and AMD collaborating on FreeBSD IOMMU driver
The FreeBSD project has published its latest quarterly status report, and there's a lot in there. The most prominent effort listed in the report is a close collaboration between FreeBSD and AMD on an IOMMU driver for AMD's server processors. Work continued on a joint project between Advanced Micro Devices (AMD) and The FreeBSD Foundation to develop a complete FreeBSD AMD IOMMU driver. This work will allow FreeBSD to fully support greater than 256 cores with features such as CPU mapping and will also include bhyve integration. Konstantin Belousov has been working on various parts of the project, including driver attachment, register definitions, an ACPI table parser, and utility functions. Two key components that need to be completed are context handling, which is mostly a generalization of Intel DMAR code, and page table creation. After this, the AMD driver's enable bit can be turned on for testing. FreeBSD status report page It's great to see AMD and FreeBSD working together like this, and it highlights that FreeBSD is a serious player in the server space. Other things mentioned in the status report are continued work in improving the audio experience, wireless networking, RISC-V support, OpenZFS, and more. Through the work of Tom Jones, FreeBSD is also getting the Vector Packet Processor, a userspace networking stack that delivers fast packet processing suitable for software-defined networking and network function virtualization applications. Of course, this is just a selection, and there's way more listed in the report. I would also like to highlight the ongoing, neverending work of improving the experience of using KDE on FreeBSD. The FreeBSD KDE team notes that due to the massive release of KDE 6, and the associated flurry of follow-up releases, requiring a lot of work and testing, KDE on FreeBSD still hasn't fully caught up with the latest releases. KDE Frameworks is currently at 6.3.0 (6.5.0 is current), KDE Plasma Desktop is currently 6.0.3 (6.1.4 is current), and KDE Gear 6 hasn't been ported at all yet. In other words, while progress is being made, it's clear the team could use a hand, too.
Google to websites: let us train our AI on your content, or we’ll remove you from Google Search
Google now displays convenient artificial intelligence-based answers at the top of its search pages - meaning users may never click through to the websites whose data is being used to power those results. But many site owners say they can't afford to block Google's AI from summarizing their content. That's because the Google tool that sifts through web content to come up with its AI answers is the same one that keeps track of web pages for search results, according to publishers. Blocking Alphabet Inc.'s Google the way sites have blocked some of its AI competitors would also hamper a site's ability to be discovered online. Julia Love and Davey Alba OSNews still relies partially on advertising right now, and thus Google continues to play a role in our survival. You can help by reducing our dependency on Google by supporting us through Patreon, making donations using Ko-Fi, or buying our merch. The more of you who support us, the closer to reality the dream of an ad-free OSNews not dependent on Google becomes. OSNews is my sole source of income, and if that stops working out and I'm forced to find another job, OSNews will cease to exist. Due to Google's utter dominance on the internet, websites and publishers have no choice but to accept whatever Google decides to do. Not being indexed by the most popular search engine on the web with like 90% market share is a death sentence, but feeding Google's machine learning algorithms will be a slow death by a thousand cuts, too, for many publishers. The more content is fed to Google's AI tools, the better they'll get at simply copying your style to a T, and the better they'll get at showing just the little paragraph or line that matters as a Google result, meaning you won't have to visit the site in question. It's not great for Google in the long term, either. Google Search relies on humans making content for people to find; if there's no more quality content for people to find, people aren't going to be using Google as much anymore. In what is typical of the search giant, it seems they're not really looking ahead very far into the future, chasing short-term profits riding the AI hype train, while long-term profits take a back seat. Maybe I'm just too stupid to understand the Silicon Valley galaxy brain business boys, but to a simple man like me it seems rather stupid to starve the very websites, publishers, authors, and so on that your main product relies on to be useful in the first place. I honestly don't even know how much of OSNews' traffic comes from Google, so I don't know how much it would even affect us were we to tell Google's crawlers to get bent. My guess is that search traffic is still a sizable portion of our traffic, so I'm definitely not going to gamble the future of OSNews. Luckily we're quite small and I doubt many people are interested in AI generating my writing style and the topics I cover anyway, so I don't think I have to worry as much as some of the larger tech websites do.
MenuetOS gets basic X server
There's been a few new releases since the last time we talked about MenuetOS, back in March of this year when version 1.50.00 was released, so I figured it was time to take a look at what the project's been up to. And just in case you don't remember - MenuetOS is a 64-bit operating system written in assembly that fits on a single 1.44 MB floppy disk. There's also a 32-bit version that's no longer being developed - I think. Weirdly enough, the 1.50.00 release is no longer listed, but recent changes include Mplayer being part of the disk image, further updates to the included X-Window Server, the usual bugfixes, and a few more things. The X server is quite cool - with it, you can run, say, Firefox on your Linux installation, but have the MenuetOS X server render the UI. In addition, thanks to MenuetOS now including a basic POSIX layer, it's possible to create basic applications that run unmodified on both MenuetOS and a Linux distribution like Ubuntu. Neat.
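The "run Firefox on Linux, render it on MenuetOS" trick is plain X11 network transparency: an X client draws on whatever display its DISPLAY variable points at. A hedged sketch of what that looks like from the Linux side, assuming the MenuetOS machine's X server is reachable over TCP at a placeholder address and accepts the connection:

```python
# Illustration of X11 network transparency: launch a Linux application whose
# UI is rendered by a remote X server (hypothetically, MenuetOS's). The
# address 192.168.1.50 and display :0 are placeholders, and this assumes the
# remote X server listens on TCP and permits the connection.
import os
import subprocess

env = dict(os.environ, DISPLAY="192.168.1.50:0")
subprocess.run(["firefox"], env=env, check=True)
```

Whether the MenuetOS X server accepts TCP clients exactly this way is something I haven't verified; the point is simply that nothing exotic is needed on the client side.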
Parents rage against new fee to keep their smart bassinets smart
But last month, that hand-me-down network was dealt a blow when Happiest Baby, the company that makes Snoo, began charging for access to some of the bassinet's premium features - features that used to be available to Snoo users indefinitely, at no extra cost. Now, access to the app needed to lock in the bassinet's rocking level, to track the baby's sleep and to use the so-called weaning mode, among other features, will cost parents $20 a month. The change has angered secondhand users and original buyers alike. On Reddit, the new subscription model has prompted review bombs, group brainstorms for collective action and detailed instructions for outraged parents seeking recourse. Some have taken to filing complaints with the Federal Trade Commission, Better Business Bureau and state-run consumer protection offices. Sandra E. Garcia and Rachel Sherman at The New York Times My wife had our first baby a little over three years ago, and our second one a little over a year ago, and let me tell you - the amount of "smart" and "connected" stuff they sell targeted at babies and young parents is insane. The only "smart" thing we got was a camera that pipes sound to my phone and detects movement, and sends a notification to our phone so we can take a peek and see if everything's alright. Our oldest has outgrown it, and our youngest doesn't really need it, so it's just being useless at the moment, fitted to the wall. It definitely improved our nights, though, since it made sure we would never have to get up for no reason. Other than that, we are very analog. I had heard of "smart" bassinets, but we didn't think we needed one. That's just our decision, though, and you can rightfully argue that using a camera and open microphone is not that different. All of these new "smart" tools are just that, tools, and can be useful and make your life just a little bit easier, and there's absolutely nothing wrong with that. Being a parent of a newborn is hard enough as it is without outsiders judging you and pressuring you into doing things you don't think are right, especially since you know your own newborn - and yourself - better than some random outsiders do. The Snoo is one of the more popular smart bassinets, apparently, and at an entry price of 1700 dollars it's bonkers expensive. The thing is, though, as a new parent you know a lot of the stuff you buy has a relatively limited shelf life - they grow so fast - so you kind of take into account that you'll be selling some of the more expensive stuff down the line to recoup some of the costs. We have an insanely expensive stroller from a Norwegian brand, because it needed to be able to handle the Arctic climate and its endless snow, including specialised wheels and tires for trudging through the snow. The resale value of these is quite decent, so we know we'll get a decent part of the initial cost back, especially since we take extremely good care of it. And this is where the company that makes the Snoo, Happiest Baby, decided to screw over its customers. The company clearly realised the theoretical loss of revenue from the used market, and came up with this subscription model to lock in some of that theoretical revenue. However, since Happiest Baby always promised all of its features would work perpetually, this came as a huge shock both to buyers of used Snoo bassinets and to parents intending to sell their Snoo, who now see their resale value plummet. The reasoning behind the sudden subscription model given by the company is absolutely wild.
Harvey Karp, the founder and chief executive of Happiest Baby, defended the move as a business necessity. "We don't have any dollar from the government, we don't have a dollar from a university," said Dr. Karp, a former pediatrician who created the Snoo after becoming frustrated with the lack of progress in reducing rates of sudden infant death syndrome, or SIDS. "We have to sell products and bring in revenue to be able to get to this goal." That goal, according to Dr. Karp, is that "everyone will have access to this, and it will be paid for not by your friend, but it will be paid for by your corporation, the government or your insurance company," the way breast pumps are often covered. He also pointed to Happiest Baby's efforts to make the Snoo available "in the inner city and in rural areas." For many parents, however, paying into that ideal is of little comfort to their bottom line. Sandra E. Garcia and Rachel Sherman at The New York Times He's basically stating that because he doesn't get free money from the government, universities, customers' employers, or insurance companies, he can't make any profit off the Snoo products. He's arguing that a $1700 bassinet with some sensors and chips is not a profitable product, which sounds absolutely like a flat-out lie to me. If he really can't make a profit with such a price for such a product, there's clearly something else wrong with the way the company is spending its money. Anyone who has ever watched Last Week Tonight with John Oliver knows just how many healthcare-related markets and businesses in the United States rely almost exclusively on government money through programs like Medicare and Medicaid, leading to an insane amount of scams and wasted money because there aren't even remotely enough inspectors and related personnel to ensure such money is effectively spent, made worse by the fact such tasks are delegated to the states. This whole Snoo thing almost makes me think Karp intended to profit off these often nebulous government money streams, but somehow failed to do so. I feel for the parents, though. They bought a product that didn't include a hint of a subscription or paywalled features, and now they have to pay $20 a month for features they were told would remain available at no extra cost.
Installing FreeBSD with OpenZFS via the Linux rescue system
Hetzner no longer offers a FreeBSD rescue system but it is possible to install and manage FreeBSD with OpenZFS from the Linux rescue system on a dedicated server with UEFI boot. The installation is done on a mirrored OpenZFS pool consisting of two drives. Martin Matuska Not much to add here - Hetzner is a popular hosting and server provider, and if you want to use FreeBSD on their machines, here's how.
Google threatened tech influencers unless they ‘preferred’ the Pixel
The tech review world has been full of murky deals between companies and influencers for years, but it appears Google finally crossed a line with the Pixel 9. The company's invite-only Team Pixel program - which seeds Pixel products to influencers before public availability - stipulated that participating influencers were not allowed to feature Pixel products alongside competitors, and those who showed a preference for competing phones risked being kicked out of the program. For those hoping to break into the world of tech reviews, the new terms meant having to choose between keeping access or keeping their integrity. Victoria Song at The Verge Even though this ended up being organised and run by a third party, and Google addressed it immediately, it doesn't surprise me at all that stuff like this happens. Anyone who has spent any time on tech YouTube, popular tech news sites, and content farms knows full well just how... Odd a lot of reviews and videos often feel. This is because a lot of review programs subtly - or not so subtly - imply that if you're not positive enough, you're going to be kicked out and won't get the next batch of cool products to review, thereby harming your channel or website. Apple is a great example of a company that uses the threat of not getting review samples, event invites, and similar press benefits to gain positive media attention. I myself was kicked out of Apple's review program and press pool way back during the Intel transition, because I mentioned the new Intel MacBook Pro got uncomfortably hot, and Apple really didn't like that. They tried to pressure me to change the wording, but I didn't budge, and consequently, that was the end of me getting any review items or press invites. I only ever accepted one Apple press invite, by the way, to their headquarters in The Netherlands, which was in Bunnik, of all places. Not much of value was lost without Apple press invites. Nobody wants to go to Bunnik. With every review of a loaned item on OSNews, you can be 100% sure there are no shenanigans, because I simply do not let anyone influence me. OSNews doesn't live or die by getting reviews of the latest and greatest tech, so I have no incentive to deal with pushy, manipulative companies or PR people. I refused to budge to Apple 17 years ago, during my first year at OSNews, when I was in my early 20s - and I've never budged since, either. Now look at everyone getting press access from Apple, and think to yourself - would any of them tell Apple to get bent? That being said, I'd love to review the new Google Pixel 9 Pro Fold, if only to make fun of that horrid name. Hit me up, Google.
Single-command Windows 11 system requirements bypass trick for unsupported PCs blocked
In October last year, we covered a very simple bypass trick that involved just a single command when running the Windows 11 Setup. While this bypass got popular in the tech community at the time as a result of the media coverage from Neowin as well as others, it was actually something even older. To use this, all a user had to do was add "/product server" when running the setup, and Windows would just skip the hardware requirements check entirely. As it turns out, Microsoft has blocked this bypass method on the latest Canary build 27686, as discovered by X user and tech enthusiast Bob Pony. When trying to use the Server trick now, the hardware requirements check is not bypassed. Sayan Sen It's such an own goal to limit Windows 11 as much as Microsoft is doing. Windows 11 runs pretty much identically, performance-wise, to Windows 10 on the same hardware, so there's no real reason for the restrictions other than to enforce the various security features through TPMs and the like. The end result is that people simply aren't upgrading to Windows 11 - not only because Windows 10 is working just fine for them, but also because even if they want to upgrade, they often can't. Most people don't just buy a brand new PC because a new version of Windows happens to be available. There's been a variety of tricks and methods to circumvent the various minimum specifications checks Microsoft added to the regular consumer versions of Windows, and much like with the activation systems of yore, Microsoft is now engaging in a game of whack-a-mole where as soon as it kills one method, ten more pop up to take its place. There's a whole cottage industry of methods, tools, registry edits, and much more, spread out across the most untrustworthy-looking content farms you can find on the web, all of which could've been avoided if Microsoft just offered consumers the choice of disabling these restrictions, accompanied by a disclaimer. So Microsoft is now in the unfortunate situation where most of its Windows users are still using Windows 10, yet the end of Windows 10's support is coming up next year. Either Microsoft extends this date by at least another five years to catch the wave of 'natural' PC upgrades to a point where Windows 10 is a minority, or it's going to have to loosen some of the restrictions to give more people the ability to upgrade. If they don't, they're going to be in a world of hurt with security issues and 0-days affecting the vast majority of Windows users.
Popular AI “nudify” sites sued amid shocking rise in victims globally
San Francisco's city attorney David Chiu is suing to shut down 16 of the most popular websites and apps allowing users to "nudify" or "undress" photos of mostly women and girls who have been increasingly harassed and exploited by bad actors online. These sites, Chiu's suit claimed, are "intentionally" designed "to create fake, nude images of women and girls without their consent," boasting that any users can "upload any photo to see anyone naked" by using tech that realistically swaps the faces of real victims onto AI-generated explicit images. Ashley Belanger at Ars Technica This is an incredibly uncomfortable topic to talk about, but with the advent of ML and AI making it so incredibly easy to do this, it's only going to get more popular. The ease with which you can generate a fake nude image of someone is completely and utterly out of whack with the permanent damage it can do to the person involved - infinitely so when it involves minors, of course - and with these technologies getting better by the day, it's only going to get worse. So, how do you deal with this? I have no idea. I don't think anyone has any idea. I'm pretty sure all of us would like to just have a magic ban button to remove this filth from the web, but we know such buttons don't exist, and trying to blast this nonsense out of existence is a game of digital whack-a-mole where there are millions of moles and only one tiny hammer that explodes after one use. It's just not going to work. The best we can hope for is to get a few of the people responsible behind bars to send a message and create some deterrent effect, but how much that would help is debatable, at best. As a side note, I don't want to hang this up on AI and ML alone. People - men - were doing this to other people - women - even before the current crop of AI and ML tools, using Photoshop and similar tools, but of course it takes a lot more work to do it manually. I don't think we should focus too much on the role ML and AI plays, and focus more on finding real solutions - no matter how hard, or impossible, that's going to be.
The Apple IIGS megahertz myth
A story you hear all the time about the Apple IIGS is that Apple purposefully underclocked or limited its processor in some way to protect the nascent Macintosh, and ensure the IIGS, which could build upon the vast installed base of Apple II computers, would not outcompete the Macintosh. I, too, have always assumed this was a real story - or at least, a story with a solid kernel of truth - but Dan Vincent decided to actually properly research this claim, and his findings tell an entirely different story. His research is excellent - and must have been incredibly time-consuming - and his findings paint a much different story than Apple intentionally holding the IIGS back. The actual issue lay with the production of the 65816 processor that formed the beating heart of the IIGS. It turns out that the 65816 had serious problems with yields, was incredibly difficult to scale, and had a ton of bugs and issues when running at higher speeds. What a ride, huh? Thanks for making it this far down a fifty-plus minute rabbit hole. I can't claim that this is the final take on the subject - so many of the players aren't on the record, but I'm pretty confident in saying that Apple did not artificially limit the IIGS' clock speed during its development for marketing purposes. Now, I'm not a fool - I know Apple didn't push the IIGS as hard as it could, and it was very much neglected towards the end of its run. If the REP/SEP flaws hadn't existed and GTE could've shipped stable 4MHz chips in volume, I'm sure Apple would've clocked them as fast as possible in 1986. Dan Vincent Promise me you'll read this article before the weekend's over. It's a long one, but it's well-written and a joy to read. You'll also run into Tony Fadell - the creator of the iPod - somewhere in the story, as well as a public shouting match, and an almost fistfight, between the creator of the 65816 and Jean-Louis Gassee during San Francisco AppleFest in September 1989, right after Gassee placed the blame for the lack of a faster IIGS on the 65816's design. This is an evergreen article.
Windows can now create 2TB FAT32 file systems
Even though FAT32 supports disk sizes of up to 2TB, and even though Windows can read FAT32 file systems of up to 2TB, Windows can't actually create them. The maximum file system size Windows can create with FAT32 is 32GB, a limitation that dates back to Windows 95 and has never been changed since. It seems Microsoft is finally changing this with the latest Insider Preview build of Windows 11, as the format command can now finally create FAT32 file systems of up to 2TB. When formatting disks from the command line using the format command, we've increased the FAT32 size limit from 32GB to 2TB. Amanda Langowski and Brandon LeBlanc Sadly, this only works through the format command; it's not yet reflected in the graphical user interface, which is just so typically Microsoft. Of course, most of us will be using exFAT at this point for tasks that require an interoperable file system, but not every device accepts exFAT properly, and even those that do sometimes have issues with exFAT that are not present when using FAT32. A more interesting new addition in this preview build is the Windows Sandbox Client Preview. This build includes the new Windows Sandbox Client Preview that is now updated via the Microsoft Store. As part of this preview, we're introducing runtime clipboard redirection, audio/video input control, and the ability to share folders with the host at runtime. You can access these via the new "..." icon at the upper right on the app. Additionally, this preview includes a super early version of command line support (commands may change over time). You can use the 'wsb.exe -help' command for more information. Amanda Langowski and Brandon LeBlanc Windows Sandbox is a pretty cool feature that provides a lightweight desktop environment in which you can run applications entirely sandboxed, separate from your actual Windows installation. Changes and files made in the sandbox do not persist, unless the sandbox is shut down from within the sandbox itself. There's a whole variety of uses this could be good for, and having it integrated into Windows is awesome. Windows Sandbox is available in Windows Pro or Enterprise - not Home - and is quite easy to use. Open up its window, copy/paste an executable to the sandbox, and run it inside the sandbox. As said, after closing the sandbox, all your changes will be lost. That process is still a bit clunky, but with a bit more work it should be possible for Microsoft to smooth this out, and, say, add an option in the right-click menu to just launch any executable in the sandbox that way.
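Back to the FAT32 limits for a moment: the 2TB ceiling isn't arbitrary. With the 512-byte sectors the relevant on-disk structures assume, a 32-bit sector count tops out at 2 TiB, while the old 32GB cap was purely a policy choice baked into Windows' format tool. A quick back-of-the-envelope check, assuming 512-byte sectors:

```python
# Where the FAT32 volume-size ceiling comes from, assuming 512-byte sectors:
# a 32-bit sector count can address at most 2**32 sectors.
SECTOR_SIZE = 512          # bytes, the classic sector size assumed here
MAX_SECTORS = 2 ** 32      # 32-bit total-sector field

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)                          # 2199023255552 bytes
print(max_bytes / 2**40, "TiB")           # 2.0 TiB
print("old format cap:", 32 * 2**30)      # 32 GiB, an artificial limit
```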
Cartridge software for the Psion Series 3
Similar to a less popular handheld of the era, the Game Boy, the Psion used a proprietary cartridge format for distributing commercial software. Psion sold blank cartridges, flashing hardware and duplicators to software houses, as well as releasing a number of titles under their own license. There's a wide range of commercial software available for the Series 3 family, and only some of it was ported to the Series 5 (I really wish Scrabble had been released on Series 5). The range of software available was significant. Cartridges unlocked the Psion 3's ability to play a large number of games, provide phrase book translation to a number of languages (Berlitz Interpreter), route plan your car journeys (Microsoft Autoroute), look up the best wines for this year (Hugh Johnson's Wine Guide) or build your organisation chart with Purple Software's OrgChart. Kian Ryan I have a Psion 3, but the only cartridges I have are empty ones you can use for personal storage. I've always wanted to buy a selection of cartridges on eBay, but sadly, my Psion 3 died due to me forgetting to remove the batteries when immigrating to Sweden, something I only discovered like five years later. I was smart enough to remove all batteries from every single device in my massive collection, but I guess the Psion 3 slipped through my fingers. Anyway, this article is a great look at some of the cartridges that existed for the Psion 3, and it's really making me want to replace my broken Psion 3 and buy one that comes with a set of cartridges. There's something really attractive about how the Psion 3's EPOC operating system worked, and the third party programs look like so much fun to explore and use.
US judge says he’ll ‘tear the barriers down’ on Google’s app store monopoly
Last week wasn't the first time Google was declared a monopoly - eight months ago, in the Epic vs. Google case, Google's control over the Play Store was also declared monopolistic. The judge, Google, and Epic have been arguing ever since over possible remedies, and in two weeks' time, we'll know what the judge is going to demand of Google. Eight months after a federal jury unanimously decided that Google's Android app store is an illegal monopoly in Epic v. Google, Donato held his final hearing on remedies today. While we don't yet know what will happen, he repeatedly shut down any suggestion that Google shouldn't have to open up its store to rival stores, that it'd be too much work or cost too much, or that the proposed remedies go too far. "We're going to tear the barriers down, it's just the way it's going to happen," said Donato. "The world that exists today is the product of monopolistic conduct. That world is changing." Donato will issue his final ruling in a little over two weeks. Sean Hollister at The Verge I was a bit confused by what "opening up" the Play Store really meant, since Android is already quite friendly to installing whatever other applications and application stores you want, but what they're talking about here is allowing rival application stores inside the Play Store. This way, instead of downloading, say, the F-Droid APK from the web and installing it, you could just install the F-Droid application store straight from within the Play Store. Epic wants the judge to take it a step further and force Google to also give rival application stores access to every Play Store application, allowing them to take ownership of said applications, I guess? I'm not entirely sure how that would work, considering I doubt there'd be much overlap between the offerings of the various stores. The prospect of micromanaging where every application gets its updates from seems like a lot of busywork, but at the same time, it's the kind of fine-grained control power users would really enjoy. A point of contention is whether or not Google would have to perform human review on every application store and their applications inside the Play Store, and even if Google should have any form of control at all. What's interesting about all these court cases in the United States is how closely the arguments and proposed remedies align with the European Digital Markets Act. Where the EU made a set of pretty clear and straightforward rules for megacorporations to follow, thereby creating a level playing field for all of them, the US seems to want to endlessly take each offending company to court, which feels quite messy, time-consuming, and arbitrary, especially when medieval nonsense like jury trials are involved. This is probably a result of the US using common law, whereas the EU uses civil (Napoleonic) law, but it's interesting nonetheless.
US said to consider a breakup of Google to address search monopoly
While a US judge ruled last week that Google is a monopoly, and that it has abused its monopoly position, potential remedies were not part of the case up until this point. Now, though, the US Department of Justice is mulling over potential remedies, and it seems everything is on the table - down to breaking Google up. Justice Department officials are considering what remedies to ask a federal judge to order against the search giant, said three people with knowledge of the deliberations involving the agency and state attorneys general who helped to bring the case. They are discussing various proposals, including breaking off parts of Google, such as its Chrome browser or Android smartphone operating system, two of the people said. Other scenarios under consideration include forcing Google to make its data available to rivals, or mandating that it abandon deals that made its search engine the default option on devices like the iPhone, said the people, who declined to be identified because the process is confidential. The government is meeting with other companies and experts to discuss their proposals for limiting Google's power, the people said. David McCabe and Nico Grant The United States has a long history of breaking companies up, but the real question here is how, exactly, you would break Google up. Google makes virtually all of its money with its advertising business, and products like Chrome or Android in and of themselves make little to no money - they probably only cost Google money. Their real purpose is to direct people to using Google Search, which is where it shows the ads that are its real money maker. In other words, what would happen if you were to split off Chrome or Android? How are these products supposed to make money and survive, financially? I don't understand entirely how Google's advertising business spaghetti is organised, but it seems to me that's where any talk of splitting Google up to create breathing room in the market should be focusing. Breaking that core business up into several independent online advertising companies, which would suddenly have to compete with each other as well as with others on a more equal footing, would be much better for consumers than turning Chrome or Android into unsustainable businesses. In an advertising market not dominated by one giant player, there's far more room and opportunity for smaller, perhaps more ethical companies to spring up and survive. Perhaps I'm wrong, and maybe there is life in a business that contains everything Google does except for online advertising, but I feel like said new company would not survive in a market where it has to contend with other abusive heavyweights like Facebook and Apple.
Valve confirms it’ll support the ROG Ally with its Steam Deck operating system
Way back, Valve had the intention of making gaming on Linux a reality by allowing anyone to make PCs running SteamOS, with the goal of making Steam less dependent on the whims of Windows. This effort failed and fizzled out, but the idea clearly never died inside Valve, because ten years later the Steam Deck would take the market by storm, spawning a whole slew of copycats running unoptimised, difficult to use Windows installations. There have been hints Valve was toying with the idea of releasing official SteamOS builds for devices other than the Steam Deck, and the company has now confirmed these rumours. The company's long said it plans to let other companies use SteamOS, too - and that means explicitly supporting the rival Asus ROG Ally gaming handheld, Valve designer Lawrence Yang now confirms to The Verge. Sean Hollister at The Verge This is great news for the market, as some of these Steam Deck competitors are interesting from a specifications perspective - although pricing sure goes up with that - but running Windows on a small handheld gaming device is a chore, and relying on OEMs to make "gaming overlays" to make Windows at least somewhat usable is not exactly something you want to have to rely on. SteamOS is clearly lightyears ahead of Windows in this department, so having non-Steam Deck handheld gaming PCs officially supported by Valve is great news. We're still a long way off, though, says Valve, and the same applies to Valve's plans to release a generic SteamOS build for any old random PC. That effort, too, is making steady progress, but isn't anywhere near ready. Of course, there's a variety of unofficial SteamOS variants available, so you're not entirely out of luck right now. On top of that, there are things like Bazzite, which offer a SteamOS-like experience, but using the Atomic variants of Fedora.
Gentoo Linux drops IA-64 (Itanium) support
Following the removal of IA-64 (Itanium) support in the Linux kernel and glibc, and subsequent discussions on our mailing list, as well as a vote by the Gentoo Council, Gentoo will discontinue all ia64 profiles and keywords. The primary reason for this decision is the inability of the Gentoo IA-64 team to support this architecture without kernel support, glibc support, and a functional development box (or even a well-established emulator). In addition, there have been only very few users interested in this type of hardware. Gentoo website Et tu, Gentoo? Linux removing Itanium I can understand; the Freemason corporate overlords who pull the strings of Linux kernel development are terrified of just how powerful Itanium really is. GCC removing Itanium makes sense too, as the unwashed communists at the FSF just don't understand the capitalist greatness that is Itanium. But Gentoo? Now I know how Jesus felt when Judas betrayed him; how Caesar felt when he gazed upon Brutus' face. I feel empty inside.
Haiku gets tons of performance fixes, new FAT driver from FreeBSD, and a lot more
There's a new Haiku activity report, and it's a big one. A lot of bottlenecks and performance issues were addressed recently, and the list is too long and detailed for me to cover everything. Haiku developer Waddlesplash does a great job in this report detailing the various things he worked on to solve some of these bottlenecks and performance issues, and they cover everything from speeding up the readv and writev I/O calls and fixing an issue with the kernel's device_manager lock to improving ELF symbol lookup by implementing the DT_GNU_HASH hash table, and much more. As part of working on these performance issues, Waddlesplash also fixed up Haiku's CPU time profiler. Haiku has a built-in CPU time profiler (just called profile). Unfortunately, it's been rather broken for years, regularly outputting data that was either empty or just didn't make any sense. In order to use it to try and track down some of the other bottlenecks, I spent a bunch of time fixing various bugs in it, as well as the debugger support code that it relies on to function, including stack trace collection, buffer flushing, symbol lookup, scheduler callbacks, image load reporting, and more. I also implemented userspace-only profiling (ignoring kernel stack frames entirely), fixed some output buffer sizing issues, and fixed a race condition in thread resumption that also affected strace. While it isn't perfect, it's much better than before, and can now be used to profile applications and the kernel to see where CPU time is being spent; and notably it now checks the thread's CPU time counters to detect if it "missed" profiling ticks, and if so how many. Haiku's website Beyond these performance fixes, there's a ton of other improvements and fixes, from better handling of HiDPI displays in HaikuDepot and improvements to CharacterMap to fixed subtitles in MediaPlayer, and tons more. Of course, there's the bevy of driver fixes, including a major overhaul of the FAT driver, which was still largely based on old, original BeOS code because Be used the FAT driver as sample code. Haiku's FAT driver is now based on FreeBSD's FAT driver, which addressed a whole slew of issues. This isn't even all of it - there's so much more in this month's activity report, so definitely head on over and give it a read.
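For context on the DT_GNU_HASH item mentioned above: the GNU hash section speeds up dynamic symbol lookup by hashing each symbol name with a tiny multiplicative hash, using the result to probe a Bloom filter and pick a bucket, so most failed lookups bail out early. The hash function itself is trivial; here's a sketch of it in Python for illustration (the real implementations, Haiku's runtime loader included, are of course written in C/C++):

```python
# The GNU symbol hash used by DT_GNU_HASH sections: h = h*33 + c, starting at
# 5381, truncated to 32 bits. Dynamic linkers use the value to probe a Bloom
# filter and to pick a bucket (hash % nbuckets) before walking hash chains.
def gnu_hash(name: bytes) -> int:
    h = 5381
    for c in name:
        h = (h * 33 + c) & 0xFFFFFFFF
    return h

if __name__ == "__main__":
    for sym in (b"printf", b"_ZN5BView4DrawE5BRect"):
        print(sym.decode(), hex(gnu_hash(sym)))
```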
SpecOS: an x86_64 OS kernel from scratch
It's been busy in the world of hobby and teaching/learning operating systems these past few months, and today we've got another one - SpecOS. SpecOS is a 64 bit operating system kernel for x86-64 processors, still in quite early stages, written in (questionable quality) C. It is (not very) powerful. This used to be 32 bit, but has been transferred to a 64 bit operating system. It uses a monolithic kernel, because I like having everything in one place. This may take some inspiration from other operating systems, but it is not UNIX based. SpecOS GitHub page It's got the basics covered with PS/2 keyboard and VGA support, a real-time clock driver, a basic hard disk driver, and physical and virtual memory management, among other things. We're clearly looking at a hobby project, and the author is very clear about that. A virtual machine is highly advised, as running it on real hardware is... Well, you're on your own, basically.
Serena: an experimental operating system for 32bit Amiga computers
Serena is an experimental operating system based on modern design principles with support for pervasive preemptive concurrency and multiple users. The kernel is object-oriented and designed to be cross-platform and future proof. It runs on Amiga systems with a 68030 or better CPU installed. One aspect that sets it apart from traditional threading-based OSs is that it is purely built around dispatch queues somewhat similar to Apple's Grand Central Dispatch. There is no support for creating threads in user space nor in kernel space. Instead the kernel implements a virtual processor concept where it dynamically manages a pool of virtual processors. The size of the pool is automatically adjusted based on the needs of the dispatch queues and virtual processors are assigned to processes as needed. All kernel and user space concurrency is achieved by creating dispatch queues and by submitting work items to dispatch queues. Work items are simply closures (a function with associated state) from the viewpoint of the user. Serena GitHub page Serena is a remarkably advanced concept, and since it runs in an Amiga emulator just fine, there's no need for real hardware, which is becoming ever harder to come by. It has its own unique file system, the executable file format is Atari ST GemDos (for now), and it has its own shell. It comes with a variety of drivers and services for your basic needs like keyboard and mouse input, a basic graphics driver, a VT52 and VT100 series compatible interactive console, a floppy disk driver, and much more. Anyone can load up WinUAE and try SerenaOS out - it's available under the MIT license.
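If you've never used Grand Central Dispatch, the core idea Serena borrows is that you never create threads yourself: you create queues, submit closures to them, and a pool of workers (Serena's virtual processors) drains them. A rough sketch of the model in Python, with a fixed worker pool standing in for the dynamically sized virtual-processor pool the Serena kernel actually manages:

```python
# A rough model of dispatch-queue concurrency: work items (closures) are
# submitted to a queue and executed by a small pool of workers. Serena sizes
# its virtual-processor pool dynamically; this sketch just uses a fixed pool.
import queue
import threading

class DispatchQueue:
    def __init__(self, workers: int = 2):
        self._items = queue.Queue()
        for _ in range(workers):
            threading.Thread(target=self._drain, daemon=True).start()

    def dispatch_async(self, work, *args) -> None:
        # Submit a closure; the caller never touches a thread directly.
        self._items.put((work, args))

    def _drain(self) -> None:
        while True:
            work, args = self._items.get()
            work(*args)
            self._items.task_done()

    def wait(self) -> None:
        self._items.join()

if __name__ == "__main__":
    q = DispatchQueue(workers=3)
    for i in range(5):
        q.dispatch_async(lambda n: print(f"work item {n} done"), i)
    q.wait()
```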
Chrome iOS browser on Blink
Earlier this year, under pressure from the European Union, Apple was finally forced to open up iOS and allow alternative browser engines, at least in the EU. Up until then, Apple only allowed its own WebKit engine to run on iOS, meaning that even what seemed like third-party browsers - Chrome, Firefox, and so on - were all just Safari skins, running Apple's WebKit underneath (with additional restrictions to make them perform worse than Safari). Even with other browser engines now being allowed on iOS in the EU, there's still hurdles, as Apple requires browser makers to maintain two different browsers, one for the EU, and another one for the rest of the world. It seems the Chromium community is already working on bringing the Chromium Blink browser engine to iOS, but there's still a lot of work to be done. A blog post by the open source consultancy company Igalia digs into the details, since they are contributing to the effort. While they've got the basics covered, it's far from completed or ready for release. We've briefly looked at the current status of the project so far, but many functionalities still need to be supported. For example, regarding UI features, functionalities such as printing preview, download, text selection, request desktop site, zoom text, translate, find in page, and touch events are not yet implemented or are not functioning correctly. Moreover, there are numerous failing or skipped tests in unit tests, browser tests, and web tests. Ensuring that these tests are enabled and passing the test should also be a key focus moving forward. Gyuyoung Weblog I don't use iOS, nor do I intend to any time soon, but the coming availability of browser engines that compete with WebKit is going to be great for the web. I've heard from so many web developers that Safari on iOS is a bit of a nightmare to support, since without any competition on iOS it often stagnates and lags behind in supporting features other browsers already implemented. With WebKit on iOS facing competition, that might change. Now, there's a line of thought that all this will do is make Chrome even more dominant, but I don't think that's going to be an issue. Safari is still the default for most people, and changing defaults is not something most people will do, especially not the average iOS user. On top of that, this is only available in the EU, so I honestly don't think we have to worry about this any time soon, but obviously, we do have to remain vigilant.
Apple forces Patreon to charge Patreons 30% tax, or have its iOS application banned from the App Store
I have no contracts, agreements, or business with Apple, I do not use any Apple products, I do not rely on any Apple services, and none of my work requires the use of any of Apple's tools. Yet, I'm forced to deal with Apple's 30% tax. Today, Patreon, which quite a few of you use to support OSNews, announced that Apple is forcing them to change their billing system, or risk being banned from the App Store. This has some serious consequences for people who use Patreon's iOS application to subscribe to Patreons, and for the creators you subscribe to. First: Apple will be applying their 30% App Store fee to all new memberships purchased in the Patreon iOS app, in addition to anything bought in your Patreon shop. Patreon's website First things first: the 30% mafia tax will only be applied to new subscribers using the Patreon iOS application to subscribe, starting early November 2024. Existing Patreons will not be affected, iOS or no. Anyone who subscribes through the Patreon website or Android application will not be affected either. Since creators like myself obviously have no intention of just handing over 30% of what our iOS-using supporters donate to us, Patreon has added an option to automatically increase the prices of subscriptions inside the Patreon iOS application by 30%. In other words, starting this November, subscribing to the OSNews Patreon through the iOS application will be 30% more expensive than subscribing from anywhere else. As such, I'm pondering updating the description of our Patreon to strongly suggest anyone wishing to subscribe to the OSNews Patreon to do so either on the web, or through the Patreon Android application instead. If you're hell-bent on subscribing through the Patreon iOS application, you'll be charged an additional 30% to pay protection money to Apple. And just to reiterate once more: if you're already a Patreon, nothing will change and you'll continue to pay the regular amounts per tier. Second: Any creator currently on first-of-the-month or per-creation billing plans will have to switch over to subscription billing to continue earning in the iOS app, because that's the only billing type Apple's in-app purchase system supports. Patreon's website This is Patreon inside baseball, but as it stands right now, subscribers to the OSNews Patreon are billed on the first of the month, regardless of when during a month you subscribe. This is intentional, since I really like the clarity it provides to subscribers, and the monthly paycheck it results in for myself. Sadly, Apple is forcing Patreon to force me to change this - I am now forced to switch to subscription billing instead, somewhere before November 2025. This means that once I make that forced switch, new Patreons will be billed on their subscription date every month (if you subscribe on 25 April, you'll be charged every 25th of the month). Luckily, nothing will change for existing subscribers - you will still be billed on the 1st of the month. This whole thing is absolutely batshit insane. Not only is Patreon being forced by Apple to do this at the risk of having their iOS application banned from the App Store, Apple is also making it explicitly impossible for Patreon to go any other route. As we all know, Patreon won't be allowed to advertise that subscribing will be cheaper on the web, but Apple is also not allowing Patreon to remove subscribing in the Patreon iOS application altogether - if Patreon were to do that, Apple would ban the application from the App Store as well.
And with how many people use iOS, just outright deprecating the Patreon iOS application is most likely going to hurt a lot of creators, especially ones outside of the tech sphere. Steven Troughton-Smith did some math, and concluded that Apple will be making six times as much from donations to Patreon creators as Patreon itself will. In other words, if you use iOS, and subscribe to a creator from within the Patreon iOS application, you will be supporting Apple - a three trillion dollar corporation - more than Patreon, which is actually making it possible to support the small creators you love in the first place. That is absolutely, balls-to-the-wall, batshit insanity. Remember that ad Apple made where it crushed a bunch of priceless instruments and art supplies into an iPad - the ad it had to pull and apologise for because creators, artists, writers, and so on thought it was tasteless and dystopian? Who knew that ad was literal.
Haiku gets tentative Firefox port
Haiku, the platform awash in ported browsers while its native WebPositive browser languishes, has added another notch to its belt - and this time, it's a big one. Firefox has been tentatively ported to Haiku, but it's early days and there's no package ready to download - you'll have to compile it yourself if you want to get it running. It's version 128, so it's the latest version, too. Without the ability to easily test and run it, there's not much more to add at this point, but it's still a major achievement. I hope there'll be a nice Haiku package soon.
Microsoft deprecates Paint 3D
Way back in the early before time, Microsoft thought it would be a good idea to brand Windows 10 entirely around the label “creators”, and one distinctly odd consequence of that was an application called “Paint 3D”, a replacement for the traditional Paint application that Microsoft had been shipping one way or another since 1985, when it included a simple bitmap editing program called “Doodle” with its mouse drivers for DOS. Doodle would be replaced shortly after by a white-label version of ZSoft Corporation's PC Paintbrush, and once Windows 1.0 rolled around, it was rebranded as Paint, a name that has stuck until today. Paint 3D was supposed to replace the regular Paint, with a focus on creating and manipulating 3D objects, serving as an extension to Microsoft's failed efforts to bring VR and AR to the masses. Microsoft even went so far as to list the regular Paint as deprecated, but after a lot of outcry, has since reversed course and refocused its efforts on improving it. Paint 3D, however, is now officially deprecated, and has been added to Microsoft's list of deprecated Windows features. Paint 3D is deprecated and will be removed from the Microsoft Store on November 4, 2024. To view and edit 2D images, you can use Paint or Photos. For viewing 3D content, you can use 3D Viewer. Microsoft's list of deprecated Windows features I don't think anyone is going to shed a tear over this, but at the same time, as with everything Microsoft changes or removes from Windows, there's bound to be at least a few people whose entire workflow heavily depends on Paint 3D, and they're going to be pissed.
Almost entire Nova Launcher team laid off
About two years ago, the very popular and full-featured Android launcher Nova Launcher was acquired by mobile links and analytics company Branch. This obviously caused quite the stir, and ever since, whenever Nova is mentioned online, people point out what kind of company acquired Nova and that you probably should be looking for an alternative. While Branch claimed, as the acquiring party always does, that nothing was going to change, most people, including myself, were skeptical. Several decades covering this industry have taught me that acquisitions like this pretty much exclusively mean doom, and usually signal a slow but steady decline in quality and corresponding increase in user-hostile features. I'm always open to being proven wrong, but I don't have a lot of hope. Thom Holwerda Up until a few days ago, I have to admit, I was wrong. Nova remained largely the same, got some major new features, and it really didn't get any worse in any meaningful way - in fact, Nova just continued to get better, adopted all the new Android Material You and other features, and kept communicating with its users quite well. After a while, I kind of forgot all about Nova being owned by Branch, as nothing really changed for the worse. It's rare, but it happens - apparently. So I, and many others who were skeptical at first as well, kept on using Nova. Not only because it just continued being what I think is the best, most advanced, and most feature-rich launcher for Android, but also because... Well, there's really nothing else out there quite like Nova. I'm sure many of you are already firing up the comment engine, but as someone who has always been fascinated by alternative, non-stock mobile device launchers - from Palm OS, PocketPC, and Zaurus, all the way to the modern day with Android - I've seen them all and tried them all, and while the launcher landscape is varied, abundant, and full of absolutely amazing alternatives for every possible kind of user, there's nothing else out there that is as polished, feature-rich, fast, and endlessly tweakable as Nova. So, I've been continuing to use Nova since the acquisition, alternating with Google's own Pixel Launcher ever since I bought a Pixel 8 Pro on release, with Nova's ownership status relegated to some dusty, barely used croft of my mind. As such, it came as a bit of a shock this week when it came out that Branch had done a massive round of lay-offs, including firing the entire Nova Launcher team, save for Nova's original creator, Kevin Barry. Around a dozen or so people were working on Nova at Branch, and aside from Barry, they're all gone now. Once the news got out, Barry took to Nova Launcher's website and released a statement about the layoffs, and the future of Nova. There has been confusion and misinformation about the Nova team and what this means for Nova. I'd like to clarify some things. The original Nova team, for many years, was just me. Eventually I added Cliff to handle customer support, and when Branch acquired Nova, Cliff continued with this role. I also had contracted Rob for some dev work prior to the Branch acquisition and some time after the acquisition closed we were able to bring him onboard as a contractor at Branch. The three of us were the core Nova team. However, I've always been the lead and primary contributor to Nova Launcher and that hasn't changed. I will continue to control the direction and development of Nova Launcher. Kevin Barry This sounds great, and I'm glad the original creator will keep control over Nova.
However, with such a massive culling of developers, it only makes sense that any future plans will have to be scaled down, and that's exactly what both Barry and other former team members are saying. First, Rob Wainwright, who was laid off, wrote the following in Nova's Discord: To be clear, Nova development is not stopping. Kevin is remaining at Branch as Nova's only full time developer. Development will undoubtedly slow with less people working on the app but the current plan is for updates to continue in some form. Rob Wainwright Barry followed up with an affirmation: I am planning on wrapping up some Nova 8.1 work and getting more builds out. I am going to need to cut scope compared to what was planned. Kevin Barry In other words, while development on Nova will continue, it's now back to being a one-man project, which will have some major implications for the pace of development. It makes me wonder if adoption of each yearly drop of new Android features will slow down, and if we're going to see many more unresolved bugs and issues. On top of that, one has to wonder just how long Branch is for this world - they've just laid off about a hundred people, so what will happen to Barry if Branch goes under? Will he have to find some other job, leaving even less time for Nova development? And if Branch doesn't go under, it is still clearly in dire financial straits, which surely brings monetising Nova users in less pleasant ways into the picture. Nova was definitely dealt a massive blow this week, and I'm fearful for its future. Again.
Verso: a browser using Servo
I regularly report on the progress made by the Servo project, the Rust-based browser engine that was spun out of Mozilla into its own project. Servo has its own reference browser implementation, too, but did you know there are already other browsers using Servo, too? Sure, it's clearly a work-in-progress thing, and it's missing just about every feature we've come to expect from a browser, but it's cool nonetheless. Verso is a web browser built on top of Servo web engine. It's still under development. We don't accept any feature request at the moment. But if you are interested, feel free to help test it. Verso GitHub page It runs on Linux, Windows, and macOS.
Wayland merges new screen capture protocols
Nearly three years in the making, the ext-image-capture-source-v1 and ext-image-copy-capture-v1 protocols have been merged into the Wayland Protocols repository for vastly improving screen capture support on the Wayland desktop. The ext-image-capture-source-v1 and ext-image-copy-capture-v1 screen copy protocols build upon wlroots' wlr-screencopy-unstable-v1 with various improvements for better screen capture support under Wayland. These new protocols should allow for better performance and window capturing support for use-cases around RDP/VNC remote desktop, screen sharing, and more. Michael Larabel A very big addition to Wayland, as this has been a sore spot for many people wishing to move to Wayland from X. One of the developers behind the effort has penned a blog post with more details about these new protocols.
Redox gets HTTP server, wget, UEFI improvements, and much more
In line with the release of the COSMIC alpha, parts of which are also available for Redox, we've got another monthly update for the Rust-based operating system. First, in what in hindsight seems like a logical step, Redox is joining hands with Servo, the Rust-based browser engine, and their proposed focus will be on Servo's cross-compilation support and a font stack written in Rust. It definitely makes sense for these two projects to work together in some way, and I hope there can be more cross-pollination in the future. Simple HTTP Server, an HTTP server written in Rust, has been ported to Redox, and the Apache port is getting some work, too. Wget now works on Redox, and several bugs in COSMIC programs were squashed. UEFI also saw some work, including fixing a violation of the UEFI specification, as well as adding several workarounds for buggy firmware, which should increase the number of machines that can boot Redox. Another area of progress is self-hosting, and Redox can now compile hello world programs in Rust, C, and C++ - an important step towards compiling more complex programs and the end-goal of compiling Redox itself on Redox. There's way more in this update, so head on over to get the full details.
COSMIC alpha released
After two years of development, System76 has released the very first alpha of COSMIC, their new Rust-based desktop environment for Linux. This is an alpha release, so they make it clear there are going to be bugs and that there's a ton of missing features at this point. As a whole, COSMIC is a comprehensive operating system GUI (graphical user interface) environment that features advanced functionality and a responsive design. Its modular architecture is specifically designed to facilitate the creation of unique, branded user experiences with ease. System76 website Don't read too much into “branded experience” here - it just means other Linux distributions can easily use their colours, branding, and panel configurations. The settings application is also entirely modular, so distributors can easily add additional panels, and replace things like the update panel with one that fits their package management system of choice. COSMIC also supports extensive theming, and if you're wondering - yes, all of these are answers to the very reason COSMIC was made in the first place: GNOME's restrictiveness. There's not much else to say here yet, since it's an alpha release, but if you want to give it a go, the announcement post contains links to instructions for a variety of Linux distributions. COSMIC is also slowly making its way into Redox, the Rust-based operating system led by Jeremy Soller, a System76 employee.
Apple memory holed its broken promise for an OCSP opt-out
When you launch an app, macOS connects to Apple's OCSP service to check whether the app's Developer ID code signing certificate has been revoked by Apple. In November 2020, Apple's OCSP service experienced a mass outage, preventing Mac users worldwide from launching apps. In response and remedy to this outage, Apple made several explicit promises to Mac users in a support document, which can still be seen in a Wayback Machine archive from September 24, 2023. Jeff Johnson One of the explicit promises Apple made was that it would allow macOS users to turn off phoning home to Cupertino every time they launch an application on macOS. It's four years later now, and this promise has not been kept - Apple still does not allow you to turn off phoning home. In fact, it turns out that last year, Apple scrubbed this promise from all of its documentation, hoping we're all going to forget about it. In other words, Apple is never going to allow its macOS users to stop the operating system from phoning home to Cupertino every time they launch an application. Even though the boiling frog story is nonsensical, it's apt here. More and more, Apple is limiting its users' control over macOS, locking it down to a point where you're not really the owner of your computer anymore. Stuff like this gives me the creeps.
macOS Sequoia makes it harder to override Gatekeeper security
Speaking of an operating system for toddlers: Apple is eliminating the option to Control-click to open Mac software that is not correctly signed or notarized in macOS Sequoia. To install apps that Gatekeeper blocks, users will need to open up System Settings and go to the Privacy and Security section to “review security information” before being able to run the software. Juli Clover at MacRumors On a related note, I've got an exclusive photo of the next MacBook Pro.
macOS Sequoia will nag you every week if you have screenshot apps and screen recorders installed
With macOS Sequoia this fall, using apps that need access to screen recording permissions will become a little bit more tedious. Apple is rolling out a change that will require you to give explicit permission on a weekly basis to these types of apps, and every time you reboot your Mac. If you've been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you've likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that's not the case. Chance Miller Everybody is making comparisons to Windows Vista, but I don't think that's fair at all. Windows Vista suffered from an avalanche of permission dialogs because the wider Windows application, driver, and peripheral ecosystem was not at all used to the security boundaries present in Windows NT being enforced. Vista was the first consumer-focused version of Windows that started doing this, and after a difficult transition period, the flood of dialogs settled down, and for a long time now you can blame Windows for a lot of things, but it's definitely not throwing up more permission dialogs than, say, an average desktop-focused Linux distribution. In other words, Vista's UAC dialogs were a desperately necessary evil, an adjustment period the Windows ecosystem simply had to go through, and Windows as a whole is better off for it today. This, however, is different. This is Apple having such a low opinion of its users, and such a deep disregard for basic usability and computer ownership, that it feels entirely okay with bothering its users with weekly - or more, if you tend to reboot - nag dialogs for applications the user has already properly given permission to. I don't have any real issues with a reminder or permission dialog upon first launching a newly installed screen recording application - or when an existing application gains this functionality in an update - but nagging users weekly is just beyond insanity. More and more it feels like macOS is becoming an operating system for toddlers - or at least, that's how Apple seems to view its users.
Chrome will let you shop with “AI”
When you're shopping online, you'll likely find yourself jumping between multiple tabs to read reviews and research prices. It can be cumbersome doing all that back and forth tab switching, and online comparison is something we hear users want help with. In the next few weeks, starting in the U.S., Chrome will introduce Tab compare, a new feature that presents an AI-generated overview of products from across multiple tabs, all in one place. Imagine you're looking for a new Bluetooth portable speaker for an upcoming trip, but the product details and reviews are spread across different pages and websites. Soon, Chrome will offer to generate a comparison table by showing a suggestion next to your tabs. By bringing all the essential details - product specs, features, price, ratings - into one tab, you'll be able to easily compare and make an informed decision without the endless tab switching. Parisa Tabriz Is this really what people want from their browser, or am I just completely out of touch? I'm not at all convinced the latter isn't the case, but this just seems like a filler feature. Is this really what all the AI hype is about? Is this kind of nonsense the end game we're killing the planet even harder for?
Developing a cryptographically secure bootloader for RISC-V in Rust
It seems to be bootloader season, because we've got another one - this time, a research project with very limited application for most people. SentinelBoot is a cryptographically secure bootloader aimed at enhancing boot flow safety of RISC-V through memory-safe principles, predominantly leveraging the Rust programming language with its ownership, borrowing, and lifetime constraints. Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality. SentinelBoot achieves these objectives with a 20.1% hashing overhead (approximately 0.27s additional runtime) when compared to an example U-Boot binary (mainline at time of development), and produces a resulting binary one-tenth the size of an example U-Boot binary with half the memory footprint. Lawrence Hunter SentinelBoot is a project undertaken at the University of Manchester, and its goal is probably clear from the description: to develop a more secure bootloader for RISC-V devices. An additional element is that they looked specifically at devices that receive updates over-the-air, like smartphones. In addition, scenarios where an attacker has physical access to the device in question were not considered, for obvious reasons - in such cases, the attacker can just replace the bootloader altogether anyway, and no amount of fancy Rust code is going to save you there. The details of the implementation as described in the article are definitely a little bit over my head, but the gist seems to be that the project's been able to achieve a much more secure boot process without giving up much in performance. This being a research project with an intentionally limited scope does mean it's not something that'll immediately benefit all of us, but it's these kinds of projects that can really push the state of the art and test the viability of new ideas.
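To make the core idea a little more concrete: signature-verified boot boils down to the bootloader embedding a trusted public key and refusing to jump to a kernel image unless a detached signature over that image verifies. The sketch below is not SentinelBoot's code - it's a plain userspace toy using the ed25519-dalek crate (2.x assumed), with a placeholder key and made-up file names, and none of the RISC-V vector-crypto acceleration the real project relies on - but it shows the shape of the check.

    // Minimal sketch of signature-verified boot, NOT SentinelBoot's actual code.
    // Assumes the ed25519-dalek 2.x crate; key value and file names are made up.
    use ed25519_dalek::{Signature, Verifier, VerifyingKey};

    // Public key that would be baked into the bootloader at build time (placeholder bytes).
    const TRUSTED_PUBKEY: [u8; 32] = [0u8; 32];

    // Returns true only if `signature_bytes` is a valid signature over the whole kernel image.
    fn kernel_is_trusted(kernel_image: &[u8], signature_bytes: &[u8; 64]) -> bool {
        let Ok(key) = VerifyingKey::from_bytes(&TRUSTED_PUBKEY) else {
            return false; // embedded key is malformed: refuse to boot
        };
        let signature = Signature::from_bytes(signature_bytes);
        key.verify(kernel_image, &signature).is_ok()
    }

    fn main() {
        // In a real bootloader these would come from flash or a boot partition, not files.
        let kernel_image = std::fs::read("vmlinux").expect("kernel image");
        let signature = [0u8; 64];
        if kernel_is_trusted(&kernel_image, &signature) {
            println!("signature valid, jumping to kernel");
        } else {
            eprintln!("signature check failed, refusing to boot");
        }
    }

The property that matters for over-the-air updates is that a tampered kernel image simply won't verify against the embedded key, so the device refuses to run it at all.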
WordStar for DOS 7.0 archive
As you all know, I continue to use WordStar for DOS 7.0 as my word-processing program. It was last updated in December 1992, and the company that made it has been defunct for decades; the program is abandonware. There was no proper archive of WordStar for DOS 7.0 available online, so I decided to create one. I've put weeks of work into this. Included are not only full installs of the program (as well as images of the installation disks), but also plug-and-play solutions for running WordStar for DOS 7.0 under Windows, and also complete full-text-searchable PDF versions of all seven manuals that came with WordStar - over a thousand pages of documentation. Robert J. Sawyer WordStar for DOS is definitely a bit of a known entity in our circles for still being used by a number of world-famous authors. WordStar 4.0 is still being used by George R. R. Martin - assuming he's still even working on The Winds of Winter - and there must be some sort of reason as to why it's still so oddly popular. Thanks to this work by author Robert J. Sawyer, accessing and using version 7 of WordStar for DOS is now easier than ever. One of the reasons Sawyer set out to do this was making sure that if he passes away, the people responsible for his estate and works will have an easy way to access his writings. It's refreshing to see an author think ahead this far, and it will surely help a ton of other people too, since there's quite a few documents lingering around using the WordStar format.
US judge rules Google is a monopoly, search deals with Apple and Mozilla in peril
That sure is a big news drop for a random Tuesday. A federal judge ruled that Google violated US antitrust law by maintaining a monopoly in the search and advertising markets. “After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly,” according to the court's ruling, which you can read in full at the bottom of this story. “It has violated Section 2 of the Sherman Act.” Lauren Feiner at The Verge Among many other things, the judge mentions Google's own admissions that the company can do pretty much whatever it wants with Google Search and its advertisement business, without having to worry about users opting to go elsewhere or ad buyers leaving the Google platform. Studies from inside Google itself made it very clear that Google could systematically make Search worse without it affecting user and/or usage numbers in any way, shape, or form - because users have nowhere else to realistically go. While the ability to raise prices at will without fear of losing customers is a sure sign of being a monopoly, so is being able to make a product worse without fear of losing customers, the judge argues. Google plans to appeal, obviously, and this ruling has nothing yet to say about potential remedies, so what, exactly, is going to change is as of yet unknown. Potential remedies will be handled during the next phase of the proceedings, with the wildest and most aggressive remedy being a potential break-up of Google, Alphabet, or whatever it's called today. My sights are definitely set on a break-up - hopefully followed by Apple, Amazon, Facebook, and Microsoft - to create some much-needed breathing room in the technology market, and pave the way for a massive number of newcomers to compete on much fairer terms. Of note is that the judge also put yet another nail in the coffin of Google's various exclusivity deals, most notably with Apple and, for our interests, with Mozilla. Google pays Apple well over 20 billion dollars a year to be the default search engine on iOS, and it pays about 80% of Mozilla's revenue to be the default search engine in Firefox. According to the judge, such deals are anticompetitive. Mehta rejected Google's arguments that its contracts with phone and browser makers like Apple were not exclusionary and therefore shouldn't qualify it for liability under the Sherman Act. “The prospect of losing tens of billions in guaranteed revenue from Google - which presently come at little to no cost to Apple - disincentivizes Apple from launching its own search engine when it otherwise has built the capacity to do so,” he wrote. Lauren Feiner at The Verge If the end of these deals becomes part of the package of remedies, it will be a massive financial blow to Apple - 20 billion dollars a year is about 15% of Apple's total annual operating profits, and I'm also pretty sure those Google billions are counted as part of Tim Cook's much-vaunted services revenue, so losing it would definitely impact Apple directly where it hurts. Sure, it's not like it'll make Apple any less of a dangerous behemoth, but it will definitely have some explaining to do to investors. Much more worrisome, however, is the similar deal Google has with Mozilla. About 80% of Mozilla's total revenue comes from a search deal with Google, and if that deal were to be dissolved, the consequences for Mozilla, and thus for Firefox, would be absolutely immense.
This is something I've been warning about for years now, and the end of this deal would be yet another of my repeatedly voiced worries becoming reality, right after Mozilla becoming an advertising company and making Firefox worse in the name of quick profits. One by one, every single concern I've voiced about the future of Firefox is becoming reality. Canonical, Fedora, KDE, GNOME, and many other stakeholders - ignore these developments at your own peril.
Every Microsoft employee is now being judged on their security work
After a number of very big security incidents involving Microsoft's software, the company promised it would take steps to put security at the top of its list of priorities. Today we got another glimpse of the steps it's taking, since the company is going to take security into account during performance reviews. Kathleen Hogan, Microsoft's chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. “Everyone at Microsoft will have security as a Core Priority,” says Hogan. “When faced with a tradeoff, the answer is clear and simple: security above all else.” A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. “Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards,” Microsoft is telling employees in an internal Microsoft FAQ on its new policy. Tom Warren at The Verge Now, I've never worked in a corporate environment or something even remotely close to it, but something about this feels off to me. Often, it seems that individual, lower-level employees know all too well they're cutting corners, but they're effectively forced to because management expects almost inhuman results from its workers. So, in the case of a technology company like Microsoft, this means workers are pushed to write as much code as possible, or to implement as many features as possible, and the only way to achieve the goals set by management is to take shortcuts - like not caring as much about code quality or security. In other words, I don't see how Microsoft employees are supposed to make security their top priority, while also still having to achieve any unrealistic goals set by management and other higher-ups. What I'm missing from this memo and associated reporting is Microsoft telling its employees that if unrealistic targets, crunch, low pay, and other factors that contribute to cutting corners get in the way of putting security first, they have the freedom to choose security. If employees are not given such freedom, demanding even more from them without anything in return seems like a recipe for disaster to me, making this whole memo quite moot. We'll have to see what this will amount to in practice, but with how horribly employees are treated in most industries these days, especially in countries with terrible union coverage and laughable labour protection laws like the US, I don't have high hopes for this.
50 years ago, CP/M started the microcomputer revolution
CP/M is turning 50 this year. The ancient Control Program for Microcomputers, or CP/M for short, has been enjoying a modest renaissance in recent years. By 21st century standards, it's unimaginably tiny and simple. The whole OS fits into under 200 kB, and the resident bit of the kernel is only about 3 kB. Today, in the era of end-user OSes in the tens-of-gigabytes size range, this exerts a fascination to a certain kind of hobbyist. Back when it was new, though, this wasn't minimalist - it was all that early hardware could support. Liam Proven I'm a little too young to have experienced CP/M as anything other than a retro platform - I'm from 1984, and we got our first computer in 1990 or so - but its importance and influence cannot be overstated. Many of the conventions set by CP/M made their way to the various DOS variants, and in turn, we still see some of those conventions in Windows today. Had Digital Research, the company CP/M creator Gary Kildall set up to sell CP/M, accepted the deal with IBM to make CP/M the default operating system for the then newly-created IBM PC, we'd be living in a very different world today. Digital Research would also create several other popular and/or influential software products beyond CP/M, such as DR DOS and GEM, as well as various other DOS variants and CP/M versions with DOS compatibility. It would eventually be acquired by Novell, where it faded into obscurity.
FreeBSD as a daily driver
Not too long ago I linked to a blog post by long-time OSNews reader (and silver Patreon) and friend of mine Morgan, about how to set up OpenBSD as a workstation operating system - and in fact, I personally used that guide in my own OpenBSD journey. Well, Morgan's back with another, similar article, this time covering FreeBSD. After going through the basic steps needed to make FreeBSD a bit more amenable to desktop use, Morgan notes about performance: Now let's compare FreeBSD. Well, quite frankly, there is no comparison! FreeBSD just feels snappier and more responsive on the desktop; at the same 170Hz refresh it actually feels like 170Hz. Void Linux always felt fast enough and I thought it had no lag at all at that refresh rate, but comparing them side by side (FreeBSD installed on the NVMe drive, Void running from a USB 4 SSD with similar performance), FreeBSD is smooth as glass and I started noticing just the slightest lag/stutter on Void. The same holds true for Firefox; I use smooth scrolling and on FreeBSD it really is perfectly smooth. Similarly, Youtube performance is unreal, with no dropped frames at any resolution all the way up to 4Kp60, and the videos look so much smoother! Morgan/kaidenshi This is especially relevant for me personally, since the prime reason I switched my workstation back to Fedora KDE was OpenBSD's performance issues. While those performance issues were entirely expected and the result of the operating system's focus on security and hardening, it did mean it's just not suitable for me as a workstation operating system, even if I like the internals and find it a joy to use, even under the hood. If FreeBSD delivers more solid desktop and workstation performance, it might be time I set up a FreeBSD KDE installation and see if it can handle my workstation's 270Hz 4K display. As I keep reiterating - the BSD world has a lot to offer those wishing to run a UNIX-like workstation operating system, and it's articles like these that help people get started. A lot of the steps taken may seem elementary to many of us, but for people coming from Linux or even Windows, they may be unfamiliar and daunting, so having it all laid out in a straightforward manner is quite helpful.
Chrome warns uBlock Origin may soon be disabled
As uBlock Origin lead developer and maintainer Raymond Hill explained on Friday, this is the result of Google deprecating support for the Manifest v2 (MV2) extensions platform in favor of Manifest v3 (MV3). “uBO is a Manifest v2 extension, hence the warning in your Google Chrome browser. There is no Manifest v3 version of uBO, hence the browser will suggest alternative extensions as a replacement for uBO,” Hill explained. Sergiu Gatlan at Bleeping Computer If you're still using Chrome, or any possible Chrome skins that have not committed to keeping Manifest v2 extensions enabled, it's really high time to start thinking about jumping ship if ad blocking matters to you. Of course, we don't know for how long Firefox will remain able to properly block ads either, but for now, it's obviously the better choice for those of us who care about a better browsing experience. And just to reiterate: I fully support anyone's right to block ads, even on OSNews. Your computer, your rules. There are a variety of other means to support OSNews - our Patreon, individual donations through Ko-Fi, or buying our merch - that are far better for us than ads will ever be.
Limine: a modern, advanced, portable, multiprotocol bootloader and boot manager
Limine is an advanced, portable, multiprotocol bootloader that supports Linux, multiboot1 and 2, the native Limine boot protocol, and more. Limine is lightweight, elegant, fast, and the reference implementation of the Limine boot protocol. The Limine boot protocol's main target audience is operating system and kernel developers that want to use a boot protocol which supports modern features in an elegant manner, that GRUB's aging multiboot protocols do not (or do not properly). Limine website I wish trying out different bootloaders was an easier thing to do. Personally, since my systems only run Fedora Linux, I'd love to just move them all over to systemd-boot and not deal with GRUB at all anymore, but since it's not supported by Fedora I'm worried updates might break the boot process at some point. On systems where only one operating system is installed, as a user I should really be given the choice to opt for the simplest, most basic boot sequence, even if it can't boot any other operating systems or if it's more limited than GRUB.
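For reference, part of systemd-boot's appeal is how little configuration that simplest, most basic boot sequence needs: each bootable kernel is described by a tiny plain-text loader entry under /boot/loader/entries/, following the Boot Loader Specification. Something along the lines of the sketch below, where the kernel version, paths, and root device are just illustrative placeholders:

    title   Fedora Linux
    version 6.9.7-200.fc40.x86_64
    linux   /vmlinuz-6.9.7-200.fc40.x86_64
    initrd  /initramfs-6.9.7-200.fc40.x86_64.img
    options root=UUID=<root-partition-uuid> rw quiet

Compared to a generated grub.cfg, there is very little in a file like that for an update to get wrong, which is exactly the attraction - and exactly why going unsupported feels risky.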
Exploring O3 optimization for Ubuntu
Following our recent work with Ubuntu 24.04 LTS where we enabled frame pointers by default to improve debugging and profiling, we're continuing our performance engineering efforts by evaluating the impact of O3 optimization in Ubuntu. O3 is a GCC optimization level that applies more aggressive code transformations compared to the default O2 level. These include advanced function inlining and the use of sophisticated algorithms aimed at enhancing execution speed. While O3 can increase binary size and compilation time, it has the potential to improve runtime performance. Ubuntu Discourse If these optimisations deliver performance improvements, and the only downside is larger binaries and longer compilation times, it seems like a bit of a no-brainer to enable these, assuming those mentioned downsides are within reason. Are there any downsides they're not mentioning? Browsing around and doing some minor research it seems that -O3 optimisations may break some packages, and can even lead to performance degradation, defeating the purpose altogether. Looking at a set of benchmarks from Phoronix from a few years ago, in which the Linux kernel was compiled with either O2 or O3 and their performance compared, the results were effectively tied, making it seem not worth it at all. However, during these benchmarks, only the kernel was tested; everything else was compiled normally in both cases. Perhaps compiling the entire system with O3 will yield improvements in other parts of the system that do add up. For now, you can download unsupported Ubuntu ISOs compiled with O3 optimisations enabled to test them out.
Servo enables parallel table layout
Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo's layout engine. Servo blog On top of this, there's tons of improvements to the flexbox layout engine, support for generic font families like ‘sans-serif’ and ‘monospace’ has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.
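If you haven't used Rayon, the strength the Servo blog is pointing at is how little code has to change to go parallel: swap an ordinary iterator for a parallel one and Rayon's work-stealing scheduler spreads the items over the available cores. A minimal, self-contained sketch of that idea - with made-up Row and Cell types, nothing resembling Servo's actual layout code - might look like this:

    // Toy illustration of Rayon-style data parallelism, not Servo's layout code.
    // The Row/Cell types and the "layout" computation are invented for the example.
    use rayon::prelude::*;

    struct Cell { content_width: f32 }
    struct Row { cells: Vec<Cell> }

    // Computes a width per row; `par_iter` farms the rows out across CPU cores.
    fn layout_rows(rows: &[Row]) -> Vec<f32> {
        rows.par_iter()
            .map(|row| row.cells.iter().map(|cell| cell.content_width).sum::<f32>())
            .collect()
    }

    fn main() {
        let rows = vec![
            Row { cells: vec![Cell { content_width: 10.0 }, Cell { content_width: 20.0 }] },
            Row { cells: vec![Cell { content_width: 5.0 }] },
        ];
        println!("{:?}", layout_rows(&rows)); // [30.0, 5.0]
    }

The “opportunistic” part is essentially that: because each row's layout is independent of the others, the parallelism falls out of the data structure almost for free.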
Override xdg-open behavior with xdg-override
Most applications on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override the default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. xdg-override GitHub page I've loved this project ever since I came across it a few days ago. Not because I need it - I really don't - but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser - which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but for all your work-related things, you use Chrome. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved in Linux. Applications on Linux tend to use freedesktop.org's xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko's issue, changing the variable $XDG_CONFIG_HOME just for Slack to point xdg-open to a different configuration file doesn't work, because the setting will be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn't work either, of course, since that would affect all other applications, too. So, what's the actual solution? We'd like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical. However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we'll “inject” into Slack by adding it to PATH. Dmytro Kostiuchenko xdg-override takes this idea and runs with it: It is based on the idea described above, but the script won't generate a proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it'll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. Dmytro Kostiuchenko I don't fully understand how it works, but I get the overall gist of what it's doing. I think it's quite clever, and solves a very specific issue in a non-destructive way. While it's not something most people will ever need, it feels like something that if you do need it, it will quickly become a default part of your toolbox or workflow.
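For context, the per-user associations xdg-open consults live in ~/.config/mimeapps.list, and the relevant section looks roughly like the snippet below (the .desktop file names are just examples). Every application that defers to xdg-open reads the same associations, which is exactly why editing this file can't give Slack its own browser without also changing the default for everything else - hence the PATH-injection trick.

    [Default Applications]
    x-scheme-handler/http=firefox.desktop
    x-scheme-handler/https=firefox.desktop
    text/html=firefox.desktop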
Technology history: where Unix came from
Today, every Unix-like system can trace their ancestry back to the original Unix. That includes Linux, which uses the GNU tools - and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason - Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you're using a command line that's been with us for more than fifty years. Jim Hall An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to “real” UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux - it has to cater to far more modern needs than something ancient and dead like HP-UX - but what I learn while using these systems closer to real UNIX has made me appreciate proper UNIX more than I used to in the past. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I'm trying to say is - go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they're much easier to grasp than a modern Linux system, and you'll learn a lot from the experience.