Many macOS users are probably used by now to the annoyance that comes with unsigned applications, as they require a few extra steps to launch them. This feature is called Gatekeeper and checks for an Apple Developer ID certificate. Starting with macOS Sequoia 15, the easy way of bypassing this feature - e.g. holding Control when clicking the application icon - is no longer an option, with version 15.1 disabling ways to bypass this completely. Unsurprisingly, this change has caught users of open source software like OpenSCAD by surprise, as evidenced by a range of forum posts and GitHub tickets.

Maya Posch at Hackaday

It seems Apple has disabled the ability for users to bypass application signing entirely, which would be just the next step in the company's long-standing effort to turn macOS into iOS, with the same, or at least similar, lockdowns and restrictive policies. This would force everyone developing software for macOS to spend $99 per year in order to get their software signed, which may not be a realistic option for a lot of open source software. Before macOS 15.0, you could Control+right-click an unsigned application and force it to run. In macOS 15.0, Apple removed the ability to do this; instead, you had to try and open the application (which would fail), and then open System Settings, go to Privacy & Security, and click the "Open Anyway" button to run the application. Stupidly convoluted, but at least it was possible to run unsigned applications. In macOS 15.1, however, even this convoluted method no longer seems to be working. When you try to launch an unsigned application in macOS 15.1, you get a dialog that reads "The application 'Finder' does not have permission to open '(null)'", and no button to launch the application anyway appears under Privacy & Security.
The wording of the dialog would seem to imply this is a bug, but Apple's lack of attention to UI detail in recent years means I wouldn't be surprised if it's intentional. This means that the only way to run unsigned applications on macOS 15.1 is to completely disable System Integrity Protection and Gatekeeper. To do this, you have to boot into recovery mode, open the terminal, run the command sudo spctl --master-disable, and reboot. However, I do not consider this a valid option for 99.9% of macOS users, and having to disable complex, low-level protections like this through recovery mode and several reboots just to launch an application is utterly bizarre. For those of you still stuck on macOS, I can only hope this is a bug, and not a feature.
In a major shift of its release cycle, Google has revealed that Android 16 will be released in Q2 of 2025, confirming my report from late last month.

Android 16 is the name of the next major release of the Android operating system, and its release in Q2 marks a significant departure from the norm. Google typically pushes out a new major release of Android in Q3 or Q4, but the company has decided to move next year's major release up by a few months so more devices will get the update sooner.

Mishaal Rahman at Android Authority

That's a considerable shake-up of Android's long-standing release cadence. The change includes more than just moving up the major Android release, as Google also intends to ship more minor releases of Android throughout the year. The company has already unveiled a rough schedule for Android 16, only weeks after releasing Android 15, with the major Android 16 release coming in the second quarter of 2025, followed by a minor release in the fourth quarter of 2025. There are two reasons Google is doing this. First, this new release schedule better aligns with when new flagship Android devices are released, so that from next year onwards, they can ship with that year's latest version of Android preinstalled, instead of last year's release. This should help bump up the number of users on the latest release. Second, this will allow Google to push out SDK releases more often, allowing for faster bug fixing. I honestly feel like most users will barely notice this change. Not only is the Android update situation still quite messy compared to its main rival iOS, the smartphone operating system market has also matured quite a bit, and the changes between releases are no longer even remotely as massive as they used to be. Other than Pixel users, I don't think most people will even realise they're on a faster release schedule.
Genode's rapid development carries on apace. Whilst Genode itself is a so-called OS framework - the computing version of a rolling chassis that can accept various engines (microkernels) and coachwork of the customer's choice - they also have an in-house PC desktop system. This flagship product, Sculpt OS, comes out on a bi-annual schedule, and autumn brings us the second release of the year, with what has become an almost customary big advance:

Among the many usability-related topics on our road map, multi-monitor support is certainly the most anticipated feature. It motivated a holistic modernization of Genode's GUI stack over several months, encompassing drivers, the GUI multiplexer, inter-component interfaces, up to widget toolkits. Sculpt OS 24.10 combines these new foundations with a convenient user interface for controlling monitor modes, making brightness adjustments, and setting up mirrored and panoramic monitor configurations.

Genode website

Sculpt OS 24.10 is available as a ready-to-use system image for PC hardware, the PinePhone, and the MNT Reform laptop.
Another day, another Windows Recall problem. Microsoft is delaying the feature yet again, this time from October to December.

"We are committed to delivering a secure and trusted experience with Recall. To ensure we deliver on these important updates, we're taking additional time to refine the experience before previewing it with Windows Insiders," says Brandon LeBlanc, senior product manager of Windows, in a statement to The Verge. "Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot Plus PCs by December."

Tom Warren at The Verge

Making Recall secure, opt-in, and uninstallable is apparently taking more time than the company originally planned. When secure, opt-in, and uninstallable are not keywords during your design and implementation process for new features, this is the ungodly mess you end up with. This could've all been prevented if Microsoft wasn't high on its own "AI" supply.
Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland's Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends.

Alfonso Maruccia at Techspot

Well, he's not wrong. "AI" definitely feels like a bubble at the moment, and while there are probably eventually going to be useful implementations people might actually want to actively use to produce quality content, most "AI" features today produce a stream of obviously fake diarrhea full of malformed hands, lies, and misinformation. Maybe we'll eventually work out these serious kinks, but for now, it's mostly just a gimmick providing us with an endless source of memes. Which is fun, but not exactly what we're being sold, and not something worth destroying the planet even faster for. Meanwhile, Google is going utterly bananas with its use of "AI" inside the company, with Sundar Pichai claiming 25% of code inside Google is now "AI"-generated:

We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.

Sundar Pichai

So much here feels wrong. First, who wants to bet those engineers care a whole lot less about the generated code than they do about code they write themselves? Second, who wants to bet that generated code is entirely undocumented? Third, who wants to bet what the additional costs will be a few years from now, when the next batch of engineers tries to make sense of that undocumented generated code? Sure, Google might save a bit on engineers' salaries now, but how much extra will they have to spend to unspaghettify that diarrhea code in the future?
It will be very interesting to keep an eye on this, and check back in, say, five years, to hear from the Google engineers of the future how much of their time is spent fixing undocumented "AI"-generated code. I can't wait.
It seems the GNOME team is getting quite serious about turning GNOME OS into an end user-focused Linux distribution, similar to a project KDE is working on.

GNOME OS is GNOME's development, testing, and QA distribution. It's not designed to be useful as a general-purpose system, and so it hasn't been the center of attention. However, that makes it a convenient place to experiment, and ultimately, through sheer coincidence, the GNOME OS team ended up developing something that follows my vision using the same technology that I was. The only real difference was intent: carbonOS was intended for mass adoption, and GNOME OS was not. In essentially every other aspect, the projects had the same roadmap: following Lennart Poettering's "Fitting Everything Together" proposal, providing a stock GNOME experience, and even using the same build system (BuildStream).

Adrian Vovk

The goal with GNOME OS is to showcase the best GNOME has to offer, built on top of an immutable base system, using Flatpak as the means to install applications. Basically, we're looking at something very similar to the immutable Fedora GNOME variant, but probably with even fewer modifications to stock GNOME, and perhaps with a few newer things as defaults, like systemd-boot over GRUB. KDE also happens to be working on a very similar project, with many of the same design choices and constraints. I think this is an excellent idea, for both GNOME and KDE. This allows them to offer users a very focused, simple, and resilient way of showcasing the latest and greatest the two desktop environments have to offer, without having to rely on third-party distributions not to make silly choices or mess things up - for which GNOME and KDE developers then tend to take the heat. Systems like these will, of course, also be a great way for developers to quickly spin up the latest stock versions of GNOME and KDE to test their applications. Still, there's also a downside to having official GNOME and KDE distributions.
If users find bugs or issues in these desktop environments when running other distributions, like Fedora or Ubuntu, GNOME and KDE developers may be tempted to just shrug them off and point users to the official GNOME and KDE distributions. It works there, so obviously the cause of the bug lies with the unofficial distribution, right? This may be a tempting conclusion, but it may not be accurate at all, as the real cause could still lie with GNOME and KDE. Once such "official" GNOME and KDE Linux distributions exist, the projects run a real risk of only really caring about how well GNOME and KDE work there, while not caring as much, or even at all, how well they run everywhere else. I'm not sure how they intend to prevent this from happening, but from here, I can already see the drama erupting. I hope this is something they take into consideration. Immutable distributions are not for me, since I prefer the control regular Fedora and RPM give me, and I don't want to give that up. It also doesn't help that I really, really don't like Flatpak as it exists today, which is another major barrier to entry for someone like me, and I assume most OSNews readers. However, there are countless Linux users out there who just want to get stuff done with whatever defaults come with their operating system, and for them, this newly proposed GNOME OS and its KDE counterpart are a great choice. There's a reason Valve opted for an Arch-based immutable KDE distribution for the Steam Deck, after all.
There's been more controversy regarding Microsoft's Recall feature for Windows, with people supposedly discovering Recall was being secretly installed on Windows 11 24H2. Furthermore, trying to remove this secretly installed Recall would break Explorer, as it seemed Explorer had a dependency on Recall. Unsurprisingly, this spread like wildfire all across the web, but I didn't report on it because something about it felt off - reports were sporadic and vague, and there didn't seem to be any consistency in the various stories. Well, it turns out that it is a big misunderstanding arising from Microsoft's usual incompetence.

"Ever since the Recall security fiasco in summer, all insider and production builds lack Recall completely," explains Windows watcher Albacore, in messages to The Verge. Albacore created the Amperage tool that allowed Recall to run on older Snapdragon chips. "The references we're seeing in current installs of 24H2 are related to Microsoft making it easier for system admins to remove Recall or disable it. Ironically, Microsoft going out of its way to make removal easier is being flipped into AI / spying / whatever hoaxes," says Albacore. "Microsoft has an ungodly complex and long-winded system for integrating development changes into a mainline build; parts of the optional-izing work were most likely not merged at once, and thus produce crash loops in very specific scenarios that slipped testing," explains Albacore.

Tom Warren at The Verge

What this story really highlights is just how little trust Microsoft has left with its very own users. Microsoft has a history of silently and secretly re-enabling features users turned off, re-installing Edge without any user interaction or consent, lots of disabled telemetry features suddenly being turned on again after an update, and so on. Over the years, this has clearly eroded any form of trust users have in Microsoft, so when a story like this hits, users just assume it's Microsoft doing shady stuff again.
Can you blame them? All of this is made worse by the absolutely dreadful messaging and handling of the Recall feature. Combine the shoddy implementation, the complete lack of security, and the severe inability to read the room about the privacy implications of a feature like Recall with the lack of trust mentioned above, and you have a very potent cocktail of misinformation entirely of Microsoft's own making. I'm not trying to excuse Microsoft here - they themselves are the only ones to blame for stories like these. I have a feeling we're going to see a lot more Recall problems.
The standard trope when talking about timezones is to rattle off falsehoods programmers believe about them. These lists are only somewhat enlightening - it's really hard to figure out what truth is just from the contours of falsehood. So here's an alternative approach. I'm gonna show you some weird timezones. In fact, the weirdest timezones. They're each about as weird as timezones are allowed to get in some way.

Ulysse Carion

The reason why timezones are often weird is not only things like the shape of countries dictating where the actual timezones begin and end, but also politics. A lot of politics. The entirety of China runs on Beijing time, even though the country covers five geographical timezones. Several islands in the Pacific were forced by their colonisers to run on insanely offset timezones because it made exploiting them easier. Time in Europe is political, too - countries like the Netherlands, Belgium, France, and Spain should really be in the same time zone as the UK, but adopted UTC+1 because it aligns better with the rest of mainland Europe. Although anything is better than whatever the hell Dutch Time was. Then there is, of course, daylight saving time, which is a whole pointless nightmare in and of itself that should be abolished. Daylight saving rules and exceptions alone cover a ton of the oddities and difficulties with timezones, which is reason enough to get rid of it, aside from all the other possible issues, but a proposal to abolish it in the EU has sadly stalled.
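You can poke at some of these political oddities directly from the IANA tz database, which Python exposes through its standard zoneinfo module. A quick sketch (assuming the usual tzdata files are available on your system); the zone names are real IANA identifiers, the dates are arbitrary:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# All of China observes Beijing time: even the far west is UTC+8,
# despite the country spanning five geographical timezones.
west_china = datetime(2024, 6, 1, 12, 0, tzinfo=ZoneInfo("Asia/Shanghai"))
print(west_china.utcoffset())  # 8:00:00

# Kiribati moved the Line Islands across the date line in 1995,
# creating the largest offset in use today: UTC+14.
kiritimati = datetime(2024, 6, 1, 12, 0, tzinfo=ZoneInfo("Pacific/Kiritimati"))
print(kiritimati.utcoffset())  # 14:00:00

# Nepal runs on a quarter-hour offset: UTC+5:45.
kathmandu = datetime(2024, 6, 1, 12, 0, tzinfo=ZoneInfo("Asia/Kathmandu"))
print(kathmandu.utcoffset())  # 5:45:00
```

The tz database even preserves much of the historical weirdness Carion's article mines, which is exactly why correct timezone handling means shipping regular tzdata updates rather than hardcoding offsets.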
Speaking of Wayland, one of the most important parts of the transition is Xwayland, which makes sure legacy X applications not yet capable of running under a modern graphics stack can continue to function. Xwayland applications have had a weird visual glitch during resize operations, however, where the opposite side of the window would expand and contract while resizing. KDE developer Vlad Zahorodnii wanted to fix this, and he wrote a very detailed article explaining why, exactly, this bug happens, which takes you deep into the weeds of X and Wayland. Window resizing in X would be a glitchy mess if it weren't for the X11 protocol to synchronize window repaints during interactive resize, which ensures that the window resize and the application repainting its window contents remain synchronised. This protocol is supported by KWin and GNOME's Mutter, so what's the problem here? Shouldn't everything just work?

KWin supports the basic frame synchronization protocol, so there should be no visual glitches when resizing X11 windows in the Plasma Wayland session, right? At a quick glance, yes, but we forget about the most important detail: Wayland compositors don't use XCompositeNameWindowPixmap() or xcb_composite_name_window_pixmap() to grab the contents of X11 windows; instead, they rely on Xwayland attaching graphics buffers to wl_surface objects, so there is no strict order between the Wayland compositor receiving an XSync request acknowledgement and graphics buffers for the new window size.

Vlad Zahorodnii

Basically, the goal of the fix is to make sure these steps are also synchronised when using Xwayland, and that's exactly what Zahorodnii has achieved. This makes resizing X windows under Xwayland look normal and free of weird visual glitches, which is a massive improvement to the overall experience of using a Wayland desktop with a few stray X applications.
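The race Zahorodnii describes can be sketched roughly like this - my simplified pseudocode of the sequence of events, not actual KWin or Xwayland code:

```
# Interactive resize of an X11 window under a Wayland compositor:
compositor: sends the new size to the app, together with an XSync counter value
app:        repaints its contents at the new size
app:        acknowledges the XSync counter
xwayland:   attaches a new, correctly sized wl_buffer to the window's wl_surface

# The problem: nothing orders the last two events. The compositor may
# process the XSync acknowledgement before the matching wl_buffer
# arrives, and then composite an old-size buffer into the new window
# geometry - producing the expanding/contracting glitch.
#
# The fix, roughly: keep the acknowledgement and the matching buffer
# tied together, so the compositor only applies the new geometry once
# a buffer of the acknowledged size has actually been committed.
```

This is why the glitch only appears on the Xwayland path: native Wayland clients atomically commit a buffer and its new size in one wl_surface commit, so this ordering problem can't arise for them.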
Thanks to this fix, which was made possible with help from Wayland developers, KWin is now one of the few compositors that correctly synchronise X windows running under Wayland. KDE has been doing an amazing job moving from X to Wayland, and I don't think there's anyone else who has managed to make the transition quite as painless. Not only do KDE developers focus on difficult bugs like this one, which many others would just shrug off as acceptable jank, they also made things like the Wayland to X11 Video Bridge, a desktop-agnostic tool that allows things like screen sharing in Teams, Discord, Slack, etc. to work properly on Wayland.
The slow rise of Wayland hasn't really been slow anymore for years now, and today another major part of the Linux ecosystem is making the jump from X to Wayland.

So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop. After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models.

Simon Long

Raspberry Pi Desktop already used Wayland on some of the newer models, through the use of Wayfire. However, it turned out Wayfire wasn't a good fit for the older Pi models, and Wayfire's development direction would move it even further away from supporting them, which is obviously important to the Raspberry Pi Foundation. They eventually settled on labwc instead, which can also be used on older Pi models. As such, all Pi models will now switch to Wayland with the latest update to the operating system. This new update also brings vastly improved touchscreen support, a rewritten panel application that won't keep removed plugins in memory, a new display configuration utility, and more.
Do you want OSNews to continue to exist? Do you like the selection of news items I manage to scrounge up almost every day? Do you want OSNews free from corporate influence, "AI"-generated nonsense, and the kind of SEO-optimised blogspam we all despise? Consider supporting OSNews financially, so I can keep running the site as an independent entity, free from the forces that make the web shittier every day. There are several ways you can support OSNews. First, you can become a Patreon. Being an OSNews Patreon means no more ads on OSNews, access to the OSNews Matrix room, and some fancy flair on your comments. The goal is to eventually have enough Patreons supporting us to make us independent even from regular ads, which means we'll need to hit at least $1500-2000 a month. Once we achieve that, we will turn off ads for everyone. OSNews is my job, and thus my only source of income, so we can only turn off ads once community support is high enough to do so. This is obviously a long-term goal. To help us all get there, I've added a brand new, even higher Patreon tier. If being a Platinum Patreon isn't enough for you, you can now move on up and become an Antimatter Patreon for $50/month. You'll get all the same benefits as the Platinum tier, but on top of that, you can opt to have your name permanently displayed on the front page in our sidebar. This tier is really specifically designed for the most hardcore supporters of OSNews, and can even be used as a bit of a marketing tool for yourself. By the way, I do not know where to go after antimatter. What's rarer and more expensive than antimatter? Second, you can make an individual donation to OSNews through Ko-Fi. Recently, my wife, two kids, and I were all hit with, in order, bronchitis, flu, and then a minor cold. With all of us down and out, unable to work, our finances obviously took a bit of a hit.
My wife works in home care for the elderly, which isn't exactly a job with a fair wage, so any time we can't work, it hits us hard. Individual Ko-Fi donations have proven to be lifesavers. As such, I've set up a Ko-Fi donation target of $2500, so my wife, kids, and I can build up a bit of a buffer for emergencies. Creating such a buffer will be a huge load off our backs. Third, we have official OSNews merch! Our merch store is filled with a ton of fun products for the operating system connoisseurs among us, from the basic OSNews T-shirt and mug, to the old-school ASCII-art OSNews T-shirt and sweatshirt, and finally three unique terminal T-shirts showing the terminals of MS-DOS, BeOS, and Mac OS X. Each of the terminal shirts sports the correct colour scheme, text, and fonts. The pricing has been set up in such a way that for each product sold, we receive about $8. OSNews has always been a passion project for everyone involved, and I'd like to continue that. By making sure we're independent, free from the forces that are destroying websites left, right, and centre, OSNews can keep doing what it's always done: report on things nobody else covers, without the pressure to post 45 items about every new iPhone, stupid SEO blogspam nonsense about how to plug in a USB cable or whatever, or "AI"-generated drudgery. The people making that possible are all of our Patreons, Ko-Fi donors, and merch customers. You have no idea how thankful I am for each and every one of you.
The Trinity Desktop Environment, a fork of the last release in the KDE 3.x series, has just released its latest version, R14.1.3. Despite the rather small version number change, it contains some very welcome new features. TDE has started the process of integrating the XDG Desktop Portal API, which will bring a lot of welcome integration with applications from the wider ecosystem. There's also a brand new touchpad settings module, which was something I sorely missed when I tried out TDE a few months ago. Furthermore, there's of course a ton of bugfixes and improvements, but also things like support for tiling windows, some new theme and colour scheme options, and a lot more. Not too long ago, when KDE's Akademy 2024 took place, a really fun impromptu event happened. A number of KDE developers got together - I think in a restaurant or coffee place - and ended up organising an unplanned TDE installation party. Several photos floated around Mastodon of KDE developers using TDE, and after a few fun interactions between KDE and TDE developers on Mastodon, TDE developers ended up being invited to next year's Akademy. We'll have to wait and see if the schedules line up, but if any of this can lead to both projects benefiting from some jolly cooperation, it can only be seen as a good thing. Regardless, TDE is an excellent project with a very clear goal, and they're making steady progress all the time. It's not a fast-paced environment chasing the latest and greatest technologies; instead, it builds upon a solid foundation, bringing it into the modern world where it makes sense. If you like KDE 3.x, TDE is going to be perfect for you.
There are many ways to judge whether an operating system has made it to the big leagues, and one of the more unpleasant ones is the availability of malware. Haiku, the increasingly capable and daily-driveable successor to BeOS, is now officially a mainstream operating system, as it just got its first piece of malware.

HaikuRansomware is an experimental ransomware project designed for educational and investigative purposes. Inspired by the art of poetry and the challenge of cryptography, this malware encrypts files with a custom extension and provides a ransom note with a poetic touch. This is a proof of concept aimed to push the boundaries of how creative ransomware can be designed.

HaikuRansomware's GitHub page

Now, this is obviously a bit of a tongue-in-cheek, experimental kind of thing, but it's still something quite unique to happen to Haiku. I'm not entirely sure how the ransomware is supposed to spread, but my guess would be through social engineering. With Haiku being a relatively small project, and one wherein every user runs as root - baron, in BeOS parlance - I'm sure anything run through social engineering can do some serious damage without many guardrails in place. Don't quote me on that, though, as Haiku may have more advanced guardrails and mitigations in place than classic BeOS did. This proof of concept has no ill intent, and is more intended as an art project to highlight what you can do with encryption and ransomware on Haiku today, and I definitely like the art-focused approach of the author.
As of the previous release of POSIX, the Austin Group gained more control over the specification, making it more working group-oriented, and they got to work making the POSIX specification more modern. POSIX 2024 is the first release that bears the fruits of this labor, and as such, the changes made in it are particularly interesting, as they will define the direction of the specification going forward. This is what this article is about! Well, mostly. POSIX is composed of a couple of sections. Notably XBD (Base Definitions, which talk about things like what a file is, how regular expressions work, etc.), XSH (System Interfaces, the C API that defines POSIX's internals), and XCU (which defines the shell command language and the standard utilities available for the system). There's also XRAT, which explains the rationale of the authors, but it's less relevant for our purposes today. XBD and XRAT are both interesting as context for XSH and XCU, but those are the real meat of the specification. This article will focus on the XCU section, in particular the utilities part of that section. If you're more interested in the XSH section, there's an excellent summary page by sortix's Jonas Termansen that you can read here.

im tosti

The weekend isn't over yet, so here's some more light reading.
Old Vintage Computing Research, by the incredibly knowledgeable Cameron Kaiser, is one of the best resources on the web about genuinely obscure retrocomputing, often diving quite deep into topics nobody else covers - or even can cover, considering how rare some of the hardware Kaiser covers is. I link to Old VCR all the time, and today I've got two more great articles by Kaiser for you. First, we've got the more well-known - relatively speaking - of the two devices covered today, and that's the MIPS ThinkPad, officially known as the IBM WorkPad z50. This was a Windows CE 2.11 device powered by a NEC VR4120 MIPS processor, running at 131 MHz, released in 1999. Astute readers might note the WorkPad branding, which IBM also used for several rebranded Palm Pilots. Kaiser goes into his usual great detail covering this device, with tons of photos, and I couldn't stop reading for a second. There's so much good information in here I have no clue what to highlight, but since OSNews has OS in the name, this section makes sense to focus on:

The desktop shortcuts are pre-populated in ROM along with a whole bunch of applications. The marquee set that came on H/PC Pro machines was Microsoft Pocket Office (Pocket Word, Pocket Excel, Pocket Access and Pocket PowerPoint), Pocket Outlook (Calendar, Contacts, Inbox and Tasks) and Pocket Internet Explorer, but Microsoft also included Calculator, InkWriter (not too useful on the z50 without a touch screen), Microsoft Voice Recorder, World Clock, ActiveSync (a la Palm HotSync), PC Link (direct connect, not networked), Remote Networking, Terminal (serial port and modem), Windows Explorer and, of course, Solitaire. IBM additionally licensed and included some of bSquare's software suite, including bFAX Pro for sending and receiving faxes with the softmodem, bPRINT for printing and bUSEFUL Backup Plus for system backups, along with a battery calibrator and a Rapid Access quick configuration tool.
There is also a CMD.EXE command shell, though it too is smaller and less functional than its desktop counterpart.

Old Vintage Computing Research

Using especially these older versions of Windows CE is a wild experience, because you can clearly tell Microsoft was trying really hard to make it look and feel like 'normal' Windows, but as anyone who used Windows CE back then can attest, it was a rather poor imitation, with a ton of weird limitations and design decisions borne from the limited hardware it was designed to run on. I absolutely adore the various incarnations of Windows CE and the associated graphical shells it ran - especially the PocketPC days - but there's no denying it always felt quite clunky. Moving on, the second Old VCR article I'm covering today is more difficult for me to write about, since I am too young to have any experience with the 8-bit era - save for some experience with the MSX platform as a wee child - so I have no affinity for machines like the Commodore 64 and similar machines from that era. And, well, this article just so happens to cover something called the Commodore HHC-4.

Once upon a time (and that time was Winter CES 1983), Commodore announced what was to be their one and only handheld computer, the Commodore HHC-4. It was never released and never seen again, at least not in that form. But it turns out that not only did the HHC-4 actually exist, it also wasn't manufactured by Commodore - it was a Toshiba. Like Superman had Clark Kent, the Commodore HHC-4 had a secret identity too: the Toshiba Pasopia Mini IHC-8000, the very first portable computer Toshiba ever made. And like Clark Kent was Superman with glasses, compare the real device to the Commodore marketing photo and you can see that it's the very same machine modulo a plastic palette swap. Of course there's more to the story than that.
Old Vintage Computing Research

Of course, Kaiser hunted down an IHC-8000, and details his experiences with the little handheld, calculator-like machine. It turns out it's most likely using some unspecified in-house Toshiba architecture, running at a few hundred kHz, and it's apparently quite sluggish. It never made it to market in Commodore livery, most likely because of its abysmal performance. The amount of work required to make this little machine more capable and competitive probably couldn't be recouped by its intended list price, Kaiser argues.
Firmware, software that's intimately involved with hardware at a low level, has changed radically with each of the different processor architectures used in Macs. Howard Oakley A quick but still detailed overview of the various approaches to Mac firmware Apple has employed over the years, from the original 68k firmware and Mac OS ROMs to the modern approach specific to Apple's M-series chips.
There's a date looming on the horizon for the vast majority of Windows users. While Windows 11 has been out for a long time now, most Windows users are still on Windows 10 - about 63% - while Windows 11 is used by only about 33% of Windows users. In October 2025, however, support for Windows 10 will end, leaving two-thirds of Windows users without the kind of updates they need to keep their systems secure and running smoothly. Considering Microsoft is in a lot of hot water over its security practices once again lately, this must be a major headache for the company. The core of the problem is that Windows 11 has a number of very strict hardware requirements that are mostly arbitrary, and make it impossible for huge swaths of Windows 10 users to upgrade to Windows 11 even if they wanted to. And that is a problem in and of itself too: people don't seem to like Windows 11 very much, and definitely prefer to stick to Windows 10 even if they can upgrade. It's going to be quite difficult for Microsoft to convince those people to upgrade, which likely won't happen until they buy a new machine, which in turn is something that just isn't necessary as often as it used to be. That first group of users - the ones who want to upgrade, but can't - do have unofficial options, a collection of hacks to jank Windows 11 into installing on unsupported hardware. This comes with a number of warnings from Microsoft, so you may wonder how much of a valid option this really is. Ars Technica has been running Windows 11 on some unsupported machines for a while, and concludes that while it's problem-free in day-to-day use, there's a big caveat you won't notice until it's time for a feature update. These won't install without going through the same hacks you needed to use when you first installed Windows 11 and manually downloading the update in question.
This essentially means you'll need to repeat the steps for doing a new unsupported Windows 11 install every time you want to upgrade. As we detail in our guide, that's relatively simple if your PC has Secure Boot and a TPM but doesn't have a supported processor. Make a simple registry tweak, download the Installation Assistant or an ISO file to run Setup from, and the Windows 11 installer will let you off with a warning and then proceed normally, leaving your files and apps in place. Without Secure Boot or a TPM, though, installing these upgrades in place is more difficult. Trying to run an upgrade install from within Windows just means the system will yell at you about the things your PC is missing. Booting from a USB drive that has been doctored to overlook the requirements will help you do a clean install, but it will delete all your existing files and apps. Andrew Cunningham at Ars Technica The only way around this that may work is yet another hack, which tricks the update into thinking it's installing Windows Server, which seems to have less strict requirements. This way, you may be able to perform an upgrade from one Windows 11 version to the next without losing all your data and requiring a fresh installation. It's one hell of a hack that no sane person should have to resort to, but it looks like it might be an inevitability for many. October 2025 is going to be a slaughter for Windows users, and as such, I wouldn't be surprised to see Microsoft postponing this date considerably to give the two-thirds of Windows users more time to move to Windows 11 through their regular hardware replacement cycles. I simply can't imagine Microsoft leaving the vast majority of its Windows users completely unprotected. Spare a thought for our Windows 10-using friends. They're going to need it.
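As an aside, the registry tweak Ars alludes to is - as far as I know - the one Microsoft itself documented: a single value that lets Setup proceed on a machine that has Secure Boot and a TPM (even an older 1.2 module) but an unsupported CPU. A sketch of it as a .reg file; apply at your own risk, since it explicitly opts you into an unsupported configuration:

```reg
Windows Registry Editor Version 5.00

; Lets an in-place Windows 11 upgrade proceed despite an unsupported
; CPU or a TPM 1.2 module; a TPM and Secure Boot must still be present.
[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```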
If you love exploit mitigations, you may have heard of a new system call named mseal landing in the Linux kernel's 6.10 release, providing a protection called "memory sealing". Beyond notes from the authors, very little information about this mitigation exists. In this blog post, we'll explain what this syscall is, including how it's different from prior memory protection schemes and how it works in the kernel to protect virtual memory. We'll also describe the particular exploit scenarios that mseal helps stop in Linux userspace, such as stopping malicious permissions tampering and preventing memory unmapping attacks. Alan Cao The goal of mseal is to, well, literally seal a region of memory. It makes the mapping itself immutable, so that while a program is running, a sealed region's permissions cannot be changed and the region cannot be unmapped by malicious actors. This article goes into great detail about this new feature, explains how it works, and what it means for security in the Linux kernel. Excellent light reading for the weekend.
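To make the mechanism a bit more tangible, here's a minimal Python sketch that maps an anonymous page and tries to seal it with the raw syscall. The syscall number 462 is an assumption that holds for the mainline syscall table (Linux 6.10 and later); on older kernels the call simply returns ENOSYS.

```python
import ctypes
import errno
import os

libc = ctypes.CDLL(None, use_errno=True)

PROT_READ, PROT_WRITE = 0x1, 0x2
MAP_PRIVATE, MAP_ANONYMOUS = 0x02, 0x20
PAGE = os.sysconf("SC_PAGE_SIZE")
SYS_mseal = 462  # mainline syscall number for mseal; Linux >= 6.10 only

# Map one anonymous read/write page via libc.
libc.mmap.restype = ctypes.c_void_p
addr = libc.mmap(None, ctypes.c_size_t(PAGE), PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)

# Try to seal it: on success, later mprotect()/munmap() calls on this
# range fail with EPERM, so an attacker can no longer flip permissions
# or unmap the region out from under the program.
ret = libc.syscall(SYS_mseal, ctypes.c_void_p(addr), ctypes.c_size_t(PAGE), 0)
if ret == 0:
    print("page sealed; permission changes are now rejected")
else:
    print("mseal unavailable:", errno.errorcode.get(ctypes.get_errno(), "unknown"))
```

On a 6.10+ kernel a follow-up `mprotect()` on the sealed page returns -1 with EPERM, which is exactly the "malicious permissions tampering" scenario the article describes.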
One-third of payments to contractors training AI systems used by companies such as Amazon, Meta and Microsoft have not been paid on time after the Australian company Appen moved to a new worker management platform. Appen employs 1 million contractors who speak more than 500 languages and are based in 200 countries. They work to label photographs, text, audio and other data to improve AI systems used by the large tech companies and have been referred to as "ghost workers" - the unseen human labour involved in training systems people use every day. Josh Taylor at The Guardian It's crazy that if you peel back the layers on top of a lot of tools and features sold to us as "artificial intelligence", you'll quite often find underpaid workers doing the labour that technology companies tell us is done by computers running machine learning algorithms. The fact that so many of them are either deeply underpaid or, as in this case, not even paid at all, while companies like Google, Apple, Microsoft, and OpenAI are raking in ungodly profits, is deeply disturbing. It's deeply immoral on so many levels, and just adds to the uncomfortable feeling people have with "AI". Again I'd like to reiterate I'm not intrinsically opposed to the current crop of artificial intelligence tools - I just want these mega corporations to respect the rights of artists, and not use their works without permission to earn immense amounts of money. On top of that, I don't think it should be legal for them to lie about how their tools really work under the hood, and I want the workers who actually do the work claimed to be done by "AI" to be properly paid. Is any of that really too much to ask? Fix these issues, and I'll stop putting quotation marks around "AI".
Windows 11, version 24H2 represents significant improvements to the already robust update foundation of Windows. With the latest version, you get reduced installation time, restart time, and central processing unit (CPU) usage for Windows monthly updates. Additionally, enhancements to the handling of feature updates further reduce download sizes for most endpoints by extending conditional downloads to include Microsoft Edge. Let's take a closer look at these advancements. Steve DiAcetis at the Windows IT Pro Blog Now this is the kind of stuff we want to see in new Windows releases. Updating Windows feels like a slow, archaic, and resource-intensive process, whereas on, say, my Fedora machines it's such an effortless, lightweight process I barely even notice it's happening. This is an area where Windows can make some huge strides that materially affect people - Windows updates are a meme - and it's great to see Microsoft working on this instead of shoving more ads onto Windows users' desktops. In this case, Microsoft managed to reduce installation time, make reboots faster, and lower CPU and RAM usage through a variety of measures roughly falling into one of three groups: improved parallel processing, faster and optimised reading of update manifests, and more optimal use of available memory. We're looking at some considerable improvements here, such as a 45% reduction in installation time, 15-25% less CPU usage, and more. Excellent work. On a related note, at the Qualcomm Snapdragon Summit, Microsoft also unveiled a number of audio improvements for Windows on ARM that will eventually also make their way to Windows on x86. I'm not exactly an expert on audio, but from what I understand the Windows audio stack is robust and capable, and what Microsoft announced today will improve the stack even further.
For instance, support for MIDI 2.0 is coming to Windows, with backwards compatibility for MIDI 1.0 devices and APIs, and Microsoft worked together with Yamaha and Qualcomm to develop a new USB Audio Class 2 Driver. In the company's blog post, Microsoft explains that the current USB Audio Class 2 driver in Windows is geared towards consumer audio applications, and doesn't fulfill the needs of professional audio engineers. This current driver does not support the standard professional audio software has settled on - ASIO - forcing people to download custom, third-party kernel drivers to get this functionality. That's not great for anybody, and as such they're working on a new driver. The new driver will support the devices that our current USB Audio Class 2 driver supports, but will increase support for high-IO-count interfaces with an option for low-latency for musician scenarios. It will have an ASIO interface so all the existing DAWs on Windows can use it, and it will support the interface being used by Windows and the DAW application at the same time, like a few ASIO drivers do today. And, of course, it will handle power management events on the new CPUs. Pete Brown at the Dev Blogs The code for this driver will be published as open source on GitHub, so that anyone still opting to make a specialised driver can use Microsoft's code to see how things are done. That's a great move, and one that I think we'll be seeing more often from Microsoft. This is great news for audio professionals using Windows.
The processor in the Game Boy Advance, the ARM7TDMI, has a weird characteristic where the carry flag is set to a "meaningless value" after a multiplication operation. What this means is that software cannot and should not rely on the value of the carry flag after multiplication executes. It can be set to anything. Any value. 0, 1, a horse, whatever. This has been a source of memes in the emulator development community for a few years - people would frequently joke about how the implementation of the carry flag may as well be cpu.flags.c = rand() & 1;. And they had a point - the carry flag seemed to defy all patterns; nobody understood why it behaves the way it does. But the one thing we did know, was that the carry flag seemed to be deterministic. That is, under the same set of inputs to a multiply instruction, the flag would be set to the same value. This was big news, because it meant that understanding the carry flag could give us key insight into how this CPU performs multiplication. And just to get this out of the way, the carry flag's behavior after multiplication isn't an important detail to emulate at all. Software doesn't rely on it. And if software did rely on it, then screw the developers who wrote that software. But the carry flag is a meme, and it's a really tough puzzle, and that was motivation enough for me to give it a go. Little did I know it'd take 3 years of on and off work. bean machine Please don't make me understand any of this.
When I think about bhyve Live Migration, it's something I encounter almost daily in my consulting calls. VMware's struggles with Broadcom's licensing issues have been a frequent topic, even as we approach the end of 2024. It's surprising that many customers still feel uncertain about how to navigate this mess. While VMware has been a mainstay in enterprise environments for years, these ongoing issues make customers nervous. And they should be - it's hard to rely on something when even the licensing situation feels volatile. Now, as much as I'm a die-hard FreeBSD fan, I have to admit that FreeBSD still falls short when it comes to virtualization - at least from an enterprise perspective. In these environments, it's not just about running a VM; it's about having the flexibility and capabilities to manage workloads without interruption. Years ago, open-source solutions like KVM (e.g., Proxmox) and Xen (e.g., XCP-ng) introduced features like live migration, where you can move VMs between hosts with zero downtime. Even more recently, solutions like SUSE Harvester (utilizing KubeVirt for running VMs) have shown that this is now an essential part of any virtualization ecosystem. gyptazy FreeBSD has bhyve, but the part where it falls short, according to gyptazy, is the tool's lack of live migration. While competitors and alternatives allow for virtual machines to be migrated without downtime, bhyve users still need to shut down their VMs, interrupt all connections, and thus experience a period of downtime before everything is back up and running again. This is simply not acceptable in most enterprise environments, and as such, bhyve is not an option for most users of that type. Luckily for enterprise FreeBSD users, things are improving. Live migration of bhyve virtual machines is being worked on, and basic live migration is now supported, but with limitations. 
For instance, initially only virtual machines with a maximum of 3GB of RAM could be migrated live, but that limit has been raised in recent years to 13 to 14GB, which is a lot more palatable. There are also some remaining problems, such as memory corruption. Still, it's a massive feat to have live migration at all, and it seems to be improving every year. The linked article goes into much greater detail about where things stand, so if you're interested in keeping up with the latest progress regarding bhyve's live migration capabilities, it's a great place to start.
At the Snapdragon Summit today, Qualcomm is officially announcing the Snapdragon 8 Elite, its flagship SoC for smartphones. The Snapdragon 8 Elite is a major upgrade from its predecessor, with improvements across the board. Qualcomm is also changing its naming scheme for its flagship SoCs from Snapdragon 8 Gen X to Snapdragon 8 Elite. Pradeep Viswanathan at Neowin It's wild - but not entirely unexpected - how we always seem to end up in a situation in technology where crucial components, such as the operating system or processor, are made by one, or at most two, companies. While there are a few other smartphone system-on-a-chip vendors, they're mostly relegated to low-end devices, and can't compete on the high end, where the money is, at all. It's sadness. Speaking of our mobile SoC overlords, they seem to be in a bit of a pickle when it comes to their core business of, well, selling SoCs. In short, Qualcomm bought Nuvia to use its technology to build the current crop of Snapdragon X Elite and Pro laptop chips. According to ARM, Qualcomm does not have an ARM license to do so, and as such, a flurry of lawsuits between the two companies followed. ARM is now cancelling certain Qualcomm ARM licenses, specifically arguing that its laptop Snapdragon X chips should be destroyed. What we're looking at here is two industry giants engaged in very public, and very expensive, contract negotiations, using the legal system as their arbiter. This will eventually fizzle out into a new agreement between the two companies with renewed terms and conditions - and flows of money - but until that dust has settled, be prepared for an endless flurry of doomerist news items about this story. As for us normal people? We don't have to worry one bit about this legal nonsense. It's not like we have any choice in smartphone chips anyway.
I commented on Lobsters that /tmp is usually a bad idea, which caused some surprise. I suppose /tmp security bugs were common in the 1990s when I was learning Unix, but they are pretty rare now so I can see why less grizzled hackers might not be familiar with the problems. I guess that's some kind of success, but sadly the fixes have left behind a lot of scar tissue because they didn't address the underlying problem: /tmp should not exist. Tony Finch Not only is this an excellent, cohesive, and convincing argument against the existence of /tmp, it also contains some nice historical context as to why things are the way they are. Even without the arguments against /tmp, though, it just seems entirely more logical, cleaner, and sensible to have /tmp directories per user, in per-user locations. While I never would've been able to so eloquently explain the problem as Finch does, it just feels wrong to have every user resort to the exact same directory for temporary files, like a complex confluence of bad decisions you just know is going to cause problems, even if you don't quite understand the intricate interplay.
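Finch's point is easy to act on in your own code, too: most languages ship race-free primitives that avoid predictable names in the shared /tmp. A small Python sketch of the safe pattern (the "myapp-" prefix is just an illustrative name):

```python
import os
import tempfile

# mkdtemp() creates a directory with mode 0700 and an unpredictable
# name, so another user cannot pre-create or symlink-attack the path.
workdir = tempfile.mkdtemp(prefix="myapp-")

# mkstemp() opens its file with O_CREAT | O_EXCL, again defeating the
# classic predictable-name /tmp races of the 1990s.
fd, path = tempfile.mkstemp(dir=workdir)
os.write(fd, b"scratch data")
os.close(fd)

print(workdir, path)
```

Pointing $TMPDIR at a per-user location such as $XDG_RUNTIME_DIR makes these calls bypass the shared /tmp entirely, which is essentially the per-user arrangement Finch argues for.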
Apple announced a trio of major new hearing health features for the AirPods Pro 2 in September, including clinical-grade hearing aid functionality, a hearing test, and more robust hearing protection. All three will roll out next week with the release of iOS 18.1, and they could mark a watershed moment for hearing health awareness. Apple is about to instantly turn the world's most popular earbuds into an over-the-counter hearing aid. Chris Welch at The Verge Rightfully so, most of us here have a lot of issues with the major technology companies and the way they do business, but every now and then, even they accidentally stumble into doing something good for the world. AirPods are already a success story, and gaining access to hearing aid-level features at their price point is an absolute game changer for a lot of people with hearing issues - and for a lot of people who don't even yet know they have hearing issues in the first place. If you have people in your life with hearing issues, or whom you suspect may have hearing issues, gifting them AirPods this Christmas season may just be a perfect gift. Yes, I too think hearing aids should be a thing nobody has to pay for and which should just be part of your country's universal healthcare coverage - assuming you have such a thing - but this is not a bad option as a replacement.
System76, purveyor of Linux computers, distributions, and now also desktop environments, has just unveiled its latest top-end workstation, but this time, it's not an x86 machine. They've been working together with Ampere to build a workstation based around Ampere's Altra ARM processors: the Thelio Astra. Phoronix, fine purveyor of Linux-focused benchmarks, was lucky enough to benchmark one, and has more information on the new workstation. System76 designed the Thelio Astra in collaboration with Ampere Computing. The System76 Thelio Astra makes use of Ampere Altra processors up to the Ampere Altra Max 128-core ARMv8 processor that in turn supports 8-channel DDR4 ECC memory. The Thelio Astra can be configured with up to 512GB of system memory, choice of Ampere Altra processors, up to NVIDIA RTX 6000 Ada Generation graphics, dual 10 Gigabit Ethernet, and up to 16TB of PCIe 4.0 NVMe SSD storage. System76 designed the Thelio Astra ARM64 workstation to be complemented by NVIDIA graphics given the pervasiveness of NVIDIA GPUs/accelerators for artificial intelligence and machine learning workloads. The Astra is contained within System76's custom-designed, in-house-manufactured Thelio chassis. Pricing on the System76 Thelio Astra will start out at $3,299 USD with the 64-core Ampere Altra Q64-22 processor, 2 x 32GB of ECC DDR4-3200 memory, 500GB NVMe SSD, and NVIDIA A402 graphics card. Michael Larabel This pricing is actually remarkably favourable considering the hardware you're getting. System76 and its employees have been dropping hints for a while now that they were working on an ARM variant of their Thelio workstation, and knowing some of the prices others are asking, I definitely expected the base price to hit $5,000, so this is a pleasant surprise. With the Altra processors getting a tiny bit long in the tooth, you do notice some oddities here, specifically the DDR4 RAM instead of the more modern DDR5, as well as the lack of PCIe 5.0.
The problem is that while the Altra has a successor in the AmpereOne processor, its availability is quite limited, and most of them probably end up in datacentres and expensive servers for big tech companies. This newer variant does come with DDR5 and PCIe 5.0 support, but doesn't yet have a lower core count version, so even if it were readily available it might simply push the price too far up. Regardless, the Altra is still a ridiculously powerful processor, and at anywhere between 64 and 128 cores, it's got power to spare. The Thelio Astra will be available come 12 November, and while I would perform a considerable number of eyebrow-raising acts to get my hands on one, it's unlikely System76 will ship one over for a review. Edit: here's an excellent and detailed reply to our Mastodon account from an owner of an Ampere Altra workstation, highlighting some of the challenges related to your choice of GPU. Required reading if you're interested in a machine like this.
It's no secret that a default Windows installation is... Hefty. In more ways than one, Windows is a bit on the obese side of the spectrum, from taking up a lot of disk space, to requiring hefty system requirements (artificial or not), to coming with a lot of stuff preinstalled not everyone wants to have to deal with. As such, there's a huge cottage industry of applications, scripts, modified installers, custom ISOs, and more, that try to slim Windows down to a more manageable size. As it turns out, even Microsoft itself wants in on this action. The company that develops and sells Windows also provides a Windows debloat script. Over on GitHub, Microsoft maintains a repository of scripts to simplify setting up Windows as a development environment, and amid the collection of scripts we find RemoveDefaultApps.ps1, a PowerShell script to "Uninstall unnecessary applications that come with Windows out of the box". The script is about two years old, and as such it includes a few applications no longer part of Windows, but looking through the list is a sad reminder of the kind of junk Windows comes with, most notably mobile casino games for children like Bubble Witch and March of Empires, but also other nonsense like the Mixed Reality Portal or Duolingo. It also removes something called ActiproSoftwareLLC, which is apparently a set of third-party, non-Microsoft UI controls for WPF? Which comes preinstalled with Windows sometimes? What is even happening over there? The entire set of scripts makes use of Chocolatey wrapped in Boxstarter, which is "a wrapper for Chocolatey and includes features like managing reboots for you", because of course, the people at Microsoft working on Windows can't be bothered to fix application management and required reboots themselves. Silly me, expecting Microsoft's Windows developers to address these shortcomings internally instead of using third-party tools.
The repository seems to be mostly defunct, but the fact it even exists in the first place is such a damning indictment of the state of Windows. People keep telling us Windows is fine, but if even Microsoft itself needs to resort to scripts and third-party tools to make it usable, I find it hard to take claims of Windows being fine seriously in any way, shape, or form.
In early 2022 I got several Sun SPARC servers for free off of a FreeCycle ad: I was recently called out for not providing any sort of update on those devices... so here we go! Sidneys1.com Some information on booting old-style SPARC machines, as well as pretty pictures. Nice palate-cleanser if you've had to deal with something unpleasant this weekend. This world would be a better place if we all had our own Sun machines to play with when we get sad.
I don't think most people realize how Firefox and Safari depend on Google for more than "just" revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it's a hard image format to implement!), much of Safari's cryptography (from BoringSSL), Firefox's 2D renderer (Skia)... the list goes on. Much of Firefox's security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with Control Flow Integrity) is directly inspired by Chromium's architecture. Rohan "Seirdy" Kumar Definitely an interesting angle on the browser debate I hadn't really stopped to think about before. The argument is that while Chromium's dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let's face it, it's not like they're in a great position to do so. I'm not saying I buy the argument, but it's an argument nonetheless. I honestly wouldn't mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine Chromium being run by an advertising company, we all know where their focus lies, and it ain't on us as users. I'm still firmly on the side of less Chromium, please.
Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it's been spotted automatically turning off the popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense - Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it's not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers. Michelle Ehrhardt at LifeHacker Here's the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will follow suit in dropping Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users' trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-hole, and enjoy some of the best ad blocking you're ever going to get. It's definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you're feeling generous, set up Pi-holes for your parents, friends, and relatives. It's worth it to make their browsing experience faster, safer, and more pleasant. And once again I'd like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules.
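To make concrete why network-level blocking is so much harder to fight: a DNS sinkhole like Pi-hole simply answers queries for blocklisted domains with a dead address, before any browser - or browser extension policy - is involved. A toy Python sketch of the idea (the domain names and the stand-in upstream resolver here are made up for illustration):

```python
# Toy model of a DNS sinkhole. Real resolvers like Pi-hole do this with
# full DNS packets; the principle is just a lookup before forwarding.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(name, upstream):
    if name.rstrip(".").lower() in BLOCKLIST:
        return "0.0.0.0"        # sinkhole: the ad request goes nowhere
    return upstream(name)       # legitimate names reach the real resolver

fake_upstream = lambda name: "198.51.100.7"  # stand-in for a real resolver
print(resolve("ads.example.com", fake_upstream))   # blocked
print(resolve("www.example.org", fake_upstream))   # passed through
```

Because every device on the network uses the sinkhole as its DNS server, no per-browser change - Manifest v2 removal included - can re-enable the blocked requests.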
It's not like display ads are particularly profitable anyway, so I'd much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room - think IRC, but more modern, and fully end-to-end encrypted - is now up and running and accessible to all OSNews Patreons as well.
Something odd happened to Qualcomm's Snapdragon Dev Kit, an $899 mini PC powered by Windows 11 and the company's latest Snapdragon X Elite processor. Qualcomm decided to abruptly discontinue the product, refund all orders (including for those with units on hand), and cease its support, claiming the device "has not met our usual standards of excellence". Taras Buria at Neowin The launch of the Snapdragon X Pro and Elite chips seems to have mostly progressed well, but there have been a few hiccups for those of us who want ARM but aren't interested in Windows and/or laptops. There's this story, which is just odd all around, with an announced, sold, and even shipped product suddenly taken off the market, which I think at this point was the only non-laptop device with an X Elite or Pro chip. If you are interested in developing for Qualcomm's new platform, but don't want a laptop, you're out of luck for now. Another note is that the SoC SKU in the Dev Kit was clocked a tiny bit higher than the laptop SKUs, which perhaps plays a role in its cancellation. The bigger hiccup is the problematic Linux bring-up, which is posing many more problems and is taking a lot longer than Qualcomm very publicly promised it would take. For now, if you want to run Linux on a Snapdragon X Elite or Pro device, you're going to need a custom version of your distribution of choice, tailored to a specific laptop model, using a custom kernel. It's an absolute mess and basically means that at this point in time, months and months after release, buying one of these to run Linux on them is a bad idea. Quite a few important bits will arrive with Linux 6.12 to supposedly greatly improve the experience, but seeing is believing. Qualcomm made a lot of grandiose promises about Linux support, and they simply haven't delivered.
I want to take advantage of Go's concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD, Single Instruction, Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However, (as far as I can tell) Go's compiler does not utilize SIMD, and I could not find a general-purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform's instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format? Jacob Ray Pehringer Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don't fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan9 the operating system, which I find a mildly entertaining ruse to subject you to.
We've pulled together all kinds of resources to create a comprehensive guide to installing and upgrading to Windows 11. This includes advice and some step-by-step instructions for turning on officially required features like your TPM and Secure Boot, as well as official and unofficial ways to skirt the system-requirement checks on "unsupported" PCs, because Microsoft is not your parent and therefore cannot tell you what to do. There are some changes in the 24H2 update that will keep you from running it on every ancient system that could run Windows 10, and there are new hardware requirements for some of the operating system's new generative AI features. We've updated our guide with everything you need to know. Andrew Cunningham at Ars Technica In the before time, the things you needed to do to make Windows somewhat usable mostly came down to installing applications replicating features other operating systems had been enjoying for decades, but as time went on and Windows 10 came out, users also had to deal with disabling a ton of telemetry, deleting preinstalled adware, dodging the various dark patterns around Edge, and more. You have to wonder if it was all worth it, but alas, Windows 10 at least looked like Windows, if you squinted. With Windows 11, Microsoft really ramped up the steps users have to take to make it usable. There's all of the above, but now you also have to deal with an ever-increasing number of ads, even more upsells and Edge dark patterns, even more data gathering, and the various hacks you have to employ to install it on perfectly fine and capable hardware. With Windows 10's support ending next year, a lot of users are in a rough spot, since they can't install Windows 11 without resorting to hacks, and they can't keep using Windows 10 if they want to keep getting updates. And here comes 24H2, which makes it all even worse.
Not only have various avenues to make Windows 11 installable on capable hardware been closed, it also piles on a whole bunch of "AI" garbage, and accompanying upsells and dark patterns Windows users are going to have to deal with. Who doesn't want Copilot regurgitating nonsense in their operating system's search tool, or have Paint strongly suggest it will "improve" your quick doodle to illustrate something to a friend with that unique AI Style™ we all love and enjoy so much? Stay strong out there, Windows folks. Maybe it'll get better. We're rooting for you.
If you read my previous article on DOS memory models, you may have dismissed everything I wrote as "legacy cruft from the 1990s that nobody cares about any longer". After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, on the way, the amount of memory that these computers can leverage has grown orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory while today's 64-bit machines can theoretically access 16EB. All of this growth has been in service of ever-growing programs. But... even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms? Let's try to answer those questions to find some very surprising answers. But first, some theory. Julio Merino It's not quite the weekend yet, but I'm still calling this some light reading for the weekend.
Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There are always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts. In Android 15, some of the most notable involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google's own docs and blogs and various news sites' reports. Kevin Purdy at Ars Technica It's a welcome collection of changes and features to better align Android's theft and personal privacy protection with how thieves steal phones in this day and age. I'm not sure I understand all of them, though - the Private Space, where you can drop applications to lock them behind an additional PIN code, confuses me, since everyone can see it's there. I assumed Private Space would also give people in vulnerable positions - victims of abuse, journalists, dissidents, etc. - the option to truly hide parts of their life to protect their safety, but it doesn't seem to work that way. Android 15 will also use "AI" to recognise when a device is yanked out of your hands and lock it instantly, which is a great use case for "AI" that actually benefits people. Of course, it will be even more useful once thieves are aware this feature exists, so that they won't even try to steal your phone in the first place, but since this is Android, it'll be a while before Android 15 makes its way to enough users for it to matter.
Earlier this year we talked about Huawei's HarmonyOS NEXT, which is most likely the only serious competitor to Android and iOS in the world. HarmonyOS started out as a mere Android skin, but over time Huawei invested heavily into the platform to expand it into a full-blown, custom operating system with a custom programming language, and it seems the company is finally ready to take the plunge and release HarmonyOS NEXT into the wild. It's indicated that HarmonyOS made up 17% of China's smartphone market in Q1 of 2024. That's a significant amount of potential devices breaking off from Android in a market dominated by either it or iOS. HarmonyOS NEXT is set to begin rolling out to Huawei devices next week. The OS will first come to the Mate 60, Mate X5, and MatePad Pro on October 15. Andrew Romero at 9To5Google Huawei has been hard at work making sure there's no 'application gap' for people using HarmonyOS NEXT, claiming it has 10,000 applications ready to go that cover "99.9%" of their users' use cases. That's quite impressive, but of course, we'll have to wait and see if the numbers line up with the reality on the ground for Chinese consumers. Here in the west HarmonyOS NEXT is unlikely to gain any serious traction, but that doesn't mean I would mind taking a look at the platform if at all possible. It's honestly not surprising the most serious attempt at creating a third mobile ecosystem is coming from China, because here in the west the market is so grossly rusted shut we're going to be stuck with Android and iOS until the day I die.
Engineers at Google started work on a new Terminal app for Android a couple of weeks ago. This Terminal app is part of the Android Virtualization Framework (AVF) and contains a WebView that connects to a Linux virtual machine via a local IP address, allowing you to run Linux commands from the Android host. Initially, you had to manually enable this Terminal app using a shell command and then configure the Linux VM yourself. However, in recent days, Google began work on integrating the Terminal app into Android as well as turning it into an all-in-one app for running a Linux distro in a VM. Mishaal Rahman at Android Authority There are already a variety of ways to do this today, but having it as a supported feature implemented by Google is very welcome. This is also going to greatly increase the number of spammy articles and lazy YouTube videos telling you how to "run Ubuntu on your phone", which I'm not particularly looking forward to.
Next up in my backlog of news to cover: the US Department of Justice's proposed remedies for Google's monopolistic abuse. Now that Judge Amit Mehta has found Google is a monopolist, lawyers for the Department of Justice have begun proposing solutions to correct the company's illegal behavior and restore competition to the market for search engines. In a new 32-page filing (included below), they said they are considering both behavioral and structural remedies. That covers everything from applying a consent decree to keep an eye on the company's behavior to forcing it to sell off parts of its business, such as Chrome, Android, or Google Play. Richard Lawler at The Verge While I think it would be a great idea to break Google up, such an action taken in a vacuum seems to be rather pointless. Say Google is forced to spin off Android into a separate company - how is that relatively small Android, Inc. going to compete with the behemoth that is Apple and its iOS, to which such restrictions do not apply? How is Chrome Ltd. going to survive Microsoft's continued attempts at forcing Edge down our collective throats? Being a dedicated browser maker is working out great for Firefox, right? This is the problem with piecemeal, retroactive measures to try and "correct" a market position that you have known for years is being abused - sure, this would knock Google down a peg, but other, even larger megacorporations like Apple or Microsoft will be the ones to benefit most, not any possible new companies or startups. This is exactly why a market-wide, equally-applied set of rules and regulations, like the European Union's Digital Markets Act, is a far better and more sustainable approach. Unless similar remedies are applied to Google's massive competitors, these Google-specific remedies will most likely only make things worse, not better, for the American consumer.
Internet Archive's "The Wayback Machine" has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. "Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!," reads a JavaScript alert shown on the compromised archive.org site. Lawrence Abrams at Bleeping Computer To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take down the site while shoring everything up. It seems the attackers have no real motivation other than the fact that they can, but it's interesting, shall we say, that the Internet Archive has been under legal assault by big publishers for years now, too. I highly doubt the two are related in any way, but it's an interesting note nonetheless. I'm still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this kind of feels like throwing Molotov cocktails at a local library - there's literally not a single reason to do so, and the only people you're going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this, they're just assholes, no ifs and buts about it.
I finally seem to be recovering from a nasty flu that is now wreaking havoc all across my tiny Arctic town - better now than when we hit -40, I guess - so let's talk about something that's not going to recover because it actually just fucking died: Windows 7. For nearly everyone, support for Windows 7 ended on January 14th, 2020. However, if you were a business who needed more time to migrate off of it because your CEO didn't listen to the begging and pleading IT department until a week before the deadline, Microsoft did have an option for you. Businesses could pay to get up to 3 years of extra security updates. This pushes the EOL date for Windows 7 to January 10th, 2023. Okay, but that's still nearly 2 years earlier than October 8th, 2024? The Cool Blog I'd like to solve the puzzle! It's POSReady, isn't it? Of course it is! Windows Embedded POSReady's support finally ended a few days ago, and this means that for all intents and purposes, Windows 7 is well and truly dead. In case you happen to be a paleontologist, think of Windows Embedded POSReady adding an extra two years of support to Windows 7 as the mammoths who managed to survive on Wrangel Island until as late as 4,000 years ago. Windows 7 was one of the good ones, for sure, and all else being equal, I'd choose it over any of the releases that came after. It feels like Windows 7 was the last release designed primarily for users of the Windows platform, whereas later releases were designed more to nickel-and-dime people with services, ads, and upsells that greatly cheapened the operating system. I doubt we'll ever see such a return to form again, so Windows 7 might as well be the last truly beloved Windows release. If you're still using Windows 7 - please don't, unless you're doing it for the retrocomputing thrill. I know Windows 8, 10, and 11 are scary, and as much as it pains me to say this, you're better off with 10 or 11 at this point, if only for security concerns.
Sometimes I have the following problem to deal with: An OS/2 system uses NetBIOS over TCP/IP (aka TCPBEUI) and should communicate with an SMB server (likewise using TCPBEUI) on a different subnet. This does not work on OS/2 out of the box without a little bit of help. Michal Necasek My 40° fever certainly isn't helping, but this one goes way over my head. Still, it seems like an invaluable article for a small group of people, and anyone playing with OS/2 and networking from here on out can refer back to this excellent and detailed explanation.
Entirely coincidentally, the KDE team released Plasma 6.2 yesterday, the latest release in the well-received 6.x series. As the version number implies, it's not a groundbreaking release, but it does contain a number of improvements that are very welcome to a few specific, often underserved groups. For instance, 6.2 overhauls the Accessibility settings panel, and adds, among other things, colourblindness filters for a variety of types of colourblindness. This condition affects roughly 8-9% of the population, so it's an important new feature. Another group of people served by Plasma 6.2 are artists. Plasma 6.2 includes a smorgasbord of new features for users of drawing tablets. Open System Settings and look for Drawing Tablet to see various tools for configuring drawing tablets. New in Plasma 6.2: a tablet calibration wizard and test mode; a feature to define the area of the screen that your tablet covers (the whole screen or a section); and the option to re-bind pen buttons to different kinds of mouse clicks. KDE Plasma 6.2 release announcement Artists and regular users alike can now also enjoy better colour management, more complete HDR support, a tone-mapping feature in KWin, and much more. Power management has been improved as well, so you can now manage brightness per individual monitor, control which applications block the system from going to sleep, and so on. There's also the usual array of bug fixes, UI tweaks, and so on. Plasma 6.2 is already available in at least Fedora and openSUSE, and it will find its way to your distribution soon enough, too.
Over the decades, my primary operating system of choice has changed a few times. As a wee child of six years old, we got our first PC through one of those employer buy-a-PC programs, where an employer would subsidize its employees buying PCs for use in the home. The goal here was simple: if people get comfortable with a computer in their private life, they'll also get comfortable with it in their professional life. And so, through my mother's employer, we got a brand new 286 desktop running MS-DOS and Windows 3.0. I still have the massive and detailed manuals and original installation floppies it came with. So, my first 'operating system of choice' was MS-DOS, and to a far lesser extent Windows 3.0. As my childhood progressed, we got progressively better computers, and the new Windows versions that came with them - Windows 95, 98, and yes, even ME, which I remarkably liked just fine. Starting with Windows 95, DOS became an afterthought, and with my schools, too, being entirely Windows-only, my teenage years were all Windows, all the time. So, when I bought my first own, brand new computer - instead of the old 386 machines my parents took home from work - right around when Windows XP came out, I bought a totally legal copy of Windows XP from some dude at school that somehow came on a CD-R with a handwritten label but was really totally legit, you guys. I didn't like Windows XP at all, and immediately started looking for alternatives, trying out Mandrake Linux before discovering something called BeOS - and despite BeOS already being over by that point, I had found my operating system of choice. I tried to make it last as long as the BeOS community would let me, but that wasn't very long. The next step was a move to the Mac, something that was quite rare in The Netherlands at that time.
During that same time, Microsoft released Windows Server 2003, the actually good version of Windows XP, and a vibrant community of people, including myself, started using it as a desktop operating system instead. I continued using this mix of Mac OS X and Windows - even Vista - for a long time, while having various iterations of Linux installed on the side. I eventually lost interest in Mac OS X because Apple lost interest in it (I think around the Snow Leopard era?), and years later, six or seven years ago or so, I moved to Linux exclusively, fully ditching Windows even for gaming like four or so years ago when Valve's Proton started picking up steam. Nowadays all my machines run Fedora KDE, which I consider to be by far the best desktop operating system experience you can get today. Over the last few years or so, I've noticed something fun and interesting in how I set up my machines: you can find hints of my operating system history all over my preferred setup and settings. I picked up all kinds of usage patterns and expectations from all those different operating systems, and I'd like to enable as many of those as possible in my computing environment. In a way, my setup is a reflection of the operating systems I used in the past, an archaeological record of my computing history, an evolutionary tree of good traits that survived, and bad traits bred out. Taking a look at my bare desktop, you'll instantly pick up on the fact I used to use Mac OS X for a long time. The Mac OS X-like dock at the bottom of the screen has been my preferred way of opening and managing running applications since I first got an iBook G4 more than 20 years ago, and to this day I find it far superior to any alternatives. KDE lets me easily recreate a proper dock, without having to resort to any third-party dock applications. I never liked the magnification trick Mac OS X wowed audiences with when it was new, so I don't use it. 
The next dead giveaway I used to be a Mac OS X user a long time ago is the top bar, which shares quite a few elements with the Mac OS X menubar, while also containing elements not found in Mac OS X. I keep the KDE equivalent of a start menu there, a button that brings up my home folder in a KDE folder view, a show desktop button that's mostly there for aesthetic reasons, KDE's global menubar widget for that Mac OS X feel, a system tray, the clock, and then a close button that opens up a custom system menu with shutdown/reboot/etc. commands and some shortcuts to system tools. Another feature coming straight from my days using Mac OS X is KDE's equivalent of Exposé, called Overview, without which I wouldn't know how to find a window if my life depended on it. I bind it to the top-left hot corner for easy access with my mouse, while the bottom-right hot corner is set to show my desktop (and the reason why I technically don't really need that show desktop button I mentioned earlier). I fiddled with the hot corner trigger timings so that they fire virtually instantly. Waiting on my computer is so '90s. It's not really possible to see in screenshots, but my stint using BeOS as my main operating system back when that was a thing you could do also shines through, specifically in the way I manage windows. In BeOS, double-clicking a titlebar tab would minimise a window, and right-clicking the tab would send the window to the bottom of the Z-stack. I haven't maximised a non-video window in several decades, so I find double-clicking a titlebar to maximise a window utterly baffling, and a ridiculous Windows-ism I want nothing to do with. Once again, KDE lets me set this up exactly the way I want, and I genuinely feel lost when I can't manipulate my windows in this way.
OpenBSD 7.6, the release in which every single line of the original code from the first release has been edited or removed, has been released. There are a lot of changes, new features, bug fixes, and more in 7.6, but for desktop users, the biggest new feature is undoubtedly hardware-accelerated video decoding through VA-API. Or, as the changelog puts it: Imported libva 2.22.0, an implementation for VA-API (video acceleration API). VA-API provides access to graphics hardware acceleration capabilities for video processing. OpenBSD 7.6 release announcement This is a massive improvement for anyone using OpenBSD for desktop use, especially on power-constrained devices like laptops. Problematic video playback was one of the reasons I went back to Fedora KDE after running OpenBSD on my workstation, and it seems this would greatly improve that situation. I can't wait until I find some time to reinstall OpenBSD and see how much difference this will make for me personally. There's more, of course. OpenBSD 7.6 starts the bring-up for Snapdragon X Elite devices, and in general comes with a whole slew of low-level improvements for the ARM64 architecture. AMD64 systems don't have to feel left out, thanks to AVX-512 support, several power management improvements to make sleep function more optimally, and several other low-level improvements I don't fully understand. RISC-V, PowerPC, MIPS, and other architectures also saw small numbers of improvements. The changelog is vast, so be sure to dig through it to see if your pet bug has been addressed, or support for your hardware has been improved. OpenBSD users will know how to upgrade, and for new installations, head on over to the download page.
Late last year, Google's Play Store was ruled to be a monopoly in the US, and today the judge in that case has set out what Google must do to address this situation. Today, Judge James Donato issued his final ruling in Epic v. Google, ordering Google to effectively open up the Google Play app store to competition for three whole years. Google will have to distribute rival third-party app stores within Google Play, and it must give rival third-party app stores access to the full catalog of Google Play apps, unless developers opt out individually. Sean Hollister at The Verge On top of these rather big changes, Google also cannot mandate the use of Google's own billing solution, nor can it prohibit developers from informing users of other ways to download and/or pay for an application. Furthermore, Google can't make sweetheart deals with device makers to entice them to install the Play Store or to block them from installing other stores, and Google can't pay developers to only use the Play Store or not use other stores. It's a rather comprehensive set of remedies that will remain in force for three years. Many of these remedies are taken straight from the European Union's Digital Markets Act, but they will be far less effective since they're only applied to one company, and only for three years. On top of that, Google can appeal, and the company has already stated that it's going to ask for an immediate stay on these remedies, and if they get that stay, the remedies won't have to be implemented any time soon. This legal tussling is far from over, and does very little to protect consumer choice. A clear law that simply prohibits this kind of market abuse, like the DMA, is much fairer to everyone involved, and creates a consistent level playing field for everyone, instead of only affecting random companies based on the whims of something as unpredictable as juries.
In other words, I don't think much is going to change in the United States after this ruling, and we'll likely be hearing more back-and-forths in the courtroom for years to come, all while US consumers are being harmed. It's better than nothing in lieu of a working Congress actually doing, well, anything, but that's not saying much.
You have to wonder how meaningful this news is in 2024, but macOS 15.0 Sequoia running on either Apple Silicon or Intel processors is now UNIX 03-certified. The UNIX 03 Product Standard is the mark for systems conforming to Version 3 of the Single UNIX Specification. It is a significantly enhanced version of the UNIX 98 Product Standard. The mandatory enhancements include alignment with ISO/IEC 9899:1999 C Programming Language, IEEE Std 1003.1-2001 and ISO/IEC 9945:2002. This Product Standard includes the following mandatory Product Standards: Internationalized System Calls and Libraries Extended V3, Commands and Utilities V4, C Language V2, and Internationalized Terminal Interfaces. UNIX 03 page The questionable usefulness of this news stems from a variety of factors. The UNIX 03 specification hails from the before time of 2002, when UNIX-proper still had some footholds in the market and being a UNIX meant something to the industry. These days, Linux has pretty much taken over the traditional UNIX market, and UNIX certification seems to have all but lost its value. Only one operating system can boast that it conforms to the latest UNIX specification - AIX is both UNIX V7- and 03-certified - while macOS and HP-UX are only UNIX 03-certified. OpenWare, UnixWare, and z/OS only conform to even older standards. On top of all this, being UNIX-certified by The Open Group feels a lot like a pay-to-play scheme, making it unlikely that community efforts like, say, FreeBSD, Debian, or similarly popular server operating systems could ever achieve UNIX certification even if they wanted to. This makes the whole UNIX-certification world feel more like the dying vestiges of a job security program than something meaningful for an operating system to aspire to. In any event, you can now write a program that compiles and runs on all two UNIX 03-certified operating systems, as long as it only uses POSIX APIs.
A YouTube channel has resurrected a programming language that hadn't been seen since the 1980s - in a testament to both the enduring power of our technology, and of the communities that care about it. But best of all, Simpson uploaded the language to the Internet Archive, along with all his support materials, inviting his viewers to write their own programs (and saying he hoped his upstairs neighbor would've approved). And in our email interview, Simpson said since then it's already been downloaded over 1,000 times - "which is pretty amazing for something so old." David Cassel It's great that this lost programming language, MicroText for the Commodore 64, was rediscovered, but I'm a bit confused as to how "lost" this language really was. I mean, it was "discovered" in a properly listed eBay listing, which feels like cheating to me. When I think of stories of discoveries of long-lost software, games, or media, it usually involves things like finding it in a shed after years of searching, or someone at a company going through that box of old hard drives discovering the game they worked on 32 years ago. I don't know, something about this whole story feels off to me, and it's ringing some alarm bells I can't quite place. Regardless, it's cool to have MicroText readily available on the web now, so that people can rediscover it and create awesome new things with it. Perhaps there are old ideas to be relearned here.
In ancient Greek mythology, Kassandra, priestess of Apollo and daughter of King Priam and Queen Hecuba of Troy, was granted the gift of prophecy by Apollo, in return for "favours". When Kassandra then decided to, well, not grant any "favours", Apollo showcased that, as a good son of Zeus, he did not understand consent either, and cursed her by making sure nobody would believe her prophecies. There are some variations to the story from one author or source to the next, but the general gist remains the same. Anyway, I've been warning everyone about the fall of Mozilla and Firefox for years now, so here's another chapter in the slow decline and fall of Mozilla: they're now just flat-out stating they're an online advertising company. As Mark shared in his blog, Mozilla is going to be more active in digital advertising. Our hypothesis is that we need to simultaneously work on public policy, standards, products and infrastructure. Today, I want to take a moment to dive into the details of the "product" and "infrastructure" elements. I will share our emerging thoughts on how this will come to life across our existing products (like Firefox), and across the industry (through the work of our recent acquisition, Anonym, which is building an alternative infrastructure for the advertising industry). Laura Chambers Pretty much every one of my predictions regarding the slow downfall of Mozilla is coming true, and we're just waiting around now for the sword of Damocles to drop: Google ending its funding for Mozilla, which currently makes up about 80% of the former browser maker's revenue. Once this stream of free money dries up, Mozilla's decline will only accelerate even more, and this is probably why they are trying to get into the online advertising business in the first place. How else are you going to make money from a browser?
In the meantime, the operating system most reliant on Firefox existing as a privacy-respecting browser, desktop Linux, still seems to be taking no serious steps to prepare for this seeming inevitability. There's no proper Firefox fork, there's no Chromium variant with the kind of features desktop users expect (tab sharing, accounts, etc., which are not part of Chromium), nothing. There's going to be a point where shipping a further enshittified Firefox becomes impossible, or at the least highly contentious, for Linux distributions, and I don't see any viable alternative anywhere on the horizon. I'm sure things will turn out just fine.
Remember earlier this year, when Android Authority discovered Google was experimenting with letting you run full Chrome OS on your Android device? In case you were wondering if that particular piece of spaghetti was sticking to the wall, I'm sorry to disappoint you: it isn't. Despite creating the Ferrochrome launcher app, which would've made the whole thing a one-click affair, Google has just removed the whole concept from the Android code base altogether. Unfortunately, though, Google has decided to kill its Ferrochrome launcher app. This was revealed to us by a code change recently submitted to the AOSP Gerrit. The code change, which hasn't been merged yet, removes the entire Ferrochrome launcher app from AOSP. Google's reason for removing this app is that it doesn't plan to ship it or maintain its code. It seems that Google is shifting towards using the Linux-based Debian distro instead of Chrome OS as its testbed for AVF development. Mishaal Rahman at Android Authority I'm not really sure people were asking for something like this, and to Google's credit - for once - the company never even so much as hinted at releasing this to the general public. Still, the idea of carrying just your phone with you as your primary computer, and plugging into a display and input devices as the need arises, remains something a lot of people are fascinated with, and putting Chrome OS on your Android phone would've been one way to achieve this goal. Despite decades of attempts, it seems not even the smartest people in Silicon Valley can crack this nut. Perhaps they should ask Gemini to solve it for them? It doesn't involve pizzas, glue, or rocks, so who knows - it might surprise them!
For nearly 15 years, FreeBSD has been at the core of my personal infrastructure, and my passion for it has only grown over time. As a die-hard fan, I've stuck with BSD-based systems because they continue to deliver exactly what I need - storage, networking, and security - without missing a beat. The features I initially fell in love with, like ZFS, jails, and pf, are still rock-solid and irreplaceable. There's no need to overhaul them, and in many ways, that reliability is what keeps me hooked. My scripts from 20 years ago still work, and that's a rare kind of stability that few platforms can boast. It's not just me, either - big names like Netflix, Microsoft, and NetApp, alongside companies like Tailscale and AMD, continue to support FreeBSD, further reinforcing my belief in its strength and longevity (you can find the donators and sponsors right here). Yet, while this familiarity is comforting, it's becoming clear that FreeBSD must evolve to keep pace with the modern landscape of computing. gyptazy It's good to read so many articles and comments from long-time FreeBSD users and contributors who seem to recognise that there's a real opportunity for FreeBSD to become more than 'just' a solid server operating system. This aligns neatly with FreeBSD itself recognising this, too, and investing in improving the operating system's support for what are now considered basic laptop features like touchpad gestures and advanced sleep states, among other things. I've long held the belief that the BSDs are far closer to attracting a wider, more general computing-focused audience than even they themselves sometimes seem to think. There's a real, tangible benefit to the way BSDs are developed and structured - a base system developed by one team - compared to the Linux world, and there's enough disgruntlement among especially longtime Linux users about things like Wayland and systemd that there's a pool of potential users to attract that didn't exist only a few years ago.
If you're a little unsure about the future of Linux - give one of the BSDs a try. There's a real chance you'll love it.