We talked about Psion last week, and we're talking about Psion again this week. This time, Kian Ryan highlights a very important capability of Psion's devices, a capability that's entirely absent from today's mobile devices: a built-in IDE and dedicated programming language so you can write code and build applications, including ones with a graphical user interface, right on the device. All Psion devices could run OPL, either preinstalled on the device or via a DATAPAK memory card. It's a BASIC-esque programming language, and while you could develop OPL programs on your PC in DOS, Psion devices also shipped with an IDE preinstalled so you could get just as much done on the device itself. Back then, this wasn't particularly unique, but these days, mobile devices have become so locked-down and dumb that developing applications on-device is basically a non-starter. Which can't be said about my current mobile. My mobile is a great device to consume content on, but it has no built-in tools to extend its functionality. If I want to build an application for it, I have to use another computer to download a build environment, build the application, sign it, and then transfer the packaged app to my phone. On the Psion, all the tools are right there, on my home screen. It does feel like we're missing an opportunity here. Kian Ryan They're entirely right, of course. Our current mobile devices are faster and technically more capable than ever, but extending the functionality of your smartphone using the smartphone itself by writing and compiling code on it is far more cumbersome than it was in the past. Even my Psion Organiser II LZ64, from 1986, has OPL on it, and if I took the time to relearn the basic BASIC I once knew, I could probably still program something useful on it today, almost 40 years later, without being gatekept by anyone, and without needing any other device. That's something quite magical that we've lost, and that's sad.
I've been working on a bunch of small projects involving microcontrollers. Currently a lot of them are based around the Raspberry Pi Pico boards because I like the development experience of those a lot. They have a decent SDK and cheap hardware to get started with, and the debugger works with gdb/openocd, so it just integrates with all IDEs that support that. One of my current projects is making a fancy hardware controller for a bunch of video equipment I use. The main things that will be controlled are two PTZ cameras (those are cameras that have motors to move them), one stationary camera, and the video switching equipment that they're all hooked up to. Martijn Braam There's more to building something like this than connecting up hardware components - there's also software that needs to be taken care of. In this case, the author is weighing several real-time operating systems for use in the project, namely FreeRTOS, NuttX, and Zephyr. If you're working on a similar project, this article may help in choosing the RTOS that's right for you.
David Rosenthal, one of the primary contributors to the X Window System, has published an awesome blog post about the recent 40th anniversary of X, full of details about the early days of X development, as well as the limitations they had to deal with, the choices they had to make, and the constraints of the environment in which they worked. Once at Sun I realized that it was more important for the company that the Unix world standardized on a single window system than that the standard be Sun's NeWS system. At C-MU I had already looked into X as an alternative to the Andrew window system, so I knew it was the obvious alternative to NeWS. Although most of my time was spent developing NeWS, I rapidly ported X version 10 to the Sun/1, likely the second port to non-DEC hardware. It worked, but I had to kludge several areas that depended on DEC-specific hardware. The worst was the completely DEC-specific keyboard support. Because it was clear that a major redesign of X was needed to make it portable and in particular to make it work well on Sun hardware, Gosling and I worked with the teams at DEC SRC and WRL on the design of X version 11. Gosling provided significant input on the imaging model, and I designed the keyboard support. As the implementation evolved I maintained the Sun port and did a lot of testing and bug fixing. All of which led to my trip to Boston to pull all-nighters at MIT finalizing the release. David Rosenthal They were clearly right. During those days, the UNIX world was using a variety of windowing systems, all tied to various companies and platforms. Standardising virtually the entire UNIX world on X aided in keeping UNIX compatible-ish even in the then-new graphical era, and X's enduring existence to this very day is evidence of the fact that they made a lot of right choices early on. Rosenthal also explains why one of the main alternatives to X, Sun's PostScript-based NeWS, which was also co-developed by Rosenthal, didn't win out over X.
It had several things working against its adoption and popularisation, such as Sun requiring a license fee for the source code, its heftier system requirements, and the fact it was more difficult to program for. After trying to create what Rosenthal describes as a "ghastly kludge" by combining NeWS and X into X11/NeWS, Sun eventually killed it altogether. Of course, this wouldn't be a retrospective of X without mentioning Wayland. We and Jobs were wrong about the imaging model, for at least two reasons. First, early on pixels were in short supply and applications needed to make the best use of the few they were assigned. They didn't want to delegate control to the PostScript interpreter. Second, later on came GPUs with 3D imaging models. The idea of a one-size-fits-all model became obsolete. The reason that Wayland should replace X11 is that it is agnostic to the application's choice of imaging model. David Rosenthal This is about as close to a blessing from the original X Window System developers as you're ever going to get, but Rosenthal does correctly note that XWayland is a thing, and since not every application is going to be rewritten to support Wayland, X will most likely be around for a long time to come. In fact, he looks towards the future, and predicts that we'll definitely be celebrating 50 years of X, and that yes, people will still be using it by then.
It seems the dislike for machine learning runs deep. In a blog post, Cloudflare has announced that blocking machine learning scrapers is so popular, they decided to just add a feature to the Cloudflare dashboard that will block all machine learning scrapers with a single click. We hear clearly that customers don't want AI bots visiting their websites, and especially those that do so dishonestly. To help, we've added a brand new one-click to block all AI bots. It's available for all customers, including those on the free tier. To enable it, simply navigate to the Security > Bots section of the Cloudflare dashboard, and click the toggle labeled AI Scrapers and Crawlers. Cloudflare blog According to Cloudflare, 85% of their customers block machine learning scrapers from taking content from their websites, and that number definitely does not surprise me. People clearly understand that multibillion dollar megacorporations freely scraping every piece of content on the web for their own further obscene enrichment while giving nothing back - in fact, while charging us for it - is inherently wrong, and as such, they choose to block them from doing so. Of course, it makes sense for Cloudflare to try and combat junk traffic, so this is one of those cases where the corporate interests of Cloudflare actually line up with the personal interests of its customers, so making blocking machine learning scrapers as easy as possible benefits both parties. I think OSNews, too, makes use of Cloudflare, so I'm definitely going to ask OSNews' owner to hit that button. Cloudflare further details that a lot of people are blocking crawlers run by companies like Amazon, Google, and OpenAI, but completely miss far more active crawlers like those run by the Chinese company ByteDance, probably because those companies don't dominate the "AI" news cycle.
Then there's the massive number of machine learning crawlers that just straight-up lie about their intentions, trying to hide the fact they're machine learning bots. We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection. We will continue to keep watch and add more bot blocks to our AI Scrapers and Crawlers rule and evolve our machine learning models to help keep the Internet a place where content creators can thrive and keep full control over which models their content is used to train or run inference on. Cloudflare blog I find this particularly funny because what's happening here is machine learning models being used to block... machine learning models. Give it a few more years down the trajectory we're currently on, and the internet will just be bots reading content posted by other bots.
The article's from 2021, but I think it's still worth discussing. A hard reality of C and C++ software development on Windows is that there has never been a good, native C or C++ standard library implementation for the platform. A standard library should abstract over the underlying host facilities in order to ease portable software development. On Windows, C and C++ is so poorly hooked up to operating system interfaces that most portable or mostly-portable software - programs which work perfectly elsewhere - are subtly broken on Windows, particularly outside of the English-speaking world. The reasons are almost certainly more political, originally motivated by vendor lock-in, than technical, which adds insult to injury. This article is about what's wrong, how it's wrong, and some easy techniques to deal with it in portable software. Chris Wellons As someone who doesn't know how to code or program, articles like these are always difficult to properly parse. I understand the primary problem the article covers, but what I'm curious about is how much of this problem is personal - a skill issue - and how much of it is a view widely held by Windows developers and programmers. I know there's quite a few of you in our audience, so I'd love to hear from you how you feel about this. The author also wrote his own fix, something called libwinsane, which I'm curious about as well - is this the only solution, or are there more options out there?
Another month, another report from the Redox team. The Rust-based operating system saw another active month, including getting a whole bunch of new funding deals for specific features, such as adding UNIX-style signals to Redox, as well as the further development of Termion, a Redox project that is "a pure Rust, bindless library for low-level handling, manipulating and reading information about terminals". Furthermore, the default user interface Orbital got a small makeover with new colours and a new default wallpaper, and there's the usual documentation and website improvements. More substantial changes include the doubling of RedoxFS' performance by improving the speed of block reads and writes, and changes to how the xHCI driver works to drastically reduce CPU usage. The PCI/PCIe and x86 VirtIO drivers have also been improved, and you can now do userspace debugging using the GNU Debugger from outside the VM. There's a lot more, so head on over to read the whole thing.
The impact printer was a mainstay of the early desktop computing era. Also called "dot matrix printers," these printers could print low-resolution yet very readable text on a page, and do so quickly and at a low price point. But these printers are a relic of the past; in 2024, you might find them printing invoices or shipping labels, although more frequently these use cases have been replaced by other types of printers such as thermal printers and laser printers. The heart of the impact printer is the print head. The print head contained a column of pins (9 pins was common) that moved across the page. Software in the printer controlled when to strike these pins through an inked ribbon to place a series of "dots" on a page. By carefully timing the pin strikes with the movement of the print head, the printer could control where each dot was placed. A column of dots might represent the vertical stroke of the letter H, a series of single dots created the horizontal bar, and another column would create the final vertical stroke. Jim Hall at Technically We Write Our first printer was a dot matrix model, from I think a brand called Star or something similar. Back then, in 1991 or so, a lot of employers in The Netherlands offered programs wherein employees could buy computers through their work, offered at a certain discount. My parents jumped on the opportunity when my mom's employer offered such a program, and through it, we bought a brand new 286 machine running MS-DOS and Windows 3.0, and it included said dot matrix printer. There's something about the sound and workings of a dot matrix printer that just can't be bested by modern ink, laser, or LED printers. The mechanical punching, at such a fast rate it sounded like a tiny Gatling gun, was mesmerising, especially when paired with continuous form paper. Carefully ripping off the perforated edges of the paper after printing was just a nice bonus that entertained me quite a bit as a child.
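The pin-column mechanism Hall describes translates neatly into a toy sketch (my own illustration, nothing to do with real printer firmware): each string below is one column of pin strikes for the letter H, and transposing the columns gives the dots as they would land on the paper.

```python
# Each string is one column the 9-pin print head produces as it moves across
# the page, top pin first; "#" marks a pin striking through the ribbon.
H_COLUMNS = [
    "#########",  # left vertical stroke: all nine pins fire
    "....#....",  # horizontal bar: only the middle pin fires
    "....#....",
    "....#....",
    "#########",  # right vertical stroke
]

def print_glyph(columns):
    """Transpose pin columns into rows, i.e. the dots as seen on paper."""
    return "\n".join(
        "".join(col[pin] for col in columns) for pin in range(len(columns[0]))
    )

print(print_glyph(H_COLUMNS))
```

Nine rows of five columns is all it takes to get a perfectly readable H, which is why a 9-pin head could produce "low-resolution yet very readable text" at speed.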
I was surprised to learn that dot matrix printers are still being manufactured and sold today, and even come in colour. They're quite a bit more expensive than other printer types these days, but I have a feeling they're aimed at enterprises and certain niches, which probably means they're going to be of considerably higher quality than all the other junk printers that clog the market. With a bit more research, it might actually be possible to find a brand new colour dot matrix printer that is a better choice than some of the modern alternatives. The fact that I'm now contemplating buying a brand new dot matrix printer in 2024, even though I rarely print, is a mildly worrying development.
Microsoft Defender is the endpoint security solution preinstalled on every Windows machine since Windows 7. It's a fairly complex piece of software, addressing both EDR and EPP use cases. As such, Microsoft markets two different products. Microsoft Defender for Endpoint is a cloud-based endpoint security solution that combines sensor capabilities with the advantages of cloud processing. Microsoft Defender Antivirus (MDA), on the other hand, is a modern EPP enabled by default on any fresh Windows installation. MDA is the focus of this analysis. Retooling If you've ever wanted to know how Microsoft Defender works, this article contains a wealth of detailed information.
R9 is a work-in-progress effort to build a Plan 9 kernel in Rust. It was started a couple of years back by the maintainers of the Harvey OS distribution of Plan 9, who threw in the towel after "loss of traction". R9 is a reimplementation of the plan9 kernel in Rust. It is not only inspired by but in many ways derived from the original Plan 9 source code. R9OS GitHub page For now, the project is obviously mostly focused on running in virtual machines, specifically Qemu, in which it can be run using a variety of architectures: aarch64, x86-64 (with or without KVM), and RISC-V.
Once upon a time, the IBM PC was released. In the IBM PC BIOS, you could enter characters that weren't present on the keyboard by holding the Alt key and typing the decimal value on the numeric keypad. For example, you could enter ñ by holding Alt and typing Numpad1 Numpad6 Numpad4, then releasing the Alt key. Raymond Chen Another Raymond Chen story, and this one involves hearts, snowmen, different editing controls, codepages, and more. In other words, just another Tuesday for Chen.
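You can still reproduce what those Alt codes decoded to, because the IBM PC's original hardware code page, CP437, ships as a standard codec in most languages. This little sketch (mine, not Chen's) maps an Alt code to its character using Python's built-in cp437 codec:

```python
def alt_code(n, codepage="cp437"):
    """Interpret an Alt+numpad code the way the IBM PC BIOS did: as a raw
    byte in the machine's hardware code page (CP437 on the original PC)."""
    return bytes([n]).decode(codepage)

print(alt_code(164))  # ñ - the example from the quote (Alt+164)
print(alt_code(225))  # ß
```

Swap in `cp850` or another legacy code page for `codepage` to see how the same Alt code produced different characters on differently localised machines, which is exactly the kind of fun Chen's article gets into.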
The European Union's Digital Markets Act is the gift that keeps on giving. This time, it's Facebook's turn to be slapped on the fingers with a ruler - a metric ruler, of course - because of its malicious compliance with the DMA. Today, the Commission has informed Meta of its preliminary findings that its "pay or consent" advertising model fails to comply with the Digital Markets Act (DMA). In the Commission's preliminary view, this binary choice forces users to consent to the combination of their personal data and fails to provide them a less personalised but equivalent version of Meta's social networks. European Commission press release The European Commission's preliminary conclusion takes issue with Facebook's binary choice between "pay for zero ads" and "full-on tracking and all the ads". According to the DMA, Facebook must offer users the option of an equivalent experience with less tracking, and the company doesn't offer such an option to users. In addition, Facebook's proposal does not allow users to exercise their right to "freely consent to the combination of their personal data". It's important to note that this is not some sort of definitive ruling or finding; it's preliminary, and Facebook now has the opportunity to state its case and formulate its arguments. If the eventual ruling is that Facebook does not comply, the company is liable for fines up to 10% of its yearly worldwide turnover, which can rise up to 20% for repeated infractions.
Well, it seems we've got a better understanding now of why Andreas Kling decided to leave the SerenityOS project to focus entirely on Ladybird, the web browser that grew out of his hobby operating system. They've got some big plans for where to take Ladybird, and I'm saying "they" because it's being backed by a big name. They've set up a fancy new website for the project, which makes it all look a bit more presentable to a general audience. The project is aiming for a first alpha release for Linux and macOS in 2026, and Windows or mobile versions are not something they're currently interested in - they want to get the desktop version presentable first. It also seems we're not in Kansas anymore - they've got four full-time paid engineers working on Ladybird at the moment, with three more starting soon. Sure, they've got some sponsors, but that seems like a lot of people, so where's the cash coming from? Well, the project also announced its first two board members, and it won't surprise you that Andreas Kling himself is one of them. The other name is none other than Chris Wanstrath, and if that name doesn't ring a bell - he's the co-founder and former CEO of GitHub, which he sold to Microsoft in 2018. He also created the Atom text editor and led several other projects. Oh, he also happens to be a billionaire who apparently has donated 1 million dollars to Ladybird. In other words, the Ladybird project is a lot more of a serious, grown-up effort than it may have seemed when Kling first announced his departure from SerenityOS. This means the project has some serious money behind it, an influential name with probably some great networking skills, and, of course, Kling's unique experience working on browser engines for Nokia and Apple in the past. All in all, this is great news.
In this writeup we provide a summary of technical information crucial to evaluate the exploitability and impact of memory safety problems in IBM i programs. As administrators and developers of IBM i aren't supposed to work "below MI level", this kind of information is not officially documented by the vendor. The information presented here is thus based on already published reverse engineering results, and our own findings uncovered using IBM's System Service Tools (SST) and the POWER-AS specific processor extensions we developed for the Ghidra reverse engineering framework. Tests were performed on a physical POWER 9 system running IBM i V7R4. Programs were compiled by the default settings of the system in the ILE program model. C language source code will be provided separately. Silent Signal Some light reading.
On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus. Booting Linux off of a Google Drive root. Ersei That's not... You shouldn't... Why would...
The web browser Vivaldi is taking a firm stance against adding machine learning tools to its browser. So, as we have seen, LLMs are essentially confident-sounding lying machines with a penchant to occasionally disclose private data or plagiarise existing work. While they do this, they also use vast amounts of energy and are happy using all the GPUs you can throw at them which is a problem we've seen before in the field of cryptocurrencies. As such, it does not feel right to bundle any such solution into Vivaldi. There is enough misinformation going around to risk adding more to the pile. We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you until more rigorous ways to do those things are available. Julien Picalausa on the Vivaldi blog I'm not a particular fan of Vivaldi personally - it doesn't integrate with KDE well visually and its old-fashioned-Opera approach of throwing everything but the kitchen sink at itself is just too cluttered for me - but props to the Vivaldi team for taking such a clear and firm stance. There's a ton of pressure from big money interests to add machine learning to everything from your operating system to your nail scissors, and popular tech publishers are certainly going to publish articles decrying Vivaldi's choice, so they're not doing this without any risk. With even Firefox adding machine learning tools to the browser, there are very few browsers left - if any, other than Vivaldi, it seems - that will be free of these tools. I can only hope we're going to see a popular Firefox fork without this nonsense take off, and I'm definitely keeping my eye on the various options that already exist today.
Straight from the arcade world, the Neo Geo was, without a doubt, the most expensive hardware of the 4th generation. This begs the question: how capable was it and how did it compare with the rest? In this entry, we'll take a look at the result of one company (SNK) setting budget restrictions aside and shipping a product meant to please both arcade owners and rich households. Rodrigo Copetti Rich households, indeed. Back in the '90s, when Nintendo was the only game in town - few people in my area cared one bit about Sega - Neo Geo was a name we only knew of vaguely. It was supposed to be a massively powerful console that was so expensive nobody bought one, and some of us even doubted it was real in the first place. Ah, the pre-internet playground days were wild.
The openSUSE project recently announced the second release candidate (RC2) of its Aeon Desktop, formerly known as MicroOS Desktop GNOME. Aside from the new coat of naming paint, Aeon breaks ground in a few other ways by dabbling with technologies not found in other openSUSE releases. The goal for Aeon is to provide automated system updates using snapshots that can be applied atomically, removing the burden of system maintenance for "lazy developers" who want to focus on their work rather than desktop administration. System-tinkerers need not apply. The idea behind Aeon, as with other immutable (or image-based) Linux distributions, is to provide the core of the distribution as a read-only image or filesystem that is updated atomically and can be rolled back if needed. Google's ChromeOS was the first popular Linux-based desktop operating system to follow this model. Since the release of ChromeOS, a number of interesting immutable implementations have cropped up, such as Fedora Silverblue, Project Bluefin (covered here in December 2023), openSUSE's MicroOS (covered here in March 2023), and Ubuntu Core. Joe Brockmeier at LWN With the amount of attention immutable Linux desktops are getting, and how much work and experimentation is going into them, I'm getting the feeling that sooner or later all of the major, popular desktop Linux distributions will be going this route. Depending on implementation details, I actually like the concept of a defined base system that's just an image that can be replaced easily using btrfs snapshots or something like that, while all the user's files and customisations are kept elsewhere. It makes intuitive sense. Where the current crop of immutable Linux desktops falls flat for me is their reliance on (usually) Flatpak. You know how there's people who hate systemd and/or Wayland just a little too much, to the point it gets a little weird and worrying? That's me whenever I have to deal with Flatpaks.
Every experience I have with Flatpaks is riddled with trouble for me. Even though I'm a KDE user, I'm currently testing out the latest GNOME release on my workstation (the one that I used to conclude Windows is simply not ready for the desktop), using Fedora of course, and on GNOME I use the Mastodon application Tuba. While I mostly write in English, I do occasionally write in Dutch, too, and would love for the spell check feature to work in my native tongue as well, instead of just in English. However, despite having all possible Dutch dictionaries installed - hunspell, aspell - and despite those dictionaries being picked up everywhere else in GNOME, Tuba only showed me a long list of variants of English. After digging around to find out why this was happening, it took me far longer than I care to publicly admit to realise that since the latest version of Tuba is only really available as a Flatpak on Fedora, my problem probably had something to do with that - and it turns out I was right: Flatpak applications do not use the system-wide installed spellcheck dictionaries like normal applications do. This eventually led me to this article by Daniel Aleksandersen, where he details what you need to do in order to add spellcheck dictionaries to Flatpak applications: you need to set Flatpak's list of languages with a terminal command. The list of languages uses two-letter codes only, and the first language listed will serve as the display language for Flatpak applications, while the rest will be fallback languages - which happens to include downloading and installing the Flatpak-specific copies of the spellcheck libraries. Sadly, this method is not particularly granular. Since it only accepts the two-letter codes, you can't, say, only install "nl-nl"; you'll be getting "nl-be" as well. In the case of a widely spoken language like English, this means a massive list of 18 different varieties of English. The resulting menus are... Not elegant.
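For completeness, the fix from Aleksandersen's article boils down to something like the following (reproduced from memory of that article, so treat the exact key name and the language list as an example rather than gospel):

```shell
# Set the languages Flatpak should install locale data (and spellcheck
# dictionaries) for; two-letter codes, semicolon-separated, and the first
# one becomes the display language for Flatpak applications.
flatpak config --set languages "en;nl"

# Re-download runtimes so the matching locale extensions get fetched.
flatpak update
```

Note how nothing here mentions dictionaries at all; you're steering a global per-installation language setting just to get spellcheck working in one app.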
This is just an example, but using Flatpak, you'll run into all kinds of issues like this, which then have to be solved by hacks or obscure terminal commands - not exactly the user-friendly image Flatpak is trying to convey to the world. This particular issue might not matter to the probably overwhelmingly English-speaking majority of Flatpak developers, but for anyone who has to deal with multiple languages on a daily basis - which is a massive number of people, probably well over 50% of computer users - having to mess around with obscure terminal commands hidden in blog posts just to be able to use the languages they use every day is terrible design on a multitude of levels, and will outright make Flatpak applications unusable for large numbers of people. Whenever I run into these Flatpak problems, it becomes clear to me that Flatpak is designed not by users, for users - but by developers, for developers. I can totally understand and see why Flatpak is appealing to developers, but as a user, it brings me nothing but grief, issues, and weird bugs that all seem to stem from choices made to make developers' lives easier, instead of users'. If immutable Linux distributions are really hellbent on using Flatpak as the means of application installation - and it seems like they are - it will mean a massive regression in functionality, usability, and discoverability for users, and as long as Flatpak remains as broken and badly designed as it is, I really see no reason to recommend an immutable Linux desktop to anyone but the really curious among us.
When someone tells you who they are, believe them. Microsoft's AI chief Mustafa Suleyman: With respect to content that is already on the open web, the social contract of that content since the '90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding. Mustafa Suleyman This is absolute bullshit from the first word to the very last. None of this is true - not even in the slightest. Content on the web is not free for the taking by anyone, especially not to be chewed up and regurgitated verbatim by spicy autocomplete tools. There is no "social contract" to that effect. In fact, when I go to any of Microsoft's websites, documents, videos, or any other content they publish online, on the open web, and scroll to the very bottom of the page, it's all got the little copyright symbol or similar messaging. Once again, this underlines how entitled Silicon Valley techbros really are. If we violate even a gram of Microsoft's copyrights, we'd have their lawyers on our ass in weeks - but when Microsoft itself needs to violate copyright and licensing on an automated, industrial scale, for massive profits, everything is suddenly peace, love, and fair use. Men in Silicon Valley just do not understand consent. At all. And they show this time and time again. Meanwhile, the Internet Archive has to deal with crap like this: The lawsuit is about the longstanding and widespread library practice of controlled digital lending, which is how we lend the books we own to our patrons. As a result of the publishers' lawsuit, more than 500,000 books have been removed from our lending library. Chris Freeland at the Internet Archive Blogs Controlled lending without a profit motive is deemed illegal, but violating copyright and licensing on an automated, industrial scale is fair use. Make it make sense. Make it make sense.
The Apple ][ is one of the most iconic vintage computers of all time. But since Wozniak's monster lasted all the way until 1993 (1995 if you count the IIe card, which I won't count until I get one), it can be easy to forget that in 1977, it was a video extravaganza. The competitors - even much bigger and established companies like Commodore and Tandy - generally only had text modes, let alone pixel-addressable graphics, and they certainly didn't have sixteen colors. (Gray and grey are different colors, right?) Nicole Branagan If there's ever anything you wanted to know about how graphics work on the Apple II, this is the place to go. It's an incredibly detailed and illustrated explanation of how the machine renders and displays graphics, and an excellent piece of writing to boot. I'm a little jealous.
The Vector Packet Processor (VPP) is a framework for moving packets around at high rates. Its core concept is handling packets in groups known as "vectors," which allows for the native use of vector processor instructions for packet classification and processing in different CPU architectures - currently amd64 and arm64. VPP can process packets at incredibly high rates and competes with many dedicated forwarding appliances. This is achieved using userspace networking that bypasses the host's normal network stack. This article describes the porting of VPP to FreeBSD and working with the upstream VPP project to include FreeBSD as a supported target. Tom Jones It's not unusual for me to link to something a little over my head, and this is another example of something I know y'all will like, but I don't really understand fully.
So I learned something new today: there are companies that provide security patches for Windows that aren't Microsoft. I never even considered this could be a thing, but it turns out that a paid service called 0patch seems to have been around for a long time, and the consensus seems to be that not only can it be trusted, it also sometimes provides patches sooner than Microsoft does. Today, 0patch announced it'll also be providing this service for Windows 10 after the end of support next year. With October 2025, 0patch will "security-adopt" Windows 10 v22H2, and provide critical security patches for it for at least 5 more years - even longer if there's demand on the market. We're the only provider of unofficial security patches for Windows ("virtual patches" are not really patches), and we have done this many times before: after security-adopting Windows 7 and Windows Server 2008 in January 2020, we took care of 6 versions of Windows 10 as their official support ended, security-adopted Windows 11 v21H2 to keep users who got stuck there secure, took care of Windows Server 2012 in October 2023 and adopted two popular Office versions - 2010 and 2013 - when they got abandoned by Microsoft. We're still providing security patches for all of these. Mitja Kolsek on the 0patch blog This service implements patching through what it calls "micropatches", which are very small sets of CPU instructions injected into running code in memory without modifying - in this case - Microsoft's own code. These micropatches are applied by briefly stopping the offending program, injecting the fix, and continuing the program - without having to close the program or reboot. Of course, they can be unapplied in the same, non-disruptive way. The 0patch service will provide patches for 0days that Microsoft hasn't fixed yet, patches for issues Microsoft won't fix, and sometimes patches for third party code.
As the headline clearly states, this service isn't free, but honestly, at roughly 25 dollars plus tax per computer per year, it's not exactly expensive, and definitely cheaper than the Extended Security Update program Microsoft itself is going to offer for Windows 10 after the end of support date next year. Diving a bit deeper into who is providing this service, it comes from a company called ACROS Security, a small company out of Slovenia. The company details its micropatches on its 0patch blog if you want more information on how each individual one works. I still don't know exactly what to make of this, and I definitely wouldn't rely on something like this for mission-critical Windows computers or servers, but for something like a home PC that can't be upgraded to Windows 11 but still works just fine, or perhaps some disposable virtual machines you're using, this might be a good stopgap solution until you can upgrade to a better operating system, like Linux or one of the BSDs. Are there any people in the OSNews audience who've used 0patch, or perhaps a service similar to it?
KWin had a very long standing bug report about bad performance of the Wayland session on older Intel integrated graphics. There have been many investigations into what's causing this, with a lot of more specific performance issues being found and fixed, but none of them managed to fully fix the issue... until now. Xaver Hugl An excellent deep dive into a very annoying problem KWin on Wayland was facing on older Intel hardware. It turns out the issue was related to display timings, and older Intel hardware simply not being powerful enough to render frames within the timing window. The solution consisted of various smaller fixes, and one bigger one: triple-buffering. The end result is a massive performance improvement for KWin on Wayland on older Intel hardware. This detailed post underlines just how difficult it is to simply render a bunch of windows and UI elements on time, without stutters or tearing, while taking into account the wide variety of hardware a project like KDE Plasma intends to run on. It's great to see them paying attention to older, less powerful systems too, instead of only focusing on the latest and greatest like Apple, and recently Microsoft as well, do.
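To see why triple-buffering helps a GPU that can't always render inside one refresh interval, consider a toy timing model - my own simplification, not KWin's actual scheduler. With two buffers the compositor must wait for the flip before it can start the next frame, so a 20 ms render on a 60 Hz display collapses to 30 fps; a third buffer lets rendering continue immediately, and roughly 50 fps survive.

```python
import math

VSYNC = 1 / 60  # 60 Hz display: a flip opportunity every ~16.7 ms

def next_vsync(t):
    """First vsync deadline at or after time t."""
    return math.ceil(t / VSYNC - 1e-9) * VSYNC

def effective_fps(render_time, triple_buffered, frames=300):
    """Average presented frame rate under a simplified flip model."""
    renderer_free = 0.0  # when the compositor may start the next frame
    last_flip = 0.0
    flips = []
    for _ in range(frames):
        if triple_buffered:
            start = renderer_free                  # a spare buffer is always free
        else:
            start = max(renderer_free, last_flip)  # must wait for the flip
        finish = start + render_time
        flip = next_vsync(finish)                  # shown at the next refresh
        renderer_free = finish
        last_flip = flip
        flips.append(flip)
    return (len(flips) - 1) / (flips[-1] - flips[0])

# A 20 ms frame misses every other vsync when double-buffered:
print(round(effective_fps(0.020, triple_buffered=False)))  # 30
print(round(effective_fps(0.020, triple_buffered=True)))   # 50
```

The model ignores latency, which is triple-buffering's usual cost and exactly the trade-off the linked post spends much of its time on; it only illustrates the throughput argument.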
Mozilla has announced it's adding easy access to tools like ChatGPT, Gemini, and so on to Firefox. Whether it's a local or a cloud-based model, if you want to use AI, we think you should have the freedom to use (or not use) the tools that best suit your needs. With that in mind, this week, we will launch an opt-in experiment offering access to preferred AI services in Nightly for improved productivity as you browse. Instead of juggling between tabs or apps for assistance, those who have opted in will have the option to access their preferred AI service from the Firefox sidebar to summarize information, simplify language, or test their knowledge, all without leaving their current web page. Our initial offering will include ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral, but we will continue adding AI services that meet our standards for quality and user experience. Ian Carmichael My biggest worry is not so much Mozilla adding these tools to Firefox - other browsers are doing it, and people clearly want to use them, so it makes sense for Firefox, too, to integrate them into the browser. No, my biggest worry is that this is just the first step on the way to the next major revenue agreement - just as Google is paying Mozilla to be the default search engine in Firefox, what if OpenAI starts paying to be the default AI tool in Firefox? Once that happens, I'm afraid a lot of the verbiage around choice and the ability to easily disable it all is going to change. I'm still incredibly annoyed by the fact I have to dive into about:config just to properly remove Pocket, a service I do not use, do not want, and which annoys me by taking up space in my UI. I'm afraid that one or two years from now, AI integration will be just another complex set of strings I need to look for in about:config to truly disable it all.
It definitely feels like Firefox is only going to get worse from here on out, not better, and this AI stuff seems more like an invitation for a revenue agreement than something well thought-out and useful. We'll see where things go from here, but my worries about Firefox's future are only growing stronger with Mozilla's latest moves. As a Linux user, I'm worried.
As cool as the organizer was, it was extremely limited in pretty much every way. Psion had got many things right in the first go, as reviewers were quick to admit, and that made iterating on the design somewhat easy. The Organiser II CM released in 1986 was built on the Hitachi HD6303X (Motorola 6803) clocked at 920kHz with 8K RAM and 32K ROM. The screen was a much improved dot matrix LCD with two lines of sixteen characters. This version also shipped with a little piezo beeper built in, and an expansion slot on the top. The expansion slot could allow for a wired power adapter, a serial cable, a bar code reader, a telephone dialer, and even a USB port. Given the reputation of the first model for ruggedness and the coverage of the same quality in the second model, this particular model sold quite well to companies who needed handheld computers for inventory and other purposes. The Organiser II XP launched the same year, and this model had 32K RAM and a backlit screen while otherwise being the same machine. Given that both of these models had significantly more RAM than their predecessor, the programming capabilities were greatly enhanced with a new language, OPL, which was similar to BASIC. Bradford Morgan White The Psion Organiser II is the very root of all mobile computing today. This may seem like hyperbole - but trust me, it really is. I have an Organiser II LZ64 with a 32k datapak (memory card), and while it may look like a calculator, this little machine from 1986 already contains the very skeleton of the graphical user interface Palm would eventually popularise, and the iPhone and Android would take to extraordinary heights. Turn on an Organiser II, and you're greeted by a home screen with a grid of applications (no icons, though, of course - just labels) with a selector you moved around with the cursor keys.
Hit the EXE key, and the application would load up, ready to be used; hit the home button (the ON key if my memory serves) and it would take you back to the home screen. This basic paradigm, of a grid of applications as a home screen you always return to, survives to this day, and is used by billions of people on their Android and iOS devices, both smartphones and tablets. People with little to no knowledge of the history of mobile computing - or people spreading corporate propaganda - often seem to act as if the release of the iPhone was the big bang of mobile computing, and that it materialised out of thin air because Steve Jobs alone willed it into existence. The reality is, though, that there is a direct line from the early Psion devices, through to Palm OS, the iPhone, and later Android. There were various dead end branches along the way, too, like the Newton, like Symbian, like the original Windows PocketPC, and so on - but that direct line from early Psion to that fancy Pixel 8 Pro or whatever you have today is solidly visible to anyone without an agenda. I love my Organiser II. It's approaching 40 years old now, and it still works without a single hitch. There's barely a scratch on it, the display is bright, the pixels are clear, the characteristic sliding cover feels as solid today as it did when it rolled off the factory line. This is where mobile computing began.
In a few days, 29 June, FreeDOS will turn 30. This happens to make it one of the oldest continuously active open source projects in the world, originally created because Jim Hall had heard Microsoft was going to kill DOS when the upcoming Windows 95 was going to be released. After seeing the excitement around Linux, he decided an open source DOS would be a valuable time investment. I still used DOS, and I didn't want to stop using DOS. And I looked at what Linux had achieved: people from all over the world shared source code with each other to make this full operating system that worked just like Unix. And I thought "If they can do that with Linux, surely we can do the same thing with DOS." I asked around on a discussion board (called Usenet) if anyone had made an "open source" DOS, and people said "No, but that's a good idea ... and you should do it." So that's why I announced on June 29, 1994, that I was starting a new project to make an open source version of DOS that would work just like regular DOS. Jim Hall For an open source implementation of what was a dead end and now is a dead operating system, FreeDOS has been remarkably successful. Not only are there countless people using FreeDOS on retro hardware, it's also a popular operating system for DOS gaming and running old DOS applications in virtual machines. On top of that, many motherboard makers and OEMs use FreeDOS to load firmware update tools, and some of them even offered FreeDOS as the preinstalled operating system when buying new hardware. With the ever-increasing popularity of retrocomputing and gaming, FreeDOS clearly has a bright future ahead of it.
The European Commission has informed Microsoft of its preliminary view that Microsoft has breached EU antitrust rules by tying its communication and collaboration product Teams to its popular productivity applications included in its suites for businesses Office 365 and Microsoft 365. European Commission press release Chalk this one up in the unsurprising column, too. Teams has infested Office, and merely by being bundled it's become a major competitor to Slack, even though everyone who has to use it seems to absolutely despise Teams with a shared passion rivaling only Americans' disgust for US Congress. On a mildly related note, I'm working with a friend to set up a Matrix server specifically for OSNews users, so we can have a self-hosted, secure, and encrypted space to hang out, continue conversations beyond the shelf life of a news item, suggest interesting stories, point out spelling mistakes, and so on. It'll be invite-only at first, with preference given to Patreons, active commenters, and other people I trust. We intend to federate, so if everything goes according to plan, you can use your existing Matrix username and account. I'll keep y'all posted.
The transition to Wayland is nearing completion for most desktop Linux users. The most popular desktop Linux distribution in the world, Ubuntu, has made the call and is switching its NVIDIA users over to Wayland by default in the upcoming release of Ubuntu 24.10. The proprietary NVIDIA graphics driver has been the hold-out on Ubuntu in sticking to the GNOME X.Org session out-of-the-box rather than Wayland as has been the default for the past several releases when using other GPUs/drivers. But for Ubuntu 24.10, the plan is to cross that threshold for NVIDIA now that their official driver has much better Wayland support and has matured into great shape. Particularly with the upcoming NVIDIA R555 driver reaching stable very soon, the Wayland support is in great shape with features like explicit sync ready to use. Michael Larabel This is great news for the Linux desktop, as such a popular Linux distribution defaulting the users of the most popular graphics card brand to X.org made it a major holdout. None of this obviously means that Wayland is perfect or that all use cases are covered - accessibility is an important use case where tooling simply hasn't been optimised yet for Wayland, but work is underway - and for those of us who prefer X.org for a variety of reasons, there are still countless distributions offering it as a fallback or as the default option.
It seems the success of the Framework laptops, as well as the community's relentless focus on demanding repairable devices and the ensuing legislation, are starting to have an impact. It wasn't that long ago that Microsoft's Surface devices were effectively impossible to repair, but with the brand new Snapdragon X Elite and Pro devices, the company has made an impressive U-turn, according to iFixit. Both the new Surface Laptop and Surface Pro are exceptionally easy to repair, and take cues from Framework's hardware. Microsoft's journey from the unrepairable Surface Laptop to the highly repairable devices on our teardown table should drive home the importance of designing for repair. The ability to create a repairable Surface was always there but the impetus to design for repairable was missing. I'll take that as a sign that Right to Repair advocacy and legislation has begun to bear fruit. Shahram Mokhtari The new Surface devices contain several affordances to make opening them up and repairing them easier. They take cues from Framework in that inside screws and components are clearly labeled to indicate what type they are and which parts they're holding in place, and there's a QR code that leads to online repair guides, which were available right away, instead of having to wait months, or forever, for those to become accessible. The components are also not layered; in other words, you don't need to remove six components just to get to the SSD, or whatever - some laptops require you to take out the entire mainboard just to get access to the fans to clean them, which is bananas. Microsoft technically doesn't have to do any of this, so it's definitely praiseworthy that their hardware department is going the extra kilometre to make this happen. The fact that even the Surface Pro, a tablet, can be reasonably opened up and repaired is especially welcome, since tablets are notoriously difficult, if not impossible, to repair.
Microsoft has made OneDrive slightly more annoying for Windows 11 users. Quietly and without any announcement, the company changed Windows 11's initial setup so that it could turn on the automatic folder backup without asking for it. Now, those setting up a new Windows computer the way Microsoft wants them to (in other words, connected to the internet and signed into a Microsoft account) will get to their desktops with OneDrive already syncing stuff from folders like Desktop, Pictures, Documents, Music, and Videos. Depending on how much is stored there, you might end up with a desktop and other folders filled to the brim with shortcuts to various stuff right after finishing a clean Windows installation. Taras Buria at NeoWin Just further confirmation that Windows 11 is not ready for the desktop.
Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content. In addition, the Commission opened a new non-compliance procedure against Apple over concerns that its new contractual requirements for third-party app developers and app stores, including Apple's new "Core Technology Fee", fall short of ensuring effective compliance with Apple's obligations under the DMA. European Commission press release File this in the category for entirely expected news that is the opposite of surprising. Apple has barely even been maliciously compliant with the DMA, and the European Commission is entirely right in pursuing the company for its continued violation of the law. The DMA really isn't a very complicated law, and the fact the most powerful and wealthiest corporation in the world can't seem to adapt its products to the privacy and competition laws here in the EU is clearly just a bunch of grandstanding and whining. In fact, I find that the European Commission is remarkably lenient and cooperative in its dealings with the major technology giants in general, and Apple in particular. They've been in talks with Apple for a long time now in preparation for the DMA, the highest-ranking EU officials regularly talked with Apple and Tim Cook, and they've been given ample warnings, instructions, and additional time to make sure their products do not violate the law - as a European Union citizen, I can tell you no small to medium business or individual EU citizen gets this kind of leniency and silk gloves treatment. Everything Apple is reaping, it sowed all by itself. As I posted on Mastodon a few days ago: The EU enacted a new law a while ago that all bottle caps should remain attached to the bottle, to combat plastic trash.
All the bottle and packaging makers, from massive multinationals like Coca Cola and fucking Nestle to small local producers invested in the development of new caps, changing their production lines, and shipping the new caps. Today, a month before the law goes into effect, it's basically impossible to find a bottle without an attached cap. I don't know, I thought this story was weirdly relevant right now with Apple being a whiny bitch. Imagine being worse than Coca Cola and motherfucking Nestle. Thom Holwerda Apple is in this mess and facing insane fines as high as 10% of their worldwide turnover because spoiled, rich, privileged brats like Tim Cook are not used to anyone ever saying "no". Silicon Valley has shown, time and time again, from massive data collection for advertising purposes to scraping the entire web for machine learning, that they simply do not understand consent. Now that there's finally someone big, strong, and powerful enough to not take Silicon Valley's bullshit, they start throwing temper tantrums like toddlers. Apple's public attacks on the European Union - and their instructions to their PR attack dogs to step it up a notch - are not doing them any favours, either. The EU is, contrary to just about any other government body in the Western world, ridiculously popular among its citizens, and laws that curb the power of megacorps are even more popular. I honestly have no idea who's running their PR department, because they're doing a terrible job, at least here in the EU.
I can't believe this is considered something I need to write about, but it's still a very welcome new feature that surprisingly has taken this long to become available: iOS and iPadOS 18 now allow you to format external storage devices. Last year when I began testing iPadOS 17 betas, I noticed the addition of options for renaming and erasing external drives in the Files app. I watched these options over the course of the beta cycle for iPadOS 17 to see if any further changes would come. The one I watched most closely was the "Erase" option for external drives. This option uses the same glyph as the "Erase" option in Disk Utility on macOS. In Disk Utility on the Mac, in order to reformat an external drive, you first select the "Erase" option, and then additional options appear for selecting the new format you wish to reformat the drive with. When I saw the "Erase" option added in the Files app on iPadOS, I suspected that Apple might be moving towards adding these reformatting options into the Files app on iPadOS. And I'm excited to confirm that this is exactly what Apple has done in iPadOS 18! Kaleb Cadle It was soon confirmed this feature is available in iOS 18 as well. You can only format in APFS, ExFAT, and FAT, so it's not exactly a cornucopia of file systems to choose from, but it's better than nothing. This won't magically fix all the issues a lot of people have with especially iPadOS when it comes to feeling constrained when using their expensive, powerful tablets with detachable keyboards, but it takes away at least one tiny reason to keep a real computer around. Baby steps, I guess.
One of my biggest concerns regarding the state of the web isn't ads (easily blocked) or machine learning (the legal system isn't going to be kind to that), but the possible demise of Firefox. I've long been worried that with the seemingly never-ending downward marketshare spiral Firefox is in - it's at like 3% now on desktop, even less on mobile - Mozilla's pretty much sole source of income will eventually pull the plug, leaving the already struggling browser effectively for dead. I've continuously been warning that the first casualty of the downward spiral would be Firefox on platforms other than Windows and macOS. So, what do we make of Mozilla buying an online advertising analytics company? Mozilla has acquired Anonym, a trailblazer in privacy-preserving digital advertising. This strategic acquisition enables Mozilla to help raise the bar for the advertising industry by ensuring user privacy while delivering effective advertising solutions. Laura Chambers The way Mozilla explains buying an advertising network is that the company wants to be a trailblazer in privacy-conscious online advertising, since the current brand of online advertising, which relies on massive amounts of data collection, is unsustainable. Anonym instead employs a number of measures to ensure that privacy is guaranteed, from anonymous analytics to employing differential privacy in its algorithms, ensuring data can't be used to track individual users. I have no reason to doubt Mozilla's intentions here - at least for now - but intentions change, people in charge change, and circumstances change. Having an ad network integrated into the Mozilla organisation will surely lead to temptations to weaken Firefox's privacy features and ad-blocking abilities, and overall I find it an odd acquisition target for something like Mozilla, and antithetical to why most people use Firefox in the first place.
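Differential privacy, mentioned above, has a simple core: add calibrated random noise to an aggregate answer so the presence or absence of any single user barely changes it. Here's a minimal sketch of the classic Laplace mechanism - my own illustration of the general technique, with made-up data, and nothing to do with Anonym's actual implementation:

```python
import random

def dp_count(records, predicate, epsilon=0.5):
    """Noisy count: the true count plus Laplace(1/epsilon) noise.
    A counting query has sensitivity 1 (one user can change the count
    by at most 1), so noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical analytics query: how many users are 30 or older?
ages = [23, 31, 44, 29, 51, 38, 62, 27]
print(dp_count(ages, lambda a: a >= 30))  # noisy estimate of the true count, 5
```

Smaller epsilon means more noise and stronger privacy; the aggregate stays useful on average while any individual record's contribution is drowned out.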
What really doesn't help is who originally founded Anonym - two former Facebook executives, backed by a load of venture capital. Do with that little tidbit of information as you please.
Windows 3.0 Enhanced Mode introduced the ability to run MS-DOS programs in a virtual machine. This by itself was already quite an achievement, but it didn't stop there. It also let you put the MS-DOS session in a window, and run it on the screen along with your other Windows programs. This was crazy. Here's how it worked. Raymond Chen When Raymond Chen speaks, we all shut up, listen, and enjoy.
Andrew S. Tanenbaum, professor emeritus of Computer Science at VU Amsterdam, receives the ACM Software System Award for MINIX, which influenced the teaching of Operating Systems principles to multiple generations of students and contributed to the design of widely used operating systems, including Linux. Tanenbaum created MINIX 1.0 in 1987 to accompany his textbook, Operating Systems: Design and Implementation. MINIX was a small microkernel-based UNIX operating system for the IBM PC, which was popular at the time. It was roughly 12,000 lines of code, and in addition to the microkernel, included a memory manager, file system and core UNIX utility programs. It became free open-source software in 2000. VU Amsterdam website Definitely a deserved award for Tanenbaum, and it's a minuscule bit of pride that VU Amsterdam happens to be my Alma mater. He also wrote an article for OSNews way back in 2006, detailing MINIX 3, which is definitely a cool notch to have on our belt.
There's really a Linux distribution for everyone, it seems. EasyOS sounds like it's going to be some Debian derivative with a theme or something, but it's truly something different - in fact, it has such a unique philosophy and approach to everything I barely know where to even start. Everything in EasyOS runs in containers, in the distribution's own custom container format, even entire desktop environments, and containers are configured entirely graphically. EasyOS runs every application in RAM, making it insanely fast, and you can save the contents of RAM to disk whenever you want. You can also choose a special boot option where the entire session is only loaded in RAM, with disk access entirely disabled, for maximum security. Now things are going to get weird. In EasyOS, you always run as root, which may seem like a stupid thing to do, and I'm sure some people will find this offputting. The idea, however, is that you run every application as its own user (e.g. Firefox runs as the "firefox" user), entirely isolated from every other user, or in containers with further constraints applied. I honestly kind of like this approach. If these first few details tickle your fancy, I really urge you to read the rest of their detailed explanation of what, exactly, EasyOS is going for. It's an opinionated distribution, for sure, but it's opinionated in a way where they're clearly putting a lot of thought into the decisions they make. I'm definitely feeling the pull to give it a try and see if it's something for me.
Apple has announced it's not shipping three of its tentpole new features, announced during WWDC, in the European Union: Apple Intelligence, iPhone Mirroring, and SharePlay Screen Sharing. Ever since the introduction of especially Apple Intelligence, the company has been in hot water over the sourcing of its training data - Apple admitted it's been scraping everyone's data for years and now used it to train its AI features. This will obviously have included vast amounts of data from European websites and citizens, and with the strict EU privacy laws, there's a very real chance that such scraping is simply not legal. Apparently, it's simpler to just not ship such features in the EU than to design your products with privacy in mind. As Steven Troughton-Smith quips: How many EU-based sites did Apple scrape to build the feature it now says it can't ship in the EU because of legal uncertainty? Steven Troughton-Smith Other massive corporations like Google and Facebook seem to have little issue shipping AI features in the EU, and have been doing so for quite a while now. And mind you, as Tim Cook has been very keen to reiterate in every single interview for the past two years or so, Apple has been shipping AI features similar to what they announced at WWDC for years as well, but it's only now that the European Union is actually imposing regulations on them - instead of letting corporatism run wild - that it can no longer ship such features in the EU? Apple is throwing its users under the bus because Tim Cook is big mad that someone told him no. As I keep reiterating, consent is something Silicon Valley simply does not understand.
It should be no secret to anyone reading OSNews that I'm not exactly a fan of Windows. While I grew up using MS-DOS, Windows 3.x, and Windows 9x, the move to Windows XP was a sour one for me, and ever since I've vastly preferred first BeOS, and then Linux. When, thanks to the tireless efforts of the Wine community and Valve, gaming on Linux became a boring, it-just-works affair, I said goodbye to my final gaming-only Windows installation about four or so years ago. However, I also strongly believe that in order to be able to fairly criticise or dislike something, you should at least have experience with it. As such, I decided it was time for what I expected was going to be some serious technology BDSM, and I installed Windows 11 on my workstation, forcing myself to use it for a few weeks to see if Microsoft's latest operating system truly was as bad as I make it out to be in my head.

Installing Windows 11

Technically speaking, my workstation is not supported by Windows 11. Despite packing two Intel Xeon E5-2640 v4 CPUs for a total of 20 cores and 40 threads, 32 GB of ECC RAM, an AMD Radeon Pro W5700, and the usual stuff like an M.2 SSD, this machine apparently did not meet the minimum specifications for Windows 11 since it has no TPM 2.0 security chip, and the processors were deemed too old. Luckily, these limitations are entirely artificial and meaningless, and using Ventoy, which by default disables these silly restrictions, I was able to install Windows 11 just fine. During installation, you run into the first problem if you're coming from a different operating system - even after all these years, Windows still does not give a single hootin' toot about any existing operating systems or bootloaders on your machine.
This wasn't an issue for me since I was going to allow Windows to take over the entire machine, but for those of us used to having control over what happens when we install our operating systems, be advised that your other operating systems will most likely be rendered unbootable. The tools you have access to during installation for things like disk partitioning are also incredibly limited, and there's nothing like the live environments you're used to from the Linux world - all you get is an installer. In addition, since Windows only really supports FAT and NTFS file systems, your existing ext4, btrfs, UFS, or ZFS partitions used by your Linux or BSD installs will not work at all in Windows. Again - be advised that Windows is a very limited operating system compared to Linux or BSD. Once the actual installation part is done, you're treated to a lengthy - and I truly mean lengthy - out of box experience. This is where you first get a glimpse of just how much data Microsoft wants to collect from its Windows users, and it stands in stark contrast to what I'm used to as a Linux user. On my Linux distribution of choice, Fedora KDE, there's really only KDE's opt-in, voluntary User Feedback option, which only collects basic system information in an entirely anonymous way. Windows, meanwhile, seems to want to collect pretty much everything you do on your machine, and while there are some prompts to reduce the amount of data it collects, even with everything set to minimum it's still quite a lot. Once you're past the out of box experience, you can finally start using your new Windows installation - but actually, not really. Unlike a Linux distribution, where all your hardware is detected automatically and uses the latest drivers, on Windows, you will most likely have to do some manual driver hunting, searching the web for PCI and vendor IDs to hopefully locate the correct drivers, which isn't always easy.
To make matters worse, even if Windows Update installs the correct drivers for you, those are often outdated, and you're better off downloading the latest versions straight from the vendors' websites. This is especially problematic for motherboard drivers - motherboard vendor websites often list horribly outdated drivers.

Updating Windows 11

Once you have all the drivers installed and updated, which often requires several reboots, you might notice that your system seems to be awfully busy, even when you're not actually doing anything with it. Most likely, this means Windows Update is running in the background, sucking up a lot of system resources. If you're used to Linux or BSD, where updating is a quick and centralised process, updating things on Windows is a complete and utter mess. Instead of just updating everything all at once, Windows Update will often require several different rounds of updates, marked by reboots. You'll also discover that Windows Update is not only incredibly slow both when it comes to downloading and installing, but that it's also incredibly buggy. Updates will randomly fail to install for no apparent reason, and there's a whole cottage industry of useless ML and SEO content on the internet trying to "help" you fix these issues. On my system, without doing anything, Windows Update managed to break itself in less than 24 hours - it listed 79 (!) driver updates related to the two Xeon processors (I assume it listed certain drivers for every single one of the 40 threads), but every single one of them, save for one or two, would fail to install with a useless generic error code. Every time I tried to install them, one or two more would install, with everything else failing, until eventually the update process just hung the entire system. A few days later, the listed updates just disappeared entirely from Windows Update.
The updates had no KB numbers, so it was impossible to find any information on them, and to this day, I have no idea what was going on here. Even after battling your way through Windows Update, you're not done actually updating your system. Unlike,
ExectOS is a preemptive, reentrant multitasking operating system that implements the XT architecture, which derives from the NT architecture. It is modular, and consists of two main layers: microkernel and user modes. Its kernel mode has full access to the hardware and system resources and runs code in a protected memory area. It consists of executive services, which is itself made up of many modules that do specific tasks, a kernel, and drivers. Unlike NT, the system does not feature a separate Hardware Abstraction Layer (HAL) between the physical hardware and the rest of the OS. Instead, the XT architecture integrates hardware-specific code with the kernel. The user mode is made up of subsystems, and it has been designed to run applications written for many different types of operating systems. This allows us to implement any environment subsystem to support applications that are strictly written to the corresponding standard (e.g. DOS, or POSIX). Thanks to that, ExectOS will allow running existing software, including Win32 applications. ExectOS website

What ExectOS seems to be is an implementation very close to what Windows NT originally was - implementing the theory of Windows NT, not the reality. It's clearly still in very early development, but in theory, I really like the idea of what they're trying to achieve here. Windows NT is, after all, in and of itself not a bad concept - it's just been tarred and feathered by decades of mismanagement from Microsoft. Implementing something that closely resembles the original, minimalist theories behind NT could lead to an interesting operating system for sure. ExectOS is open source, contains its own boot loader, only runs on EFI, and installation on real hardware, while technically possible, is discouraged.
Today just so happens to be the 40th birthday of X, the venerable windowing system that's on its way out, at least in the Linux world. From the original announcement by Robert W. Scheifler: I've spent the last couple weeks writing a window system for the VS100. I stole a fair amount of code from W, surrounded it with an asynchronous rather than a synchronous interface, and called it X. Overall performance appears to be about twice that of W. The code seems fairly solid at this point, although there are still some deficiencies to be fixed up. We at LCS have stopped using W, and are now actively building applications on X. Anyone else using W should seriously consider switching. This is not the ultimate window system, but I believe it is a good starting point for experimentation. Right at the moment there is a CLU (and an Argus) interface to X; a C interface is in the works. The three existing applications are a text editor (TED), an Argus I/O interface, and a primitive window manager. There is no documentation yet; anyone crazy enough to volunteer? I may get around to it eventually. Robert W. Scheifler

Reading this announcement email made me wonder if way back then, in 1984, the year of my birth, there were also people poo-pooing this new thing called "X" for not having all the features W had. There must've been people posting angry messages on various BBS servers about how X is dumb and useless since it doesn't have their feature in W that allows them to use an acoustic modem to send a signal over their internal telephone system by slapping their terminal in just the right spot to activate their Betamax that's hotwired into the telephone system.
I mean, W was only about a year old at the time, so probably not, but there must've been a lot of complaining and whining about this newfangled X thing, and now, 40 years later, long after it has outgrown its usefulness, we're again dealing with people so hell-bent on keeping an outdated system running, but hoping - nay, demanding - that others do the actual work of maintaining it. X served its purpose. It took way too long, but we've moved on. Virtually every new Linux user from the past 12-24 months will most likely never use X, and never even know what it was. They're using a more modern, more stable, more performant, more secure, and better maintained system, leading to a better user experience, and that's something we should all agree is a good thing.
Framework, the company making modular, upgradeable, and repairable laptops, and DeepComputing, the same company that's making the DC ROMA II RISC-V laptop we talked about last week, have announced something incredibly cool: a brand new RISC-V mainboard that fits right into existing Framework 13 laptops. Sporting a RISC-V StarFive JH7110 SoC, this groundbreaking Mainboard was independently designed and developed by DeepComputing. It's the main component of the very first RISC-V laptop to run Canonical's Ubuntu Desktop and Server, and the Fedora Desktop OS, and represents the first independently developed Mainboard for a Framework Laptop. The DeepComputing website

For a company that was predicted to fail by a popular Apple spokesperson, it seems Framework is doing remarkably well. This new mainboard is the first one not made by Framework itself, and is the clearest validation yet of the concept put into the market by the Framework team. I can't recall the last time you could buy a laptop powered by one architecture, and then upgrade to an entirely different architecture down the line, just by replacing the mainboard. The news of this RISC-V mainboard has made me dream of other possibilities - like someone crazy enough to design, I don't know, a POWER10 or POWER11 mainboard? Entirely impossible and unlikely due to heat constraints, but one may dream, right?
There's incredibly good news for people who use accessibility tools on Linux, but who were facing serious, gamebreaking problems when trying to use Wayland. Matt Campbell, of the GNOME accessibility team, has been hard at work on an entirely new accessibility architecture for modern free desktops, and he's got some impressive results to show for it already. I've now implemented enough of the new architecture that Orca is basically usable on Wayland with some real GTK 4 apps, including Nautilus, Text Editor, Podcasts, and the Fractal client for Matrix. Orca keyboard commands and keyboard learn mode work, with either Caps Lock or Insert as the Orca modifier. Mouse review also works more or less. Flat review is also working. The Orca command to left-click the current flat review item works for standard GTK 4 widgets. Matt Campbell

One of the major goals of the project was to enable such accessibility support for Flatpak applications without having to pass an exception for the AT-SPI bus. What this means is that the new accessibility architecture can run as part of a Flatpak application without having to break out of its sandbox, which is obviously a hugely important feature to implement. There's still a lot of work to be done, though. Something like GNOME Shell doesn't yet support Newton, of course, so that's still using the older, much slower AT-SPI bus. Wayland also doesn't support mouse synthesizing yet, things like font, size, style, and colour aren't exposed yet, and there are many more limitations due to this being such a new project. The project also isn't trying to be GNOME-specific; Campbell wants to work with the other desktops to eventually end up with an accessibility architecture that is truly cross-desktop. The blog post further goes into great detail about implementation details, current and possible future shortcomings, and a lot more.
After the very successful release of KDE Plasma 6.0, which moved the entire desktop environment and most of its applications over to Qt 6, fixed a whole slew of bugs, and streamlined the entire KDE desktop and its applications, it's now time for KDE Plasma 6.1, where we're going to see a much stronger focus on new features. While it's merely a point release, it's still a big one. The tentpole new feature of Plasma 6.1 is access to remote Plasma desktops: you can go into Settings and log into any Plasma desktop. This is built entirely and directly into KDE's own Wayland compositor, avoiding the use of third-party applications or hacky extensions to X.org. Having such remote access built right into the desktop environment and its compositor itself is a much cleaner implementation than in the before time with X. Another feature that worked just fine under X but was still missing from KDE Plasma on Wayland is something they now call "persistent applications" - basically, KDE will now remember which windows you had open when you closed KDE or shut down your computer, and open them back up right where you left off when you log back in. It's one of those things that got lost in the transition to Wayland, and having it back is really, really welcome. Speaking of Wayland, KDE Plasma 6.1 also introduces two major new rendering features. Explicit sync removes flickering and glitches most commonly seen on NVIDIA hardware, while triple buffering provides smoother animations and screen rendering. There's more here, too, such as a completely reworked edit desktop view, support for controlling keyboard LED backlighting traditionally found in gaming laptops, and more. KDE Plasma 6.1 will find its way to your distribution of choice soon enough, but of course, you can compile and install it yourself, too.
I've always found the world of DOS versions and variants to be confusing, since most of it took place when I was very young (I'm from 1984) so I wasn't paying much attention to computing quite yet, other than playing DOS games. One of the variants of DOS I never quite understood where it came from until much, much later, was DR-DOS. To this day, I pronounce this as "Doctor DOS". If you're also a little unclear on what, exactly, DR-DOS was, Bradford Morgan White has an excellent article detailing the origins and history of DR-DOS, making it very easy to get up to speed and expand your knowledge on DOS, which is surely a very marketable skill in the days of Electron and Electron for Developers. DR DOS was a great product. It was superior to other DOS versions in many ways, and it is certainly possible that it could have been more successful were it not for Microsoft Windows having been so wildly successful. Starting with Windows 95, the majority of computer users simply didn't much care about which DOS loaded Windows so long as it worked. There's quite a bit of lore regarding legal battles and copyrights surrounding CP/M and DOS involving Microsoft and Digital Research. This has been covered in previous articles to some extent, but I am not really certain how much would have changed had Microsoft and Digital Research got on. Gates and Kildall had been quite friendly at one point, and we know that the two mutually chose not to work together due to differences in business practices and beliefs. Kildall chose to be quite a bit more friendly and less competitive while Gates very much chose to be competitive and at times a bit ruthless. Additionally, Kildall sold DRI rather than continue the fight, and DRI had never really attempted to combine DR DOS with GEM as a cohesive product to fight Windows before Windows became the ultimate ruler of the OS market following Windows 3.1's release.
Still, it was an absolutely brilliant product and part of me will always feel that it ought to have won. Bradford Morgan White

I can definitely imagine an alternative timeline in which Digital Research managed to combine DR-DOS with GEM in a more attractive way, stealing Microsoft's thunder before Gates got the ball properly rolling with Windows 3.x. It's one of the many, many what-ifs in this sector, but not one you often hear or read about.
To lock subscribers into recurring monthly payments, Adobe would typically pre-select by default its most popular "annual paid monthly" plan, the FTC alleged. That subscription option locked users into an annual plan despite paying month to month. If they canceled after a two-week period, they'd owe Adobe an early termination fee (ETF) that costs 50 percent of their remaining annual subscription. The "material terms" of this fee are hidden during enrollment, the FTC claimed, only appearing in disclosures that are "designed to go unnoticed and that most consumers never see." Ashley Belanger at Ars Technica

There's a sucker for every corporation, but I highly doubt there's anyone out there who would consider this a fair business practice. This is so obviously designed to hide costs during sign-up, and then unveil them when the user considers quitting. If this is deemed legal or allowed, you can expect everyone to jump on this bandwagon to scam users out of their money. It goes further than this, though. According to the FTC, Adobe knew this practice was shady, but continued it anyway because altering it would negatively affect the bottom line. The FTC is actually targeting two Adobe executives directly, which is always nice to hear - it's usually management that pushes such illegal practices through, leaving the lower ranks little choice but to comply or lose their job. Stuff like this is exactly why confidence in the major technology companies is at an all-time low.
Cinnamon, the popular GTK desktop environment developed by the Linux Mint project, pushed out Cinnamon 6.2 today, which will serve as the default desktop for Linux Mint 22. It's a relatively minor release, but it does contain a major new feature which is actually quite welcome: a new GTK frontend for GNOME Online Accounts, part of the XApp project. This makes it possible to use the excellent GNOME Online Accounts framework, without having to resort to a GNOME application - and will come in very handy on other GTK desktops, too, like Xfce. The remainder of the changes consist of a slew of bugfixes, small new features, and nips and tucks here and there. Wayland support is still an in-progress effort for Cinnamon, so you'll be stuck with X for now.
Less than a month after 3.5.0, IceWM is already shipping version 3.6.0. Once again not a major, earth-shattering release, it does contain at least one really cool feature that I think is pretty nifty: if you double-click on a window border, it will maximise just that side of the window. Pretty neat. For the rest, it's small changes and bug fixes for this venerable window manager.
It seems that if you want to steer clear of having Facebook use your Facebook, WhatsApp, Instagram, etc. data for machine learning training, you might want to consider moving to the European Union. Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe. The decision comes after data regulators rebuffed the tech giant's claims that it had "legitimate interests" in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users' data - including personal posts and pictures - to train future AI tools. Ashley Belanger

These are just the opening salvos of the legal war that's brewing here, so who knows how it's going to turn out. For now, though, European Union Facebook users are safe from Facebook's machine learning training.
Way, way back in the cold and bleak days of 2021, I mentioned Vinix on OSNews, an operating system written in the V programming language. A few days ago, over on Mastodon, the official account for the V programming language sent out a screenshot showing Solitaire running on Vinix, showing off what the experimental operating system can do. The project doesn't seem to really publish any changelogs or release notes, so it's difficult to figure out what, exactly, is going on at the moment. The roadmap indicates they've already got a solid base to work from, such as mlibc, bash, GCC/G++, X and an X window manager, and more - with things like Wayland, networking, and more on the roadmap.
I have a feeling Microsoft is really starting to feel some pressure about its plans to abandon Windows 10 next year. Data shows that 70% of Windows users are still using Windows 10, and this percentage has proven to be remarkably resilient, making it very likely that hundreds of millions of Windows users will be out of regular, mainstream support and security patches next year. It seems Microsoft is, therefore, turning up the PR campaign, this time by publishing a blog post about myths and misconceptions about Windows 11. The kind of supposed myths and misconceptions Microsoft details are exactly the kind of stuff corporations with large deployments worry about at night. For instance, Microsoft repeatedly bangs the drum on application compatibility, stating that despite the change in number - 10 to 11 - Windows 11 is built on the same base as its predecessor, and as such, touts 99.7% application compatibility. Furthermore, Microsoft adds that if businesses do suffer from an incompatibility, they can use something called App Assure - which I will intentionally mispronounce until the day I die because I'm apparently a child - to fix any issues. Apparently, the visual changes to the user interface in Windows 11 are also a cause of concern for businesses, as Microsoft dedicated an entire entry to this, citing a study that the visual changes do not negatively impact productivity. The blog post then goes on to explain how the changes are actually really great and enhance productivity - you know, the usual PR speak. There's more in the blog post, and I have a feeling we'll be seeing more and more of this kind of PR offensive as the cut-off date for Windows 10 support nears. Windows 10 users will probably also see more and more Windows 11 ads when using their computers, too, urging them to upgrade even when they very well cannot because of missing TPMs or unsupported processors.
I don't think any of these things will work to bring that 70% number down much over the next 12 months, and that's a big problem for Microsoft. I'm not going to make any predictions, but I wouldn't be surprised if Microsoft will simply be forced by, well, reality to extend the official support for Windows 10 well beyond 2025. Especially with all the recent investigations into Microsoft's shoddy internal security culture, there's just no way they can cut 70% of their users off from security updates and patches.
Stranded on a desert island; lost in the forest; stuck in the snow; injured and unable to get back to civilization. Human beings have used their ingenuity for millennia to try to signal for rescue. There's been a progression of technological innovations: smoke signals, mirrors, a loud whistle, a portable radio, a mobile phone. With each invention, it's been possible to venture a little farther from populated areas and still have peace of mind about being able to call for help. But once you get past the range of a terrestrial radio tower, whether it's into the wilderness or out at sea, it starts to get more complicated and expensive to be able to call for rescue. In the next year or so, it's going to become a lot simpler and less expensive. Probably enough to become ubiquitous. Hardware infrastructure is already in place, and the relevant software and service support is rolling out now. It's been possible for decades for adventurers to keep in contact via satellite. The first commercial maritime satellite communications system was launched in 1976. Globalstar and Iridium launched in the late 90s and drove down the device size and service cost of satellite phones. However, the service was a lot more expensive than cellular phone service, and not enough people were willing to pay for remote comms to be able to overcome the massive infrastructure costs, and both companies went bankrupt. Their investors lost their money, but the satellites still worked, so once the bankruptcies were hashed out they fulfilled their promise, at least technologically. On a parallel track, in the late 1980s the International Cospas-Sarsat Programme was set up to develop a satellite-aided search-and-rescue system that detects and locates emergency beacons activated by aircraft, ships, and people engaged in recreational activities in remote areas, and then sends these distress alerts to search-and-rescue (SAR) authorities.
Many types of beacons are available, and nowadays they send exact GPS coordinates along with the call for rescue. In the 2010s, the Satellite Emergency Notification Device, or SEND device, was brought to market. These are portable beacons that connect to the Globalstar and Iridium networks and allow people in remote areas not only to call for help in emergencies, but also to communicate via text messaging. Currently the two most popular SEND devices are the Garmin inReach Mini 2 and the Spot X. These devices cost $400 and $250 USD respectively, and require monthly service fees of $12-40. For someone undertaking a long and dangerous expedition into the backcountry, these are very reasonable costs, especially for someone who does it often. But for most people, it's just not practical to pay for and carry a device like that "just in case." In 2022, the iPhone 14 included a feature that was the first step in taking satellite-based communication into the mainstream. It allows iPhone users to share their location via the Find My feature, with new radio hardware that connects to the Globalstar service. So if you're out adventuring, your friends can keep track of where you are. And if there's an emergency, you can make an emergency SOS. It's not just a generic Mayday: you can text specific details about your emergency and it will be transmitted to the local authorities. You can also choose to notify your personal emergency contacts. Last week, at WWDC, Apple announced the next stage: in iOS 18, iMessage users will be able to send text messages over satellite, using the same Globalstar network as its SOS features. Initially at least, this feature is expected to be free. With this expansion, iPhone users will have the basic functionality of a SPOT or inReach device, without special hardware or a monthly fee. SpaceX's Starlink, which first offered service in 2021, has much higher bandwidth and lower latency than the Globalstar and Iridium networks.
Starlink's current offering requires a dinner-plate-sized antenna and conventional networking hardware to enable high-bandwidth mobile internet. It's great for a vehicle, but impractical for a backpacker. However, SpaceX has announced second-generation satellites that can connect to 1900MHz spectrum mobile phone radios, and T-Mobile has announced that it will be enabling the service for its customers in late 2024, and Apple, Google, and Samsung devices are confirmed to be supported. Initially, like Apple's service, this will be restricted to text messaging and other low-bandwidth applications. Phone calls and higher bandwidth internet connectivity are promised in 2025. The other two big US carriers, AT&T and Verizon, have announced they will be partnering with a competing service, AST SpaceMobile, but it's unlikely those plans will come to fruition very soon. Mobile phone users outside the US will also need to wait. Apple's Messages via satellite is only announced for US users, as is T-Mobile's offering. So if you're in the US, and have an iPhone, or are a T-Mobile subscriber with an Apple, Samsung, or Google device, you'll soon be able to point your phone at the sky, even in remote areas, to call for help, give your friends an update on your expedition, or just stay in touch. Pretty soon, Tom Hanks won't have to make friends with a volleyball when he crash lands on a deserted island, at least not until his battery dies.