It was Stability's armada of GPUs, the wildly powerful and equally expensive chips undergirding AI, that were so taxing the company's finances. Hosted by AWS, they had long been one of Mostaque's bragging points; he often touted them as one of the world's 10 largest supercomputers. They were responsible for helping Stability's researchers build and maintain one of the top AI image generators, as well as break important new ground on generative audio, video and 3D models. "Undeniably, Stability has continued to ship a lot of models," said one former employee. "They may not have profited off of it, but the broader ecosystem benefitted in a huge, huge way." But the costs associated with so much compute were now threatening to sink the company. According to an internal October financial forecast seen by Forbes, Stability was on track to spend $99 million on compute in 2023. It noted as well that Stability was "underpaying AWS bills for July (by $1M)" and "not planning to pay AWS at the end of October for August usage ($7M)." Then there were the September and October bills, plus $1 million owed to Google Cloud and $600,000 to GPU cloud data center CoreWeave. (Amazon, Google and CoreWeave declined to comment.) Kenrick Cai and Iain Martin As a Dutch person, I can smell a popping bubble from a mile away, even if tulipmania is most likely anti-Dutch British propaganda. In all seriousness, there are definitely signs that the insane energy and compute costs of artificial image and video generation in particular are rising at such a pace that it's simply unsustainable for the popularity of these tools to just keep growing. Eventually someone's going to have to pay, and I wonder just how much regular people are willing to pay for this kind of stuff.
Amazon is phasing out its checkout-less grocery stores with "Just Walk Out" technology, first reported by The Information Tuesday. The company's senior vice president of grocery stores says they're moving away from Just Walk Out, which relied on cameras and sensors to track what people were leaving the store with. Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped. Maxwell Zeff Behind every Silicon Valley innovation are underpaid poor people.
Even with that said, those gray-hairs will frequently claim that of the many makers of floppies out there, 3M made the best ones. Given that, I was curious to figure out exactly why 3M became the most memorable brand in data storage during the formative days of computing, and why it abandoned the product. Ernie Smith I do not remember if I ever held any particular views on which brand of floppy disk (or diskettes, as we called them) was the best. We had a wide variety of brands, and I can't recall any one of them being better than the others, but then, I'm sure people in professional settings had more experience with the little black squares and thus developed all kinds of feelings about them.
Windows 10 is reaching end of support on October 14, 2025, so if you're still using Windows 10 - and let's face it, if you're somehow forced to still use Windows, better 10 than 11 - your time is running out. Luckily, end of support is a bit of a nebulous term when it comes to Microsoft products, and many among you, especially those managing larger fleets of systems, will know Microsoft offers something called the Extended Security Update (ESU) program, wherein you get additional security updates even after end of support. Microsoft just unveiled the prices for this program for Windows 10. While there are several schemes, the one most of you will be interested in is this one: With the 5-by-5 activation method, you'll download an activation key and apply it to individual Windows 10 devices that you've selected for your ESU program. Manage it via scripting or the Volume Activation Management Tool (VAMT), among other methods. You can use on-premises management tools such as Windows Server Update Services (WSUS) with Configuration Manager to download and apply the updates to your Windows 10 devices. The 5-by-5 activation subscription will establish the Year One list price of ESU for Windows 10. This is the base license and will cost $61 USD per device for Year 1, similar to the Windows 7 ESU Year 1 price. Jason Leznek Honestly, that's not an egregious price, but do note that this price doubles every year for three years total, and note that if you want to start using ESU in year two, you'll have to pay for year one as well. In other words, pricing ramps up fast. Furthermore, this program only includes security updates - no new features or anything like that, and it doesn't include support either. So, if you're still using Windows 10 after October 14, 2025, you'll either have to pay up, have an insecure system, downgrade to Windows 11, or move to a better alternative. Choice's yours.
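To put the ramp-up in perspective, here's a quick back-of-the-envelope calculation, assuming the announced $61 Year 1 price simply doubles each year and that enrolling late still means buying the earlier years, both per Microsoft's announcement:

```python
# Cumulative per-device Windows 10 ESU cost, assuming the $61 Year 1
# list price doubles each year for the program's three years, and that
# late enrollers must also pay for the earlier years.
BASE_PRICE = 61  # USD, announced Year 1 list price per device

def esu_cost_through(year: int) -> int:
    """Total per-device cost to stay covered through the given ESU year."""
    if year not in (1, 2, 3):
        raise ValueError("ESU runs for three years")
    return sum(BASE_PRICE * 2 ** (y - 1) for y in range(1, year + 1))

print(esu_cost_through(3))  # 61 + 122 + 244 = 427
```

So a device kept on ESU for the full three years costs $427, seven times the first-year price.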
Microsoft is currently testing a new AI-powered Xbox chatbot that can be used to automate support tasks. Sources familiar with Microsoft's plans tell The Verge that the software giant has been testing an "embodied AI character" that animates when responding to Xbox support queries. I understand this Xbox AI chatbot is part of a larger effort inside Microsoft to apply AI to its Xbox platform and services. Tom Warren at The Verge I'm convinced. This is the future. Artificial intelligence, AI, no quotation marks. Please, Microsoft. Train this AI on Xbox voice chat and messages. What could possibly go wrong?
Quests are a way for players to discover games and earn rewards for playing them on Discord. We started experimenting with them over the last year, and millions of you opted in and completed them. We've heard great feedback from developers who partnered with us to create them and from many of you who completed one. If you didn't see firsthand, the "May the 4th" Fortnite Quest is a great example. Now, we're opening up sponsored Quests to more game developers. Peter Sellis That's a lot of fancy, hip words to say Discord is going to show you ads. I have an odd relationship with Discord - it holds a special place in my heart because Discord is how I met my now-wife and the mother of our children, so understandably, the chat platform has a special meaning for us. At the same time, though, Discord has been getting steadily worse and less usable over the years, and while my wife isn't too bothered by that, I certainly am - and so we moved our instant messaging over to Signal instead. My wife still uses Discord with her friends. Seeing a platform that used to be quite usable, and easily the best way to manage a group of geographically spread-out friends, fall prey to the same kind of bullshit so many other platforms have succumbed to has been hard to watch. Discord today is almost unrecognisable compared to what it was like 6-7 years ago, and now there's even going to be ads - the final nail in the coffin for the possibility of me ever going back to using it.
Before the cancellation of The Problem with Jon Stewart on Apple TV+, Apple forbade the inclusion of Federal Trade Commission Chair Lina Khan as a guest and steered the show away from confronting issues related to artificial intelligence, according to Jon Stewart. Samuel Axon at Ars Technica Just when you thought Apple and Tim Cook couldn't get any more unlikable.
Redis, a tremendously popular tool for storing data in-memory rather than in a database, recently switched its licensing from an open source BSD license to both a Source Available License and a Server Side Public License (SSPL). The software project and company supporting it were fairly clear about why they did this. Redis CEO Rowan Trollope wrote on March 20 that while Redis and volunteers sponsored the bulk of the project's code development, "the majority of Redis' commercial sales are channeled through the largest cloud service providers, who commoditize Redis' investments and its open source community." Clarifying a bit, "cloud service providers hosting Redis offerings will no longer be permitted to use the source code of Redis free of charge." This generated a lot of discussion, blowback, and action. The biggest thing was a fork of the Redis project, Valkey, that is backed by The Linux Foundation and, critically, also Amazon Web Services, Google Cloud, Oracle, Ericsson, and Snap Inc. "Valkey is fully open source," Linux Foundation execs note, with the kind of BSD-3-Clause license Redis sported until recently. You might note the exception of Microsoft from that list of fork fans. Kevin Purdy at Ars Technica Moves like this never go down well.
Update: the proposal has now been formally announced on the devel mailing list and Fedora Discussions. I have been assured by the main author of the proposal itself that this is very much not an April Fools joke, but of course, there's still the very real possibility we're being led on here. Still, I'm taking the risk and treating this as a serious change proposal for Fedora, even though it's likely to cause some controversy in the wider Fedora community. The proposal is written by Joshua Strobl, the lead developer of Budgie. Yes, this is a change proposal to make KDE the default desktop environment of Fedora Workstation. The reasoning is that KDE is more approachable for new users than GNOME, it supports standards better, the industry seems to be making moves to KDE (see the Steam Deck), and so on. KDE also has more advanced features people have come to expect from a desktop, like HDR, VRR, and more, and it's the more advanced Wayland desktop. The important note here is that in the highly unlikely event this proposal would be accepted, it's not like current Fedora GNOME users will be 'upgraded' to KDE when Fedora 42 gets released. The idea is to promote the current Fedora Plasma spin to the main Fedora Workstation release, and demote the Fedora GNOME release to a mere Fedora spin, like KDE is now. While I would personally support this change, it's pretty much 100% unlikely this change proposal will make it through. Red Hat and Fedora are entirely GNOME-first, and no matter how much I believe that's misguided when looking at the state of the two primary open source desktops today, that's not going to change. Still, it's an interesting discussion point, if only to highlight that the frustrations with GNOME run a lot deeper than people seem to think.
Way back in the day, back when I wasn't even working at OSNews yet, I used to run QNX as my desktop operating system, together with a small number of other enthusiasts. It was a struggle, for sure, but it was fun, exciting, and nobody else was crazy enough to do so. Sadly, the small QNX desktop community wasn't even remotely interesting to QNX, and later Blackberry when they acquired the company, and eventually the stand-alone Neutrino-powered version of QNX disappeared behind confusing signup screens and other dark patterns. It meant the end of our small little community. Much to my utter surprise and delight, I saw a post by js about how he ported GCC 10 to QNX - in this case, to QNX 6.5 SP1, released in 2012 - and submitted it to pkgsrc. His ultimate goal is to port one of his other projects, ObjFW, to QNX. He makes use of pkgsrc to do this kind of work, which also means he had to make pkgsrc bootstrap and a lot of other software work on QNX. We're at QNX 8.0 by now, and as much as I bang my head against QNX and BlackBerry's wall of marketing and corporate speak, I just can't find out if it's even still possible to download QNX Neutrino and install it on real generic hardware today.
This is a contender for the World Record for Feature Creep Side Project. It is pretty high in the contender list as it's a bolt on to another contender for the World Record for Feature Creep Side Project (the MII Apple //e emulator). It is a library that duplicates a lot of the Macintosh Classic "Toolbox" APIs. It is not a complete implementation, but it is enough to make a few simple applications, also, all the bits I needed for the MII emulator. libmui GitHub page This is absolutely wild.
On October 3, 2023, Google and Yahoo announced upcoming email security standards to prevent spam, phishing and malware attempts. Outlook.com (formerly Hotmail) is also enforcing these policies. With the big 3 Email Service Providers (ESPs) in agreement, expect widespread adoption soon. Today's threats are more complex than ever and more ESPs will begin tightening the reins. Failure to comply with these guidelines will result in emails being blocked beginning April 2024. In this article, we're going to cover these guidelines and explain what senders must do in order to achieve and maintain compliance. XOMedia Some of these changes - most of them impact bulk senders and spammers - should've been implemented ages ago, but seeing them being pushed by the three major email providers, who all happen to be owned, of course, by massive corporations, does raise quite a few red flags. Instinctively, this makes me worried about ulterior motives, especially since running your own email server is already fraught with issues due to the nebulous ways Gmail treats emails coming from small servers. With the rising interest in self-hosting and things like Mastodon, I hope we're also going to see a resurgence in hosting your own e-mail. I really don't like that all my email is going through Gmail - it's what OSNews uses - but I don't feel like dealing with all the delivery issues people who try self-hosting email lament about. With a possible renewed wave of interest in it, we might be able to make the process easier and more reliable.
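For context, the guidelines in question centre on email authentication - SPF, DKIM, and a published DMARC policy, plus things like one-click unsubscribe for bulk senders. A DMARC policy is just a DNS TXT record at _dmarc.<domain> made of semicolon-separated tag=value pairs; a minimal sketch of pulling the policy out of one (the record contents below are a made-up example):

```python
# Toy parser for a DMARC TXT record's tag=value pairs. The example
# record is invented for illustration; real validation also involves
# SPF/DKIM alignment checks, which this sketch doesn't attempt.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=quarantine; rua=mailto:reports@example.com")
print(policy["p"])  # quarantine
```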
Microsoft will sell its chat and video app Teams separately from its Office product globally, the U.S. tech giant said on Monday, six months after it unbundled the two products in Europe in a bid to avert a possible EU antitrust fine. The European Commission has been investigating Microsoft's tying of Office and Teams since a 2020 complaint by Salesforce-owned competing workspace messaging app Slack. Foo Yun Chee at Reuters I honestly misread this as Microsoft selling Teams off, which would've been far bigger news. Unbundling Teams from Office globally is just Microsoft applying its recent European Union policy to the rest of the world. All we need now is Microsoft to stop trying to make Teams for families and friends happen, because nobody will ever want to use Teams for anything, let alone personal use.
Every computer has at least one heart which beats the cadence to all the other chips. The CloCK output pin is connected to a copper line which spreads to most components, into their CLK input pin. If you are mostly a software person like me, you may have never noticed it but all kinds of processors have a CLK input pin. From CPUs (Motorola 68000, Intel Pentium, MOS 6502), to custom graphic chips (Midway's DMA2, Capcom CPS-A/CPS-B, Sega's Genesis VDP) to audio chips (Yamaha 2151, OKI msm6295), they all have one. Fabien Sanglard I've watched enough Adrian Black that I already knew all of this, and I'm assuming so did many of you. But hey, I'll never pass up the opportunity to link to the insides of the Super Nintendo.
As some of the dust around the xz backdoor is slowly starting to settle, we've been getting a pretty clear picture of what, exactly, happened, and it's not pretty. This is a story of the sole maintainer of a crucial building block of the open source stack having mental health issues, which at least partly contributes to a lack of interest in maintaining xz. It seems a coordinated campaign - consensus seems to point to a state actor - is then started to infiltrate xz, with the goal of inserting a backdoor into the project. Evan Boehs has done the legwork of diving into the mailing lists and commit logs of various projects and the people involved, and it almost reads like the nerd version of a spy novel. It involves seemingly fake users and accounts violently pressuring the original xz maintainer to add a second maintainer; a second maintainer who mysteriously seems to appear at around the same time, like a saviour. This second maintainer manages to gain the original maintainer's trust, and within months, this mysterious newcomer more or less takes over as the new maintainer. As the new maintainer, this person starts adding the malicious code in question. Sockpuppet accounts show up to add code to oss-fuzz to try and make sure the backdoor won't be detected. Once all the code is in place for the backdoor to function, more fake accounts show up to push for the compromised versions of xz to be included in Debian, Red Hat, Ubuntu, and possibly others. Roughly at this point, the backdoor is discovered entirely by chance because Andres Freund noticed his SSH logins felt a fraction of a second slower, and he wanted to know why. What seems to have happened here is a bad actor - again, most likely a state actor - finding and targeting a vulnerable maintainer, who, through clever social engineering on both a personal level as well as the project level, gained control over a crucial but unexciting building block of the open source stack. 
Once enough control and trust was gained, the bad actor added a backdoor to do... Well, something. It seems nobody really knows yet what the ultimate goal was, but we can all make some educated guesses and none of them are any good. When we think of vulnerabilities in computer software, we tend to focus on bugs and mistakes that unintentionally create the conditions wherein someone with malicious intent can do, well, malicious things. We don't often consider the possibility of maintainers being malicious, secretly adding backdoors for all kinds of nefarious purposes. The problem the xz backdoor highlights is that while we have quite a few ways to prevent, discover, mitigate, and fix unintentional security holes, we seem to have pretty much nothing in place to prevent intentional backdoors placed by trusted maintainers. And this is a real problem. There are so many utterly crucial but deeply boring building blocks all over the open source stacks pretty much the entire computing world makes use of that it has become a meme, spearheaded by xkcd's classic comic. The weakness in many of these types of projects is not the code, but the people maintaining that code, most likely through no fault of their own. There are so many things life can throw at you that would make you susceptible to social engineering - money problems, health problems, mental health issues, burnout, relationship problems, god knows what else - and the open source community has nothing in place to help maintainers of obscure but crucial pieces of infrastructure deal with problems like these. That's why I'm suggesting the idea of setting up a foundation - or whatever legal entity makes sense - that is dedicated to helping maintainers who face the kinds of problems like the maintainer of xz did. A place where a maintainer who is dealing with problems outside of the code repository can go to for help, advice, maybe even financial and health assistance if needed. 
Even if all this foundation offers to someone is a person to talk to in confidence, it might mean the difference between burning out completely, or recovering at least enough to then possibly find other ways to improve one's situation. If someone is burnt out or has a mental health crisis, they could contact the foundation, tell their story, and say, hey, I need a few months to recover and deal with my problems, can we put out a call among already trusted members of the open source community to step in for me for a while? Keep the ship steady as she goes without rocking it until I get back or we find someone to take over permanently? This way, the wider community will also know the regular, trusted maintainer is stepping down for a while, and that any new commits should be treated with extra care, solving the problem of some unknown maintainer of an obscure but important package suffering in obscurity, the only hints found in the low-volume mailing list well after something goes wrong. The financial responsibility for such a safety net should undoubtedly be borne by the long list of ultra-rich megacorporations who profit off the backs of these people toiling away in obscurity. The financial burden for something like this would be pocket change to the likes of Google, Apple, IBM, Microsoft, and so on, but could make a contribution to open source far greater than any code dump. Governments could probably be involved too, but that will most likely open up a whole can of worms, so I'm not sure if that would be a good idea. I'm not proposing this be some sort of glorified ATM where people can go to get some free money whenever they feel like it. The goal should be to help people who form crucial cogs in the delicate machinery of computing to live healthy, sustainable lives so their code and contributions to the community don't get compromised.
This month, after surpassing our legacy layout engine in the CSS test suites, we're proud to share that Servo has surpassed legacy in the whole suite of Web Platform Tests as well! Servo blog Another month, another detailed progress report from Servo, the Rust browser engine once started by Mozilla. There's a lot of interesting reading here for web developers.
This year, there have been numerous improvements both to the kernel's correctness, as well as raw performance. The signal and TLB shootdown MRs have significantly improved kernel memory integrity and possibly eliminated many hard-to-debug and nontrivial heisenbugs. Nevertheless, there is still a lot of work to be done optimizing and fixing bugs in relibc, in order to improve compatibility with ported applications, and most importantly of all, getting closer to a self-hosted Redox. Jacob Lorentzon (4lDO2) I love how much of the focus for Redox seems to be on the lower levels of the operating system, because it's something many projects tend to kind of forget to highlight, to spend more time on new icons or whatever. These in-depth Redox articles are always informative, and have me very excited about Redox' future. Obviously, Redox is on the list of operating systems I need to write a proper article about. I'm not sure if there's enough for a full review or if it'll be more of a short look - we'll see when we get there.
In October 2023, we published a recap of the top 10 features Windows 11 users want for the redesigned Start menu. Number 6 was the ability to switch from list view to grid view in the "All Apps" list, which received over 1,500 upvotes in the Feedback Hub. Six months later, Microsoft finally appears to be ready to give users what they want. PhantomOfEarth, the ever-giving source of hidden stuff in Windows 11 preview builds, discovered that Windows 11 build 22635.3420 lets you change from list to grid view in the "All Apps" section. Like other unannounced features, this one requires a bit of tinkering using the ViVeTool app until Microsoft makes it official. Taras Buria I'm still baffled Microsoft consistently manages to mess up something as once-iconic and impactful as the Start menu. It seems like Microsoft just can't leave it well enough alone, even though it kind of already nailed it in Windows 95 - just give us that, but with a modern search function, and we're all going to be happy. That's it. We don't want or need more.
A couple of years ago, I imported a Japanese-market 4x4 van into the US; a 1996 Mitsubishi Delica. Based on the maps I found in the seat pocket and other clues, it seems to have spent its life at some city dweller's cabin in the mountains around Fukushima, and only driven occasionally. Despite being over 25 years old, it only had 77,000 km on the odometer. The van had some interesting old tech installed in it: what appears to be a radar detector labeled "Super Eagle 30" and a Panasonic-brand electronic toll collection device that you can insert a smart card into. One particularly noteworthy accessory that was available in mid-90s Delicas was a built-in karaoke machine for the rear passengers. Sadly, mine didn't have that feature. But the most interesting accessory installed in the van was the Avco Maptwin Inter, which I immediately identified as some kind of electronic navigation aid, about which there is very little information available on the English-language internet. When I first saw the Maptwin, I had thought it might be some kind of proto-GPS that displayed latitude/longitude coordinates that you could look up on a paper map. Alas, it's not that cool. It was not connected to any kind of antenna, and the electronics inside seem inadequate for the reception of a GPS signal. The Maptwin was, however, wired into an RPM counter that was attached between the transmission and the speedometer cable, presumably to deliver an extremely accurate and convenient display of how many kilometers have been traveled since the display was last reset. What I've been able to learn is that the Maptwin is a computer that was mostly used for rally race navigation, a precursor to devices still available from manufacturers like Terra Trip. Now, the Mitsubishi Delica is about the best 4x4 minivan you can get, but it's extremely slow and unwieldy at speed, so it would be pretty terrible for rally racing.
My best guess is that the owner used this device as a navigation aid for overland exploration, as the name "Maptwin" implies, to augment the utility of a paper map. On the other hand, I found an article that indicates that some kinds of rallies were not high speed affairs, but rather accuracy-based navigation puzzles of sorts, so who knows? The Maptwin wasn't working when I got the van, and I don't know if it's actually broken or just needs to be wired up correctly. If any OSNews readers have any additional information about any of the devices I've mentioned, please enlighten us in the comments. If anyone would like to try to get the Maptwin working and report back, please let me know.
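For what it's worth, the principle behind a cable-driven trip meter like this is simple: count pulses from the rotating speedometer cable and divide by a calibration constant. A toy model in that spirit (the pulses-per-kilometer figure is invented; a real unit would be calibrated against a measured stretch of road):

```python
# Toy model of a rally-style trip meter fed by speedometer-cable
# pulses, in the spirit of the Maptwin. Calibration value is made up.
class TripComputer:
    def __init__(self, pulses_per_km: float):
        self.pulses_per_km = pulses_per_km
        self.pulses = 0

    def tick(self, n: int = 1) -> None:
        """Register n pulses from the cable-driven counter."""
        self.pulses += n

    def distance_km(self) -> float:
        return self.pulses / self.pulses_per_km

    def reset(self) -> None:
        """Zero the resettable display, as the Maptwin's reset would."""
        self.pulses = 0

tc = TripComputer(pulses_per_km=1000)
tc.tick(2500)
print(tc.distance_km())  # 2.5
```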
NetBSD 10.0 has been released, and it brings a lot of improvements, new features, and fixes compared to the previous release, 9.3. First and foremost, there are massive performance improvements when it comes to compute and filesystem-bound applications on multicore and multiprocessor systems. NetBSD 10.0 also brings WireGuard support compatible with implementations on other systems, although this is still experimental. There's also a lot of added support for various ARM SoCs and boards, including Apple's M1 chip, and there's new support for compat_linux on AArch64, for running Linux programs. Of course, there's also a ton of new and updated drivers, notably the graphics drivers which are now synced to Linux 5.6, bringing a ton of improvements with them. This is just a small sliver of all the changes, so be sure to read the entire release announcement for everything else.
It's the ext2 filesystem driver that will be marked as deprecated in the upcoming 6.9 Linux kernel. The main issue is that even if the filesystem is created with 256 byte inodes (mkfs.ext2 -I 256), the filesystem driver will stick to 32 bit dates. Because of this, the driver does not support inode timestamps beyond 03:14:07 UTC on 19 January 2038. Michael Opdenacker Kernel developer Ted Ts'o did state that if someone wants to add support for 64-bit dates to ext2, it shouldn't be too hard. I doubt many people still use ext2, but if someone is willing to step up, the deprecation can be undone by adding this support.
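The oddly specific cutoff falls straight out of the arithmetic: a signed 32-bit counter of seconds since the Unix epoch tops out at 2^31 - 1, which converts to exactly that date:

```python
# The Y2038 limit: the largest timestamp a signed 32-bit seconds-since-
# epoch field can hold, converted to a human-readable UTC date.
from datetime import datetime, timezone

MAX_32BIT_SECONDS = 2**31 - 1  # 2147483647
limit = datetime.fromtimestamp(MAX_32BIT_SECONDS, tz=timezone.utc)
print(limit)  # 2038-01-19 03:14:07+00:00
```

One second later, the counter wraps around to a negative value, which lands the date back in December 1901.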
After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored. At first I thought this was a compromise of debian's package, but it turns out to be upstream. Andres Freund I don't normally report on security issues, but this is a big one not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor of said project, and makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund. I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution. Andres Freund The exploit was only added to the release tarballs, and not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way to the bloodiest of bleeding-edge distributions, such as Fedora Rawhide 41 and Debian testing, unstable and experimental, and as such has not been widely spread just yet. Nobody seems to know quite yet what the ultimate intent of the exploit is. Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
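If you want to check whether your own system shipped one of the affected releases, the known-bad versions are 5.6.0 and 5.6.1. A small sketch for matching a version string against them (the version-string format assumed here mirrors typical `xz --version` output; check your distribution's security advisory for the authoritative word):

```python
# Check an xz/liblzma version string against the releases known to
# contain the backdoor (5.6.0 and 5.6.1). The input format is an
# assumption modeled on typical `xz --version` output.
import re

COMPROMISED_VERSIONS = {"5.6.0", "5.6.1"}

def is_compromised(version_output: str) -> bool:
    match = re.search(r"\d+\.\d+\.\d+", version_output)
    return match is not None and match.group(0) in COMPROMISED_VERSIONS

print(is_compromised("xz (XZ Utils) 5.6.1"))  # True
print(is_compromised("xz (XZ Utils) 5.4.6"))  # False
```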
Since the technology industry and associated media outlets tend to focus primarily on the latest and greatest technology and what's right around the corner, it sometimes seems as if the only valid option when you need a new laptop, phone, desktop, or whatever is to spend top euro on the newest, most expensive incarnations of those. But what if you need, say, a new laptop, but you're not swimming in excess disposable income? Or you just don't want to spend 1000-2000 euro on a new laptop? The tech media tends to have an answer for this: buy something like a cheap Chromebook or an e-waste €350 Windows laptop and call it a day - you don't deserve a nice experience. However, there's a far better option than spending money on a shackled Chromebook or an underpowered bottom-of-the-barrel Windows laptop: buy used. Recently, I decided to buy a used laptop, and I set it up how I would set up any new laptop, to get an idea of what's out there. Here's how it went. For this little experiment, I first had to settle on a brand, and to be brutally honest, that was an easy choice. ThinkPads seem to be universally regarded as excellent choices for a used laptop for a variety of reasons which I'll get to later. After weighing some of the various models, options, and my budget, I decided to go for a Lenovo ThinkPad T450s for about €150, and about a week later, the device arrived at my local supermarket for pickup. Before I settled on this specific ThinkPad, I had a few demands and requirements. First and foremost, since I don't like large laptops, I didn't want anything bigger than roughly 14'', and since I'm a bit of a pixel count snob, 1920×1080 was non-negotiable. Since I already have a Dell XPS 13 with an 8th Gen Core i7, I figured going 3-4 generations older seemed like it would give me at least somewhat of a generational performance difference. An SSD was obviously a must, and as long as there were expansion options, RAM did not matter to me. The T450s delivered on all of these.
It's got the 1920×1080 14'' IPS panel (there's also a lower resolution panel, so be sure to check you're getting the right one), a Core i5-5300U with 2 cores and 4 threads with a base frequency of 2.30GHz and a maximum boost frequency of 2.90GHz, Intel HD 5500 graphics, a 128GB SATA SSD, and 4GB of RAM. Since 4GB is a bit on the low side for me, I ordered an additional 8GB SO-DIMM right away for €35. This brought the total price for this machine to €185, which I considered acceptable. For that price, it also came with its Windows license, for whatever that's worth. I don't want to turn this into a detailed review of a laptop from 2015, but let's go over what it's like to use this machine today. The display cover is made of carbon-reinforced plastic, and the rest of the body of magnesium. You can clearly feel this laptop is of a slightly older vintage, as it feels a bit more dinky than I'm used to from my XPS 13 9370 and my tiny Chuwi MiniBook X (2023). It doesn't feel crappy or cheap or anything - just not as solid as you might expect from a modern machine. It's got a whole load of ports to work with, though, which is refreshing compared to the trend of today. On the left side, there's a smartcard slot, USB 3.0, mini DisplayPort, another USB 3.0, and the power connector. On the right side, there's a headphone jack, an SD card slot, another USB 3.0 port, an Ethernet jack, and a VGA port. On the bottom of the laptop is a docking port to plug it into various docking stations with additional ports and connectors. On the inside, there's a free M.2 slot (a small 2242 one). First, I eradicated Windows from the SSD because while I'm okay with an outdated laptop, I'm not okay with an outdated operating system (subscribe to our Patreon to ensure more of these top-quality jokes).
After messing around with various operating systems and distributions for a while, I got back to business and installed my distribution of choice, Fedora, but I did opt for the Xfce version instead of my usual KDE one just for variety's sake. ThinkPads tend to be well-supported by Linux, and the T450s is no exception. Everything I could test - save for the smartcard reader, since I don't have a smartcard to test it with - works out of the box, and nothing required any manual configuration or tweaking to work properly. Everything from trackpad gestures to the little ThinkLight on the lid worked perfectly, without having to deal with hunting for drivers and that sort of nonsense Windows users have to deal with. This is normal for most laptops and Linux now, but it's nice to see it applies to this model as well. Using the T450s was... uneventful. Applications open fast, there's no stutter or lag, and despite having just 2 cores and 4 threads, and a very outdated integrated GPU, I didn't really feel like I was missing out when browsing, doing some writing and translating (before I quit and made OSNews my sole job), watching video - those sorts of tasks. This isn't a powerhouse laptop for video editing, gaming, or compiling code or whatever, but for everything else, it works great. After I had set everything up the way I like, software-wise, I did do some work to make the machine a bit more pleasant to use. First and foremost, as with any laptop or PC that's a little older, I removed the heatsink assembly, cleaned off the crusty old thermal paste, and added some new, fresh paste. I then dove into the fan management, and installed zcfan, a Linux fan control daemon for ThinkPads, using its default settings, and created a systemd service to start it at boot.
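For anyone wanting to replicate this setup, a minimal systemd unit for zcfan might look something like the sketch below. Note that the unit name, path, and options here are my own assumptions for illustration - zcfan's packaging may well ship its own unit file, so check before writing one by hand:

```ini
# /etc/systemd/system/zcfan.service (hypothetical path and location)
[Unit]
Description=zcfan ThinkPad fan control daemon
# The thinkpad_acpi kernel module must be loaded before fan control works.
After=multi-user.target

[Service]
Type=simple
ExecStart=/usr/local/bin/zcfan
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file, something like `systemctl daemon-reload` followed by `systemctl enable --now zcfan.service` would start the daemon and have it come up at every boot.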
Monogon OS is an open-source, secure, API-driven and minimal operating system unlike any other. It is based on Linux and Kubernetes, but with a clean userland rebuilt entirely from scratch. It is written in pure Go and eliminates decades' worth of legacy code and unnecessary complexity. It runs on a fleet of bare metal or cloud machines and provides users with a hardened, production-ready Kubernetes, without the overhead of traditional Linux distributions or configuration management systems. It does away with the scripting/YAML duct tape and configuration drift inherent to traditional deployments. Instead, it provides a stable API-driven platform free of vendor lock-in and with none of the drudgery. Monogon OS website This is not exactly in my wheelhouse, but I'm pretty sure some of you will be all over this concept.
Since the mid-'90s with the P6 micro-architecture for the Pentium Pro as the sixth-generation x86 microarchitecture, Intel has relied on the "Family 6" CPU ID. From there Intel has just revved the Model number within Family 6 for each new microarchitecture/core. For example, Meteor Lake is Family 6 Model 170 and Emerald Rapids is Family 6 Model 207. This CPU ID identification is used within the Linux kernel and other operating systems for identifying CPU generations for correct handling, etc. But Intel Linux engineers today disclosed that Family 6 is coming to an end "soon-ish". Michael Larabel They should revive the ix86 family name, and call the next generation i786. It sounds so much cooler, even if these names have become rather irrelevant.
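As an aside, the reason Family 6 can host model numbers like 170 and 207 is that CPUID leaf 1 splits the model across a base field and an extended field, which only get combined for families 6 and 15. Here's a small Python sketch of the documented decoding rules - the EAX values in the example are constructed by me to match the models mentioned above, not captured from real hardware:

```python
def decode_cpuid_leaf1(eax: int) -> tuple[int, int]:
    """Decode the display family and model from CPUID leaf 1's EAX register."""
    base_family = (eax >> 8) & 0xF    # bits 8-11
    base_model = (eax >> 4) & 0xF     # bits 4-7
    ext_family = (eax >> 20) & 0xFF   # bits 20-27
    ext_model = (eax >> 16) & 0xF     # bits 16-19

    # The extended family is only added when the base family is 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family

    # The extended model only counts for families 6 and 15.
    if base_family in (0x6, 0xF):
        model = (ext_model << 4) + base_model
    else:
        model = base_model
    return family, model

# A synthetic EAX value matching Meteor Lake's Family 6, Model 170 (0xAA)
print(decode_cpuid_leaf1(0x000A06A0))  # → (6, 170)
```

So "Model 170" is really extended model 0xA glued onto base model 0xA - the four-bit base model field ran out long ago, which is part of why Family 6 itself is now running out of road.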
The Snap Store, where containerized Snap apps are distributed for Ubuntu's Linux distribution, has been attacked for months by fake crypto wallet uploads that seek to steal users' currencies. As a result, engineers at Ubuntu's parent firm are now manually reviewing apps uploaded to the store before they are available. The move follows weeks of reporting by Alan Pope, a former Canonical/Ubuntu staffer on the Snapcraft team, who is still very active in the ecosystem. In February, Pope blogged about how one bitcoin investor lost nine bitcoins (about $490,000 at the time) by using an "Exodus Wallet" app from the Snap store. Exodus is a known cryptocurrency wallet, but this wallet was not from that entity. As detailed by one user wondering what happened on the Snapcraft forums, the wallet immediately transferred his entire balance to an unknown address after a 12-word recovery phrase was entered (which Exodus tells you on support pages never to do). Kevin Purdy at Ars Technica Cryptocurrency, or as I like to call it, MLMs for men, is a scammer's goldmine. It's a scam used to scam people. Add in a poorly maintained application store like Ubuntu's Snap Store, and it's a dangerous mix of incompetence and scammers. I honestly thought Canonical already nominally checked the Snap Store - as one of its redeeming features, perhaps its only redeeming feature - but it turns out anyone could just upload whatever they wanted and have it appear in the store application on every Ubuntu installation. Excellent.
In the middle of the 1980s, Apple found itself with several options regarding the future of its computing platforms. The Apple II was the company's bread and butter. The Apple III was pitched as an evolution of that platform, but was clearly doomed due to hardware and software issues. The Lisa was expensive and not selling well, and while the Macintosh aimed to bring Lisa technology to the masses, sales were slow after its initial release. Those four machines are well known, but there was a fifth possibility in the mix, named the Jonathan. In his book Inventing the Future, John Buck writes about the concept, which was led by Apple engineer Jonathan Fitch starting in the fall of 1984. Stephen Hackett So apparently, the Jonathan was supposed to be a modular computer, with a backbone you could slot all kinds of upgrades in, from either Apple or third parties. These modules would add the hardware needed to run Mac OS, Apple II, UNIX, and DOS software, all on the same machine. This is an incredibly cool concept, but as we all know, it didn't pan out. The reasons are simple: this is incredibly hard to make work, especially when it comes to the software glue that would have to make it all work seamlessly. On top of that, it just doesn't sound very Apple-like to make a computer designed to run anything that isn't from Apple itself. Remember, this is still the time of Steve Jobs, before he got kicked out of the company and founded NeXT instead. According to Stephen Hackett, the project never made it beyond the mockup phase, so we don't have many details on how it was supposed to work. It does look stunning, though.
One alternative to ESXi for home users and small organizations is Proxmox Virtual Environment, a Debian-based Linux operating system that provides broadly similar functionality and has the benefit of still being an actively developed product. To help jilted ESXi users, the Proxmox team has just added a new "integrated import wizard" to Proxmox that supports importing of ESXi VMs, easing the pain of migrating between platforms. Andrew Cunningham at Ars Technica It's of course entirely unsurprising other projects and companies were going to try and capitalise on Broadcom's horrible management of its acquisition of VMware.
After Windows Server 2025's launch, a Windows insider posted a screenshot on X showing Copilot running on Windows Server 2025, Build 26063.1. The admins discovered the feature in shock and wondered if it was a mistake on Microsoft's part. A month later, the same insider, Bob Pony, broke the news that most admins wanted to see: Copilot is gone in Windows Server 2025's Build 26085. Claudiu Andone This reminds me of Windows Server 2012, which was based on Windows 8 and launched with a Metro user interface.
Unboxing a new gadget is always a fun experience, but it's usually marred somewhat by the setup process. Either your device has been in a box for months, or it's just now launching and ships in the box with pre-release software. Either way, the first thing you have to do is connect to Wi-Fi and wait several minutes for an OS update to download and install. The issue is so common that going through a lengthy download is an expected part of buying anything that connects to the Internet. But what if you could update the device while it's still in the box? That's the latest plan cooked up by Apple, which is close to rolling out a system that will let Apple Stores wirelessly update new iPhones while they're still in their boxes. The new system is called "Presto." Ron Amadeo at Ars Technica That's a lot of engineering for a small inconvenience. Just the way I like my engineering.
Oregon Governor Tina Kotek has now signed one of the strongest US right-to-repair bills into law after it passed the state legislature several weeks ago by an almost 3-to-1 margin. Oregon's SB 1596 will take effect next year, and, like similar laws introduced in Minnesota and California, it requires device manufacturers to allow consumers and independent electronics businesses to purchase the necessary parts and equipment required to make their own device repairs. Oregon's rules, however, are the first to ban "parts pairing" - a practice manufacturers use to prevent replacement components from working unless the company's software approves them. These protections also prevent manufacturers from using parts pairing to reduce device functionality or performance or display any misleading warning messages about unofficial components installed within a device. Current devices are excluded from the ban, which only applies to gadgets manufactured after January 1st, 2025. Jess Weatherbed at The Verge Excellent news, and it wouldn't be the first time that one US state's strict (positive) laws end up benefiting all the other states since it's easier for corporations to just develop to the strictest state's standards and use that everywhere else (see California's car safety and emissions regulations for instance). As a European, I hope this will make its way to the European Union, as well.
lEEt/OS is a graphical shell and partially posix-compliant multitasking operating environment that runs on top of a DOS kernel. The latest version can be downloaded from this site. lEEt/OS is tested with FreeDOS 1.2 and ST-DOS, but it may also work with other DOS implementations. It can be compiled with the Open Watcom compiler. 8086 binaries are also available from this site. lEEt/OS website I had never heard of lEEt/OS before, but it looks quite interesting - and the new ST-DOS kernel the developer is making further adds to its uniqueness. A very cool project I'm putting on my list of operating systems to write a short 'first look' article about for y'all.
Probably the most confused looks I get from other developers when I discuss Windows and ARM64 is when I use the term "ARM64EC". They ask: is it the same thing as ARM64? Is it a different instruction set than ARM64? How can you tell if an application is ARM64 or ARM64EC? This tutorial will answer those questions by de-mystifying and explaining the difference between what can be called "classic ARM64" as it existed since Windows 10, and this new "ARM64EC" which was introduced in Windows 11 in 2021. Darek Mihocka I'm not going to steal the article's thunder, but the short of it is that the 'EC' stands for 'Emulation Compatible', meaning it can call unmodified x86-64 code. ARM64X, meanwhile, is an extended version of the Windows PE format that allows both ARM64 and emulated x86-64 code to coexist in the same binary (which is not the same as a fat binary, which is an either/or situation). There is a whole lot more to this subject - and I truly mean a lot, this is a monster of an in-depth article - so be sure to head on over and read it in full. You'll be busy for a while.
Swift is well-suited for creating user interfaces thanks to its clean syntax, static typing, and special features making code easier to write. Result builders, combined with Swift's closure expression syntax, can significantly enhance code readability. Adwaita for Swift leverages these Swift features to provide an intuitive interface for developing applications for the GNOME platform. The Swift blog It seems the Swift project is actively trying to move beyond being 'the Apple programming language'.
Intel, Microsoft, Qualcomm, and AMD have all been pushing the idea of an "AI PC" for months now as we head toward more AI-powered features in Windows. While we're still waiting to hear the finer details from Microsoft on its big plans for AI in Windows, Intel has started sharing Microsoft's requirements for OEMs to build an AI PC - and one of the main ones is that an AI PC must have Microsoft's Copilot key. Tom Warren at The Verge I lack the words in any of the languages I know to describe the utter disdain I have for this.
In an interview with Microsoft's CEO of Gaming during the annual Game Developers Conference, Spencer told Polygon about the ways he'd like to break down the walled gardens that have historically limited players to making purchases through the first-party stores tied to each console. Or, in layperson terms, why you should be able to buy games from other stores on Xbox - not just the official storefront. Spencer mentioned his frustrations with closed ecosystems, so we asked for clarity. Could he really see a future where stores like Itch.io and Epic Games Store existed on Xbox? Was it just a matter of figuring out mountains of paperwork to get there? Chris Plante at Polygon The answer is yes, Spencer claims. I don't know how realistic any of this is, but to me it makes perfect sense, and the gaming world has been moving towards it for a while now. At the moment, I'm doing something thought unthinkable until very recently: I'm playing a major Sony PlayStation exclusive, Horizon: Forbidden West, on PC, through Steam on Linux. Sony has been making its major exclusives available on Steam in recent years, and while seeing these games on Xbox might be a bit too much to ask, I wouldn't be surprised to see storefronts from companies who don't make game consoles pop up on the Xbox and PlayStation. Games have become so expensive to make that limiting them to a single console just doesn't make any commercial sense. Why limit your audience?
If you read my scoop last week, I bet you've been wondering - how well could a Snapdragon chip actually run Windows games? At the 2024 Game Developers Conference, the company claimed Arm could run those titles at close to x86/64 speed, but how fast is fast? With medium-weight games like Control and Baldur's Gate 3, it looks like the target might be: 30 frames per second at 1080p screen resolution, medium settings, possibly with AMD's FSR 1.0 spatial upscaling enabled. Sean Hollister at The Verge Those are some rough numbers for machines Qualcomm claims can run x86 games at close to full speed.
Hackaday recently published an article titled "Why x86 Needs to Die" - the latest addition to a long-running RISC vs CISC debate. Rather than x86 needing to die, I believe the RISC vs CISC debate needs to die. It should've died a long time ago. And by long, I mean really long. About a decade ago, a college professor asked if I knew about the RISC vs CISC debate. I did not. When I asked further, he said RISC aimed for simpler instructions in the hope that simpler hardware implementations would run faster. While my memory of this short, ancient conversation is not perfect, I do recall that he also mentioned the whole debate had already become irrelevant by then: ISA differences were swept aside by the resources a company could put behind designing a chip. This is the fundamental reason why the RISC vs CISC debate remains irrelevant today. Architecture design and implementation matter so much more than the instruction set in play. Chips and Cheese The number of instruction sets killed by x86 is high, and the number of times people have wrongly predicted the death of x86 - most recently, after Apple announced its first ARM processors - is even higher. It seems people are still holding on to what x86 was like in the '80s and early '90s, completely forgetting that the x86 we have today is a very, very different beast. As Chips and Cheese details in this article, the differences between x86 and, say, ARM, aren't nearly as big and fundamental as people think they are. I'm a huge fan of computers running anything other than x86, not because I hate or dislike the architecture, but because I like things that are different, and the competition they bring. That's why I love POWER9 machines, and can't wait for competitive non-Apple ARM machines to come along. If you try to promote non-x86 ISAs out of hatred or dislike of x86, history shows you'll eventually lose.
FuryGpu is a real hardware GPU implemented on a Xilinx Zynq UltraScale+ FPGA, built on a custom PCB and connected to the host computer using PCIe. Supporting hardware features equivalent to a high-end graphics card of the mid 1990s and a full modern Windows software driver stack, it can render real games of that era at beyond real-time frame rates. FuryGpu A really cool project, undertaken by a single person - who also wrote the Windows drivers for it, which was apparently the hardest part of the project, as the announcement blog post details. Another blog post explains how the texture units work.
About a year ago I came across the Previous emulator - it appeared to be a faithful simulation of the NeXT hardware and thus capable of running NeXTStep. While including it in Infinite Mac would be scope-creep, NeXT's legacy is in many ways more relevant to today's macOS than classic Mac OS. It also helped that it's under active development by its original creator (see the epic thread in the NeXT Computers forums), and thus a modern, living codebase. Previous is the fifth emulator that I've ported to WebAssembly/Emscripten and the Infinite Mac runtime, and it's gotten easier. As I'm doing this work, I'm developing more and more empathy for those doing Mac game ports - some things are really easy and others become yak shaves due to the unintended consequences of choices made by the original developers. Previous is available on multiple platforms and has good abstractions, so overall it was a pretty pleasant experience. Mihai Parparita By porting Previous to WebAssembly/Emscripten, Infinite Mac now offers access to a whole slew of NeXTSTEP releases, from the earliest known release to the last one from 1997. There's also a ton of applications added to make the experience feel more realistic. This makes Infinite Mac even more useful than it already was, ensuring it's one of the best and easiest ways to experience old macOS and now NeXTSTEP releases through virtual machines (real ones, this time), available in your browser. I'll be spending some time with these new additions for sure, since I've very little experience with NeXTSTEP other than whatever I vicariously gleaned through Steven Troughton-Smith's toots on the subject over the years. Mihai Parparita is doing incredibly important work through Infinite Mac, and he deserves credit and praise for all he's doing here.
Seven years ago, on 27 March 2017, Apple introduced one of the most fundamental changes in its operating systems since Mac OS X 10.0 Cheetah was released 16 years earlier. On that day, those who updated iOS to version 10.3 had their iPhone's storage silently converted to the first release of Apple File System, APFS. Six months later, with the release of macOS 10.13 High Sierra on 25 September, Mac users followed suit. Howard Oakley The migration from HFS+ to APFS is still an amazing feat for Apple to have pulled off. Hundreds of millions of devices converted from one filesystem to another, and barely anyone noticed - no matter how you look at it, that's an impressive achievement, and the engineers who made it possible deserve all the praise they're getting.
With KDE's 6th Mega Release finally out the door, let's reflect on the outgoing Plasma 5 that has served us well over the years. Can you believe it has been almost ten years since Plasma 5.0 was released? Join me on a trip down memory lane and let me tell you how it all began. This coincidentally continues pretty much where my previous retrospective blog post concluded. Kai Uwe It took them a few years after the release of Plasma 5.0, but eventually they won me over, and I'm now solid in the KDE camp, after well over a decade of either GNOME or Cinnamon. GNOME has strayed far too much away from just being a traditional desktop user interface, and Cinnamon is dragging its heels with Wayland support, but luckily KDE has spent a long time now clearing up so many of the paper cuts that used to plague them every time I tried KDE. That's all in the past now. They've done a solid job cleaning up a lot of the oddities and inconsistencies during Plasma 5's lifecycle, and I can't wait until Fedora 40 hits the streets with Plasma 6 in tow. In the desktop Linux world, I feel KDE and Qt will always play a little bit of second fiddle to the (seemingly) much more popular GNOME and GTK+, but that's okay - this kind of diversity and friendly competition is what makes each of these desktops better for their respective users. And this is the Linux world, after all - you're not tied down to anything your current desktop environment does, and you're free to switch to whatever else at a moment's notice if some new update doesn't sit well with you. I can't imagine using something like macOS or Windows where you have to just accept whatever garbage they throw at you with nowhere to go.
Following testing in Canary earlier this year, Google today announced that the Arm/Snapdragon version of Chrome for Windows is now rolling out to stable. Google says this version of Chrome is "fully optimized for your PC's hardware and operating system to make browsing the web faster and smoother." People who have been testing it report significant performance improvements over the emulated version. Abner Li at 9To5Google A big Windows on Snapdragon X Elite wave is about to tumble through the tech media landscape, and this Chrome release fits right into the puzzle.
Today, Canonical announced the general availability of Legacy Support, an Ubuntu Pro add-on that expands security and support coverage for Ubuntu LTS releases to 12 years. The add-on will be available for Ubuntu 14.04 LTS onwards. Long term supported Ubuntu releases get five years of standard security maintenance on the main Ubuntu repository. Ubuntu Pro expands that commitment to 10 years on both the main and universe repositories, providing enterprises and end users alike access to a vast secure open source software library. The subscription also comes with a phone and ticket support tier. Ubuntu Pro paying customers can purchase an extra two years of security maintenance and support with the new Legacy Support add-on. Canonical blog Assuming all of this respects the open source licenses of the countless software packages that make up Ubuntu, this seems like a reasonable way to offer quite a long support lifecycle for those that really need it. Such support doesn't come free, and I think it's entirely reasonable to try and get compensated for the work required in maintaining that level of support for 10 or 12 years. If you want this kind of longevity from your Linux installation without paying for it, you'll have to maintain it yourself. Seems reasonable to me.
Welcome to the 3D era! Well... sorta. Sega enjoyed quite a success with the Mega Drive so there's no reason to force developers to write 3D games right now. Just in case developers want the extra dimension, Sega adapted some bits of the hardware to enable polygon drawing as well. Hopefully, the result didn't get out of hand! Rodrigo Copetti These in-depth analyses by Copetti are always a treat, and the Saturn one is no exception.
I worked for a few years in the intersection between data science and software engineering. On the whole, it was a really enjoyable time and I'd like to have the chance to do so again at some point. One of the least enjoyable experiences from that time was to deal with big CSV exports. Unfortunately, this file format is still very common in the data science space. It is easy to understand why - it seems to be ubiquitous, present everywhere, it's human-readable, it's less verbose than options like JSON and XML, it's super easy to produce from almost any tool. What's not to like? Robin Kaveland I'm not going to pretend to be some sort of expert on this matter, but even as a casual observer it seems CSV isn't exactly scalable to large data sets. It seems to work great for smaller exports and imports for personal use, but it seems wholly unsuited for anything more complicated.
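One of the classic pain points the article touches on is easy to demonstrate: CSV carries no type information whatsoever, so everything round-trips as strings and the schema lives entirely in the consumer's head. A small self-contained Python sketch, using only the standard library:

```python
import csv
import io

# One row with an integer, a zero-padded string code, and a float.
rows = [{"id": 1, "zip": "01002", "price": 19.90}]

# Write the row out as CSV...
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "zip", "price"])
writer.writeheader()
writer.writerows(rows)

# ...and read it straight back in.
buf.seek(0)
back = list(csv.DictReader(buf))

# Every value is now a string. Whether "zip" should be parsed as a number
# (silently losing its leading zero) or "price" as a float is entirely up
# to whoever consumes the file - the format itself says nothing.
print(back[0])  # {'id': '1', 'zip': '01002', 'price': '19.9'}
```

At personal-use scale you just eyeball the columns and fix things up; at the scale of big exports, every consumer re-guessing the types independently is exactly where the pain starts.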
Complete desktops contain all operating system components as well as Internet Explorer and Outlook Express. Where possible, I have tried to include built in file transfer programs (Web Publishing Wizard, Web Folders), useful system tools (System File Checker, System Restore) and certain wizards (Network Setup Wizard, Internet Connection Wizard). As a result, some of the desktops are quite large and can take some time to load. VirtualDesktop.org These are easily loaded virtual machines inside your browser, for various versions of Windows and macOS. There's more and more of these websites now, and while I don't use them for anything, they're still quite handy in a pinch. And let's face it - it's still kind of magical to see entire operating systems running inside a browser. The website also has several virtual machines without applications, and application-specific virtual machines, too, focused on browsers and mail clients.
In "Google's First Tensor Processing Unit - Origins", we saw why and how Google developed the first Tensor Processing Unit (or TPU v1) in just 15 months, starting in late 2013. Today's post will look in more detail at the architecture that emerged from that work and at its performance. The Chip Letter People forget that Google is probably one of the largest hardware manufacturers out of the major technology companies. Sadly, we rarely get good insights into what, exactly, these machines are capable of, as they rarely make it to places like eBay so people can dissect them.
Most of the Linux world has moved to systemd by now, but there are still quite a few popular other init systems, too. One of those is the venerable SysV init, which saw a brand new release yesterday. The biggest improvement also seems like it'll enable a match made in heaven: SysVinit, but with musl. On Linux distributions which use the musl C library (instead of glibc) we can now build properly. Specifically, the hddown helper program now builds on musl C systems. SysVinit 3.09 release notes It's important that init systems like SysV init and runit don't just die off or lose steam because of the systemd juggernaut, as competition, alternatives, and different ideas are what makes open source what it is.
Version 1.06 is a more modest release than 1.05 or 1.04. But I think that's okay. v1.06 includes one new Application, three new Utilities and new features and improvements to several existing Apps and Utilities, and even some new low-level features in the KERNAL and libraries. This latest release makes use of a combination of all of the above to provide a handy new feature for users and a potentially powerful and useful feature for developers, when put to creative uses at a low-level. Discussions of just this nature have already been spurred on in the developer forums on the C64 OS Discord server. That feature is: Hidden Files. Greg Nacu C64 OS is a marvel of engineering, and what the developers are managing to squeeze out of the C64 is stunning. This article delves deep into how hidden files were implemented in the latest release.