OSnews

Link https://www.osnews.com/
Feed http://www.osnews.com/files/recent.xml
Updated 2025-07-07 08:31
Silicon Valley developers need to unionise
I don't know anything about hiring processes in Silicon Valley, or about hiring processes in general since I've always worked for myself (and still do, running OSNews, relying on your generous Patreon and Ko-Fi support), so when I ran into this horror story of applying for a position at a Silicon Valley startup, I was horrified. Apparently it's not unheard of - it might even be common? - to ask applicants for a coding position to develop a complex application, for free, without much guidance beyond some vague, generic instructions. In this case, the applicant, Jose Vargas, was applying for a position at Kagi, the search startup with the, shall we say, somewhat evangelical fanbase. After applying, he was asked to develop a complete e-mail client, either as a TUI/CLI or a web application that can view and send emails, using a fake or a real backend, which can display at least plaintext e-mails. None of this was going to be paid labour, of course. Vargas started out by sending in a detailed proposal of what he was planning to create, ending with the obvious question of what kind of response he'd get if he actually implemented the detailed proposal. He got a generic response in return, without an answer to that question, but he set out to work regardless. In the end, it took him about a week to complete the project and send it in. He eventually received a canned rejection notice in response, and after asking for clarification, the hiring manager told him they wanted something "simpler and stronger", so he didn't make the cut. I'm not interested in debating whether or not Vargas was suited for the position, or if the unpaid work he sent in was any good. What I do want to talk about, though, is the insane amount of unpaid labour applicants are apparently asked to do in Silicon Valley, the utter lack of clear and detailed instructions, and how the hiring manager didn't answer the question Vargas sent in alongside his detailed proposal. 
After all, the hiring manager could've saved everyone a ton of time by letting Vargas know upfront the proposal wasn't what Kagi was looking for. Everything about this feels completely asinine to me. As a (former) translator, I'm no stranger to having to do some work to give a potential client an idea of what my work looks like, but more than half a page of text to translate was incredibly rare. Only on a few rare occasions did a prospective client want me to translate more than that, and in those cases it was always as paid labour, at the normal, regular rate. For context, half a page of text is less than half an hour of work - a far cry from a week's worth of unpaid labour. I've read a ton of online discourse about this particular story, and there's no clear consensus on whether or not Vargas' feelings are justified. Personally, I find the instructions given by Kagi overly broad and vague, the task of creating an email client to be overly demanding, and the canned ("AI"?) responses by the hiring manager insulting - after sending in such a detailed proposal, it should've been easy for a halfway decent hiring manager to realise Vargas might not be a good fit for the role, and tell him so before he started doing any work. Kagi is fully within its rights to determine who is and is not a good fit for the company, and who they hire is entirely up to them. If such stringent, demanding hiring practices are par for the course in Silicon Valley, I also can't really fault them for toeing the industry line. The hiring manager's behaviour seems problematic, but everyone makes mistakes and nobody's perfect. In short, I'm not even really mad at Kagi specifically here. However, if such hiring practices are indeed the norm, can I, as an outsider, just state the obvious? What on earth are you people doing to each other over there in Silicon Valley? Is this really how you want to treat potential applicants, and how you, yourself, want to be treated? 
Imagine if someone applied to be a retail clerk at a local supermarket, and the supermarket's hiring manager asked the applicant to work an entire week in the store, stocking shelves and helping shoppers, without paying the person any wages, only to deny their application after the week of free labour is over. You all realise how insane that sounds, right? Why not look at a person's previous work, hosted on GitHub or any of its alternatives? Why not contact their previous employers and ask about their performance there, as happens in so many other industries? Why, instead of asking someone to craft an entire email client, don't you just give them a few interesting bugs to look at that won't take an entire week of work? Why not, you know, pay for their labour if you demand a week's worth of work? I'm so utterly baffled by all of this. Y'all developers need a union.
E-COM: the $40 million USPS project to send email on paper
How do you get email to the folks without computers? What if the Post Office printed out email, stamped it, dropped it in folks' mailboxes along with the rest of their mail, and saved the USPS once and for all? And so in 1982 E-COM was born - and, inadvertently, helped coin the term "e-mail." Justin Duke The implementation of E-COM was awesome. You'd enter the messages on your computer and send them to the post office using a TTY or IBM 2780/3789 terminal, to Sperry Rand Univac 1108 computer systems at one of 25 post offices. Postal staff would print the messages and send them through the regular postal system to their recipients. The USPS actually tried to get a legal monopoly on this concept, but the FCC fought them in court and won out. E-COM wasn't the breakout success the USPS had hoped for, but it did catch on in one, unpleasant way: spam. The official-looking E-COM envelopes from the USPS were very attractive to junk mail companies, and it was estimated that about six companies made up 70% of the total E-COM volume of 15 million messages in its second year of operation. The entire article is definitely recommended reading, as it contains a ton more information about E-COM and some of the other attempts by USPS to ride the coattails of the computer and internet revolution, including the idea to give every US resident an @.us e-mail address. Wild.
Microsoft blinks, extends Office support for Windows 10 by three years
At the start of this year, Microsoft announced that, alongside the end of support for Windows 10, it would also end support for Office 365 (it's called Microsoft 365 now but that makes no sense to me) on Windows 10 around the same time. The various Office applications would continue to work on Windows 10, of course, but would no longer receive bug fixes, security plugs, and so on. Well, it seems Microsoft experienced some pushback on this one, because it just extended this end-of-support deadline for Office 365 on Windows 10 by an additional three years. To help maintain security while you transition to Windows 11, Microsoft will continue providing security updates for Microsoft 365 Apps on Windows 10 for three years after Windows 10 reaches end of support. These updates will be delivered through the standard update channels, ending on October 10, 2028. Microsoft support article The reality is that the vast majority of Windows users are still using Windows 10, and despite countless shady shenanigans and promises of "AI" bliss, there's relatively little movement in the breakdown between Windows 10 and Windows 11 users. As such, the idea that Microsoft would just stop fixing security issues and bugs in Office on Windows 10 a few months from now seemed preposterous from the outset, and that seems to have penetrated the walls of Microsoft's executives, too. The real question now is: will Microsoft extend the same courtesy to Windows 10 itself? The clock is ticking: there are only a few months left to go before support for Windows 10 ends, leaving 60-70% of Windows users without security fixes and updates. If they blinked with Office, why wouldn't they blink with Windows 10, too? Who dares to place a bet?
Cracking the Dave & Buster’s anomaly
Let's dive into a peculiar bug in iOS. And by that I mean, let's follow along as Guilherme Rambo dives into a peculiar bug in iOS. The bug is that, if you try to send an audio message using the Messages app to someone who's also using the Messages app, and that message happens to include the name "Dave & Buster's", the message will never be received. Guilherme Rambo As I read this first description of the bug, I had no idea what could possibly be causing this. However, once Rambo explained that every audio message is transcribed by Apple into a text version, I immediately assumed what was going on: that "and" is throwing up problems because the actual name of the brand is stylised with an ampersand, isn't it? It's always DNS - or, in this case, HTML - isn't it? Yes. Yes it is. MessagesBlastDoorService uses MBDXMLParserContext (via MBDHTMLToSuperParserContext) to parse XHTML for the audio message. Ampersands have special meaning in XML/HTML and must be escaped, so the correct way to represent the transcription in HTML would have been "Dave &amp; Buster's". Apple's transcription system is not doing that, causing the parser to attempt to detect a special code after the ampersand, and since there's no valid special code nor semicolon terminating what it thinks is an HTML entity, it detects an error and stops parsing the content. Guilherme Rambo It must be somewhat of a relief to programmers and developers the world over that even a company as large and filled with talented people as Apple can run into bugs like this.
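The failure mode Rambo describes is easy to reproduce with any strict XML parser. Here's a minimal sketch using Python's standard-library `xml.etree` (standing in for Apple's MBDXMLParserContext, which we obviously can't call directly):

```python
import xml.etree.ElementTree as ET

# An unescaped ampersand makes the markup ill-formed, so a strict
# XML parser refuses the whole document...
bad = "<p>Dave & Buster's</p>"
try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("parse error:", err)

# ...while the properly escaped entity parses fine, and the entity
# is decoded back to a plain ampersand in the text content.
good = "<p>Dave &amp; Buster's</p>"
print(ET.fromstring(good).text)  # Dave & Buster's
```

The apostrophe, notably, is harmless in text content; only the bare `&` (and `<`) must be escaped, which is exactly the detail Apple's transcription step missed.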
Crosscompiling for OpenBSD arm64
Following on from OpenBSD/arm64 on QEMU, it's not always practical to compile userland software or a new kernel on some systems, particularly small SoCs with limited space and memory - or indeed QEMU, in fear of melting your CPU. There are two scenarios here - the first, if you are looking for a standard cross-compiler for Aarch64, and the second if you want an OpenBSD-specific environment. Daniel Nechtan Exactly what it says on the tin.
Linux removes support for the 486, and now I’m curious what that means for Vortex86 processors
I had to dig through our extensive archive - OSNews was founded in 1997, after all - to see if we reported on it at the time, but it turns out we didn't: in 2006, Intel announced that in 2007, it would cease production of a range of old chips, including the 386 and 486. In Product Change Notification 106013-01, Intel proclaimed these chips dead. Intel Corporation has been manufacturing its MCS 51, MCS 251 and MCS 96 Microcontroller Product Lines for over 25 years now, and the Intel 186 Processor Families, the Intel 386 Processor Families and the Intel 486 Processor Families for over 15 years now. Additionally, we have been manufacturing the i960 32 Bit RISC Processor Families for over 15 years. However, at this time, the forecasted volumes for these product lines are now too low to continue production of these products beyond the year 2007. Therefore, Intel will cease manufacturing silicon wafers for our 6'' based processes in 2007. Affected products include Intel's MCS 51, MCS 251, MCS 96, 80X18X, 80X38X, 80X486DXX, the i960 Family of Microcomputers, in addition to the 82371SB, 82439TX and the 82439HX Chipsets. Intel has no choice but to issue a Product Discontinuance Notice (PDN) effective 3/30/06. Last time orders will be accepted till 3/30/07 with last time ship dates of 9/28/07. Intel Product Change Notification 106013-01 Considering the 386, 486, and i960 families of processors were only used for niche embedded applications at very low volumes at that point in time, it made sense to call it quits. We're 18 years down the line now, and I don't think anyone really mourns the end of production for these processors. Windows ended support for these chips well before the 2007 end of production date, with Windows 2000 being the last Windows version that would run on a 486, albeit only barely, since it officially required a Pentium processor. Linux, though, continued to support the 486, but that, too, is now coming to an end. 
In a patch submitted to the LKML by Ingo Molnar, support for "a variety of complicated hardware emulation facilities" for x86-32 will be removed, effectively ending support for 486 and very early 586 processors by raising the minimum required hardware features to include TSC and CX8 (CMPXCHG8B) support. Linus Torvalds has expressed interest in removing support for the 486 back in 2022, so this move doesn't come as a huge surprise. While most tech news outlets leave it at that, as I was reading this news, I immediately thought of the Vortex86 line of processors and what this would mean for Linux support for those processors. In case you're unaware, the Vortex86 is a line of x86-32-compatible processors, originating at SiS, but now developed and produced by DMP Electronics in Taiwan. The last two variants were the Vortex86DX3, a dual-core chip running at 1 GHz, and the Vortex86EX2, a chip with two asymmetrical cores that can run two operating systems at once. Their platform support documents for Windows and Linux are from 2021, so we can't rely on those for more information. Digging through some of the documentation from ICOP, who sell industrial PCs based on the latest Vortex86DX3, I think support in modern kernels is very much hit and miss even before this news. All Vortex86 processors are supposedly i586 (with later variants being i686, even), but some of the earlier versions were compatible with the 486SX. On top of that, Linux 4.14 seems to be the last kernel that supports any of these chips out-of-the-box based on the documentation by DMP - but then, if you go back to ICOP, you'll find news items about Linux 5.16 adding better support for Vortex86, so I'm definitely confused. My uneducated guess is that the DX3 and EX2 will probably work even after these changes to the Linux kernel, but earlier models might have more problems. 
Even on the LKML I can find messages from the kind of people who know their stuff who don't know all the ins and outs of these Vortex86 processors, and which instructions they actually support. It won't matter much for people relying on Vortex86 processors in industrial and commercial settings, though, since they tend to use custom stacks built by the vendor, so they're going to be just fine. What's more interesting is what I assume is a small enthusiast market using Vortex86 processors who might want to run modern Linux kernels on them. I have a feeling these code removals might lead to some issues, especially on the earlier models, meaning you'll have to use older kernels. I've always been fascinated by the Vortex86 line of processors, and on numerous occasions I've hovered over the buy button on some industrial PC using the Vortex86DX3 (or earlier) processor. Let me know if you're interested in seeing what this chip can do, and if there's enough interest, I can see if I can set a Ko-Fi goal to buy one of these and mess around with Windows Embedded/CE, Linux, and god knows what else these things can be made to run.
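If you're wondering whether your own oddball x86 chip clears the new bar, the kernel-advertised feature flags tell you. A small sketch of the check (the helper name and sample strings are mine, not from the kernel patch):

```python
def meets_new_minimum(cpuinfo_text: str) -> bool:
    """Given the text of /proc/cpuinfo, report whether the CPU
    advertises the TSC and CX8 (CMPXCHG8B) features that the
    kernel will now require on x86-32."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return {"tsc", "cx8"} <= flags
    return False  # no flags line at all: assume it doesn't qualify

# A Pentium-class chip advertises both flags...
pentium = "processor : 0\nflags : fpu tsc msr cx8 cmov\n"
print(meets_new_minimum(pentium))   # True

# ...while a 486SX-class chip would lack them.
old_486 = "processor : 0\nflags : fpu vme\n"
print(meets_new_minimum(old_486))   # False
```

On a real Linux box you'd feed it `open("/proc/cpuinfo").read()`; on a Vortex86 this is exactly the question nobody on the LKML seems able to answer from memory.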
A brief history of the numeric keypad
The title is a lie. This isn't brief at all. Picture the keypad of a telephone and calculator side by side. Can you see the subtle difference between the two without resorting to your smartphone? Don't worry if you can't recall the design. Most of us are so used to accepting the common interfaces that we tend to overlook the calculator's inverted key sequence. A calculator has the 7-8-9 buttons at the top whereas a phone uses the 1-2-3 format. Subtle, but puzzling since they serve the same functional goal - input numbers. There's no logical reason for the inversion if a user operates the interface in the same way. Common sense suggests the reason should be technological constraints. Maybe it's due to a patent battle between the inventors. Some people may theorize it's ergonomics. With no clear explanation, I knew history and the evolution of these devices would provide the answer. Which device was invented first? Which keypad influenced the other? Most importantly, who invented the keypad in the first place? Francesco Bertelli and Manoel do Amara Sometimes, you come across articles that are one-of-a-kind, and this is one of them. Very few people would go to this length to document such a particular thing most people find utterly insignificant, but luckily for us, Francesco Bertelli and Manoel do Amara went all the way with this one. If you want to know anything about the history of the numerical pad and its possible layouts, this is the place to go. What I've always found fascinating about numerical pads is how effortlessly the brain can switch between the two most common layouts without really batting an eye. Both layouts seem so ingrained in my brain that it feels like there's barely any context-switching involved, and my fingers just effortlessly flow to the correct numbers. Considering numbers tend to confuse me, I wouldn't have been at all surprised to find myself having issues switching between the two layouts. 
What makes this even more interesting is when I consider the number row on the keyboard - you know, 1 through 0 - because there I do tend to have a lot of issues finding the right numbers. I don't mean it takes seconds or anything like that, but I definitely experience more hiccups working with the number row than with a numerical keypad of either layout.
A brief history of the BSD Fast FileSystem
We're looking at an article from 2007 here, but I still think it's valuable and interesting, especially from a historical perspective. I first started working on the UNIX file system with Bill Joy in the late 1970s. I wrote the Fast File System, now called UFS, in the early 1980s. In this article, I have written a survey of the work that I and others have done to improve the BSD file systems. Much of this research has been incorporated into other file systems. Marshall Kirk McKusick Variants of UFS are still the default file system in at least NetBSD and OpenBSD, and it's one of the two default options in FreeBSD (alongside ZFS). In other words, this article, and the work described therein, is still relevant to this very day.
Microsoft unveils the new Start menu for Windows 11 users
I think one of the more controversial parts of Windows 11 - aside from its system requirements, privacy issues, crapware, and "AI" nonsense - is its Start menu. I've heard so many complaints about how it's organised, its performance, the lack of customisation, and so on. Microsoft heard those complaints, and has unveiled the new Start menu that'll be shipping to Windows 11 soon - and I have to say, there's a ton of genuine improvements here that I think many of you will be happy with. First and foremost, the "all applications" view, that until now has been hidden behind a button, will be at the top level, and you can choose between a category view, a grid view, and a list view. This alone makes the Windows 11 Start menu so much more usable, and will be more than enough to make a lot of users want to upgrade, I'm sure. Second, customisation is taken a lot more seriously in this new incarnation of the Start menu. You can actually shrink, or completely remove, sections you're not using. If you're not interested in those recommendations, you can just remove that section. Don't want to use the feature where you pin applications to the Start menu? Remove that section. This, too, seems to address common complaints, and I'm glad Microsoft is fixing this. Then there's the rest. Microsoft is promising this new Start menu will perform better, which better be true because I've seen some serious lag and delays on incredibly powerful hardware. The recommendations have been improved as well, in case you care about those, and there's a new optional mobile panel that you can slide out, which contains everything related to your phone. Personally, I'm a classic Start menu kind of person - on all my machines (which all run Fedora KDE), I use a classic, very traditional cascading menu that contains nothing but application categories and their respective applications, and nothing more. Still, were I forced to use Windows, these improvements are welcome, and they seem genuine.
Chromium to use “AI” to combat the spam notifications it helped create
Notifications in Chrome are a useful feature to keep up with updates from your favorite sites. However, we know that some notifications may be spammy or even deceptive. We've received reports of notifications diverting you to download suspicious software, tricking you into sharing personal information or asking you to make purchases on potentially fraudulent online store fronts. To defend against these threats, Chrome is launching warnings of unwanted notifications on Android. This new feature uses on-device machine learning to detect and warn you about potentially deceptive or spammy notifications, giving you an extra level of control over the information displayed on your device. Hannah Buonomo and Sarah Krakowiak Criel on the Chromium Blog So first web browser makers introduce notifications, a feature nobody asked for and everybody hates, and now they're using "AI" to combat the spam they themselves enabled and forced onto everyone? Don't we have a name for a business model where you purport to protect your clients from threats you yourself pose? Turning off notifications is one of the first things I do after installing a browser. I do not ever want any website sending me a notification, nor do I want any of them to ask me for permission to do so. They're such an obvious annoyance and massive security threat, and it's absolutely mindboggling to me we just accept them as a feature we have to live with. I genuinely wish browsers like Firefox, which claim to protect your privacy, would just have the guts to be opinionated and rip shit features like this straight out of their browser. Using "AI" to combat spam notifications instead of just turning notifications off is peak techbro.
Xtool: cross-platform Xcode replacement for Linux, Windows, and macOS
A few months ago I shared my Swift SDK for Darwin, which allows you to build iOS Swift Packages on Linux, amongst other things. I mentioned that a lot of work still needed to be done, such as handling codesigning, packaging, and bundling. I'm super excited to share that we've finally reached the point where all of these things are now possible with cross-platform, open source software. Enter, xtool! This means it's finally possible to build and deploy iOS apps from Linux and Windows (WSL). At the same time, xtool is SwiftPM-based and fully declarative, which means you can also use it to replace Xcode on macOS for building iOS software! kabiroberai While this is obviously an impressive piece of engineering that's taken countless years to fully put together, the issue this doesn't address is Apple's licensing terms when it comes to Xcode and development for Apple's platforms. The Apple Developer Program License Agreement clearly forbids installing Xcode and the Apple SDK on non-Apple branded devices, and as this new xtool requires you to download Xcode.xip and use it, it seems it violates these terms. Now, as far as I'm concerned, these terms are idiotic and should be 100% illegal, but if you're an Apple developer who relies on your Apple developer account to make money, using a tool like this definitely has the potential to put your developer account at risk. For experimentation, sure, this is great, but for any official work I would be quite wary until Apple makes some sort of statement about the matter, which is highly unlikely to happen. Perhaps the courts can, at some point, have a say here - especially in the EU - but even then, Apple can always find or manufacture some reason to terminate your account if they really want to. If you want to develop on your own terms, perhaps developing for Apple platforms is not what you should be doing.
A formal analysis of Apple’s iMessage PQ3 protocol
We present the formal verification of Apple's iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple's identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!). Felix Linker and Ralf Sasse Weekend, light reading, you know how this works by now. Light some candles, make some tea, get comfy.
Even John Siracusa thinks Tim Cook should step down
John Siracusa, one third of the excellent ATP podcast, developer of several niche Mac utilities, and author of some of the best operating system reviews of all time, has called for Apple's CEO, Tim Cook, to step down. Now, countless people call for Tim Cook to stand down all the time, but when someone like Siracusa, an ardent Mac user since the release of the very first Macintosh and a staple of the Apple community, makes such a call, it carries a bit more weight. His main argument is not particularly surprising to anyone who's been keeping tabs on the Apple community, and the Apple developer community in particular: Apple seems to no longer focus on making great products, but on making money. Every decision made by Apple's leadership team is focused solely on extracting as much money from consumers and developers, instead of on making the best possible products. The best leaders can change their minds in response to new information. The best leaders can be persuaded. But we've had decades of strife, lawsuits, and regulations, and Apple has stubbornly dug in its heels even further at every turn. It seems clear that there's only one way to get a different result. In every healthy entity, whether it's an organization, an institution, or an organism, the old is replaced by the new: CEOs, sovereigns, or cells. It's time for new leadership at Apple. The road we're on now does not lead anywhere good for Apple or its customers. It's springtime, and I'm choosing to believe in new life. I swear it's not too late. John Siracusa I reached this same point with Apple a long, long time ago. I was an ardent Mac user during the PowerPC G4 and G5 days, lasting into the early Intel days. However, as the iPhone and related services took over as Apple's primary source of income, I felt that Mac OS X, which I once loved and enjoyed so much, started to languish, and it's been downhill for Apple's desktop operating system ever since. 
Whenever I have to help my parents with their computers - modern M1 and M2 Macs - I am baffled and saddened by just how big of a convoluted, disjointed, and unintuitive mess macOS has become. I long ago stopped caring about whatever products Apple releases or updates, because I feel like as a user who genuinely cares about his computing experience, Apple simply doesn't make products for me. I'm not sure replacing Tim Cook with someone else will really change anything about Apple's priorities; in the end, it's a publicly traded corporation that thinks it needs to please shareholders, and a focus on great products instead of money isn't going to help with that. Apple long ago stopped being the beleaguered company many of its most ardent fans still seem convinced that it is, and it's now one of those corporate monoliths that can make billions more overnight by squeezing just a bit more out of developers or users, regardless of what that squeezing does to the user experience. Apple is still selling more devices than ever, and it's still raking in more gambling gains through digital slot machines for children, and as long as that's the case, replacing Tim Cook won't do a goddamn thing.
“AI” automated PR reviews mostly useless junk
The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding "AI" into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning. About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful. This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years - it's a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generating debug print statements - for that "typing helper" work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me. Martin Pitt "AI" companies and other proponents of "AI" keep telling us that these tools will save us time and make things easier, but every time someone actually sits down and does the work of testing "AI" tools out in the field, the end results are almost always the same: they just don't deliver the time savings and other advantages we're being promised, and more often than not, they just create more work for people instead of less. Add in the financial costs of using and running these tools, as well as the energy they consume, and the conclusion is clear. When the lack of effectiveness of "AI" tools out in the real world is brought up, proponents inevitably resort to "yes it sucks now, but just you wait on the next version!" 
Then that next version comes, people test it out in the field again, and it's still useless, and those same proponents again resort to "yes it sucks now, but just you wait on the next version!", like a broken record. We're several years into the hype, and that mythical "next version" still isn't here. We're several years into the "AI" hype, and I still have seen no evidence it's not a dead end and a massive con.
Google requires Android applications on Google Play to support 16 KB page sizes
About a year ago, we talked about the fact that Android 15 became page size-agnostic, supporting both 4 KB and 16 KB page sizes. Google was already pushing developers to get their applications ready for 16 KB page sizes, which means recompiling for 16 KB alignment and testing on a 16 KB version of an Android device or simulator. Google is taking the next step now, requiring that every application targeting Android 15 or higher submitted to Google Play after 1 November 2025 must support a page size of 16 KB. This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases. Dan Brown on the Android Developers Blog This is mostly only relevant for developers instead of users, but in the extremely unlikely scenario that one of your favourite applications cannot be made to work with 16 KB page sizes for some weird reason, or the developer refuses to support it for some even weirder reason, you might have to say goodbye to that application if you use Android 15 or higher. This is absurdly unlikely, but I wouldn't be surprised if it happens to at least one application. If that happens, I want to know which application that is, and ask the developer for their story.
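The underlying portability rule is simple: never hardcode the 4 KB page size; query it at runtime and do your alignment math against the queried value. A small illustration in Python (not Android-specific, and the helper function is my own, not part of any Google API):

```python
import os

# Ask the OS for the actual page size instead of assuming 4096;
# on a 16 KB-page device the hardcoded constant silently misaligns
# any page-granular buffer or mapping.
page_size = os.sysconf("SC_PAGE_SIZE")

def round_up_to_page(n: int, page: int = page_size) -> int:
    """Round an allocation size up to a whole number of pages."""
    return (n + page - 1) // page * page

# The same 5000-byte request needs 8192 bytes with 4 KB pages,
# but 16384 bytes with 16 KB pages.
print(round_up_to_page(5000, 4096))    # 8192
print(round_up_to_page(5000, 16384))   # 16384
```

Native Android code does the equivalent with `getpagesize()`/`sysconf(_SC_PAGESIZE)`; the recompile Google demands is largely about ELF segments being aligned to 16 KB so the loader can map them on such devices.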
Introducing Mac Themes Garden
I've "launched" the Mac Themes Garden! It is a website showcasing more than 3,000 (and counting) Kaleidoscope themes from the Classic Mac era, ready to be seen, downloaded and explored! Check it out! Oh, and there also is an RSS feed you can subscribe to see themes as they are added/updated! Damien Erambert If you've spent any time on retrocomputing-related social media channels, you've definitely seen the old classic Mac OS themes in your timeline. They are exquisitely beautiful artifacts of a bygone era, and the work Damien Erambert has been doing to make these easily available and shareable, entirely in his free time, is awesome and a massive service to the retrocomputing community. The process to get these themes loaded up onto the website is actually a lot more involved than you might imagine. It involves a classic Mac OS virtual machine, applying themes, taking screenshots, collecting creator information, and adding everything to a database. This process is mostly manual, and Erambert estimates he's about halfway done. If you have classic Mac OS running somewhere, on real hardware or in a virtual machine, you can now easily theme it to your heart's content.
Reverse-engineering Fujitsu M7MU RELC hardware compression
This is a follow-up to the Samsung NX mini (M7MU) firmware reverse-engineering series. This part is about the proprietary LZSS compression used for the code sections in the firmware of Samsung NX mini, NX3000/NX3300 and Galaxy K Zoom. The post is documenting the step-by-step discovery process, in order to show how an unknown compression algorithm can be analyzed. The discovery process was supported by Igor Skochinsky and Tedd Sterr, and by writing the ideas out on encode.su. Georg Lukas It's not quite the weekend yet, but here's some light reading ahead of time.
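The article reconstructs Fujitsu's specific bit layout; as a generic illustration of how LZSS itself works (a control byte whose bits select between literals and back-references; the 12-bit offset / 4-bit length split below is the textbook variant, not necessarily the M7MU's), a decoder fits in a few lines:

```python
def lzss_decompress(data: bytes) -> bytes:
    """Decode a toy LZSS stream: each control byte governs up to 8 tokens
    (LSB first); bit=1 means a literal byte follows, bit=0 means a two-byte
    back-reference with a 12-bit offset and a 4-bit length (stored as len-3)."""
    out = bytearray()
    i = 0
    while i < len(data):
        control = data[i]
        i += 1
        for bit in range(8):
            if i >= len(data):
                break
            if (control >> bit) & 1:        # literal
                out.append(data[i])
                i += 1
            else:                            # back-reference into the output
                b1, b2 = data[i], data[i + 1]
                i += 2
                offset = b1 | ((b2 >> 4) << 8)
                length = (b2 & 0x0F) + 3
                for _ in range(length):      # byte-by-byte: copies may overlap
                    out.append(out[-offset])
    return bytes(out)

# Two literals 'A','B', then a 6-byte copy from 2 bytes back: "ABABABAB"
print(lzss_decompress(b"\x03AB\x02\x03"))
```

Reverse-engineering an unknown variant mostly means discovering where exactly those control bits, offsets, and lengths sit, which is what the post walks through.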
Microsoft changes pre-production driver signing, ends the device metadata service
As the headline suggests, we're going to be talking about some very dry Windows stuff that only affects a relatively small number of people, but for those people this is a big deal they need to address. If you're working on pre-production drivers that need to be signed, this is important to you. The Windows Hardware Program supports partners signing drivers for use in pre-production environments. The CA that is used to sign the binaries for use in pre-production environments on the Windows Hardware Program is set to expire in July 2025, following which a new CA will be used to sign the preproduction content starting June 9, 2025. Hardware Dev Center Alongside the new CA come a bunch of changes to the rules. First and foremost, expiry of signed drivers will no longer be tied to the expiry of the underlying CA, so any driver signed with the new CA will not expire, regardless of what happens to the CA. In addition, on April 22, May 13, and June 10, 2025, Windows servicing releases (4D/5B/6B) will be shipped to Windows versions (down to Windows Server 2008) to replace the old CAs with the new ones. As such, if you're working on pre-production drivers, you need to install those Latest Cumulative updates. On a very much related note, Microsoft has announced it's retiring device metadata and the Windows Metadata and Internet Services (WMIS). This is what allowed OEMs and device makers to include things like device names, custom device icons, and other information in the form of an XML file. While OEMs can no longer create new device metadata this way, existing metadata already installed on Windows clients will remain functional. As a replacement for this functionality, Microsoft points to the driver's INF files, where such information and icons can also be included. Riveting stuff.
openSUSE removes Deepin from its repositories after long string of security issues and unauthorised security bypass
The openSUSE team has decided to remove the Deepin Desktop Environment from openSUSE, after the project's packager for openSUSE was found to have added a workaround specifically to bypass various security requirements openSUSE has in place for RPM packages. Recently we noticed a policy violation in the packaging of the Deepin desktop environment in openSUSE. To get around security review requirements, our Deepin community packager implemented a workaround which bypasses the regular RPM packaging mechanisms to install restricted assets. As a result of this violation, and in the light of the difficult history we have with Deepin code reviews, we will be removing the Deepin Desktop packages from openSUSE distributions for the time being. Matthias Gerstner Matthias Gerstner goes into great detail to lay out every single time the openSUSE team found massive, glaring security issues in Deepin, and the complete lack of adequate responses from the Deepin upstream team over the past 8 or so years. It's absolutely shocking to see how utterly lax the Deepin developers have been regarding the security of their desktop environment and its dependencies, and the openSUSE team could really only come to one harsh conclusion: Deepin has no security culture whatsoever, and it's extremely likely that every corner of the Deepin code is riddled with very serious security issues. As such, despite the relatively large number of Deepin users on openSUSE, the team has decided to remove Deepin from openSUSE entirely, instead pointing users to a third-party repository if they desire to keep using Deepin. I think this is the best possible option in this situation, but it's not exactly ideal. After reading this entire saga, however, I don't think anyone who cares about security should be using Deepin. Of course, I doubt this will be the end of the story. What about all the other Linux distributions out there?
The security issues in Deepin itself are most likely also present in Debian, Fedora, and other distributions who have the Deepin Desktop Environment in their repositories, but what about the workaround to bypass packaging security practices? Does that exist elsewhere as well? I think we're about to find out.
curl bans “AI” security reports as Zuckerberg claims we’ll all have more “AI” friends than real ones
Daniel Stenberg, creator and maintainer of curl, has had enough of the neverending torrent of "AI"-generated security reports the curl project has to deal with. That's it. I've had it. I'm putting my foot down on this craziness. 1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: "Did you use an AI to find the problem or generate this submission?" (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions) 2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help. Daniel Stenberg This is the real impact of "AI": streams of digital trash real humans have to clean up. While proponents of "AI" keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what "AI" is really doing is creating more work for others to deal with by barfing useless garbage into other people's backyards. It's like the digital version of the western world sending its trash to third-world countries to deal with. The best possible sign that "AI" is a toxic trash heap you wouldn't want to have anything to do with is the people fighting for team "AI". In Zuckerberg's vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.
Meghan Bobrowsky at the WSJ Mark Zuckerberg, who built his empire by using people's photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists' email accounts because they were about to publish a negative story about him, who called Facebook users "dumb fucks" for entrusting their personal information to him, is at the forefront fighting for "AI". If that isn't the ultimate proof there's something deeply wrong and ethically unsound about "AI", I don't know what is.
TDE’s Qt 3 fork drops the 3
The Trinity Desktop Environment, the continuation of the final KDE 3.x release updated and maintained for modern times, consists of more than just the KDE bits you may think of. The project also maintains a fork of Qt 3 called TQt3, which it obviously needs to be able to work on and improve TDE itself, which is based on it. In the beginning, this fork consisted mainly of renaming things, but in recent years, more substantial changes meant that the code diverged considerably from the original Qt 3. As such, a small name change is in order. TQt3 was born as a fork of Qt3 and for many years it was little more than a mere renaming effort. Over the past few years, many changes were made and the code has significantly diverged from the original Qt3, although still sharing the same roots. With more changes planned ahead and with the intention of better highlighting such difference, the TDE team has decided to drop the '3' from the repository name, which is now simply called TQt. TDE on Mastodon The effect this has on users is rather minimal - users of the current 14.1.x release branch will still see 3s around in file paths and package names, but in future 14.2.x releases, all of these will have been removed, completing the transition. This seems like a small change, and that's because it is, but it's interesting simply because it highlights that a project that seems relatively straightforward on the outside - maintain and carefully modernise the final KDE 3.x release - encompasses a lot more than that. Maintaining an entire Qt 3 fork certainly isn't a small feat, but it's kind of required to keep a project like TDE going.
VectorVFS: your filesystem as a vector database
VectorVFS is a lightweight Python package that transforms your Linux filesystem into a vector database by leveraging the native VFS (Virtual File System) extended attributes. Rather than maintaining a separate index or external database, VectorVFS stores vector embeddings directly alongside each file, turning your existing directory structure into an efficient and semantically searchable embedding store. VectorVFS supports Meta's Perception Encoders (PE) which includes image/video encoders for vision language understanding, it outperforms InternVL3, Qwen2.5VL and SigLIP2 for zero-shot image tasks. We support both CPU and GPU but if you have a large collection of images it might take a while the first time to embed all items if you are not using a GPU. Christian S. Perone It won't surprise many of you that this goes a bit above my paygrade, but according to my limited understanding, VectorVFS stores information about files inside the xattr part of inodes. The information being stored is converted into vectors first, and this is the part that breaks my brain a bit, because vectors in this context are far too complex for me to understand. I vaguely understand the end result here - making files searchable using vector magic without using a dedicated database or separate files by using extended attributes in inodes - but the process is far more complicated to understand. It still seems like a very interesting approach, though, and I'd love for people smarter than me to take VectorVFS apart and explain it in easier terms for those of us who don't fully grasp it.
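The core trick is easy to sketch. In the sketch below, the attribute name `user.vectorvfs.embedding` is my own placeholder rather than VectorVFS's actual key, and the three-element vector stands in for a real model embedding: serialize the vector to bytes, attach it to the file with `os.setxattr`, and compare embeddings with cosine similarity:

```python
import math
import os
import struct
import tempfile

def pack_embedding(vec):
    # Serialize a float vector to little-endian float32 bytes,
    # the kind of blob you can stash in an extended attribute.
    return struct.pack(f"<{len(vec)}f", *vec)

def unpack_embedding(blob):
    return list(struct.unpack(f"<{len(blob) // 4}f", blob))

def cosine(a, b):
    # Cosine similarity: 1.0 means "semantically identical" embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

emb = [0.1, 0.9, -0.3]
blob = pack_embedding(emb)

# Attach the blob to a file as a user xattr; user.* attributes need
# filesystem support, so fall back gracefully when the fs refuses them.
with tempfile.NamedTemporaryFile() as f:
    try:
        os.setxattr(f.name, "user.vectorvfs.embedding", blob)
        blob = os.getxattr(f.name, "user.vectorvfs.embedding")
    except OSError:
        pass  # e.g. a filesystem without user xattr support

restored = unpack_embedding(blob)
print([round(x, 3) for x in restored], round(cosine(emb, restored), 3))
```

A search is then just "read every file's xattr, compute cosine similarity against the query embedding, sort" - no separate index file or database process involved.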
Redox gets services management, completes userspace process manager
Can someone please stop these months from coming and going, because I'm getting dizzy with yet another monthly report of all the progress made by Redox. Aside from the usual swath of improvements to the kernel, relibc, drivers, and so on, this month saw the completion of the userspace process manager. In monolithic kernels this management is done in the kernel, resulting in necessary ambient authority, and possibly constrained interfaces if a stable ABI is to be guaranteed. With this userspace implementation, it will be easier to manage access rights using capabilities, reduce kernel bugs by keeping it simpler, and make changes where both sides of the interface can be updated simultaneously. Ribbon and Ron Williams Students at Georgia Tech have been hard at work this winter on Redox as well, building a system health monitoring and recovery daemon and user interface. The Redox team has also done a lot of work to improve the build infrastructure, fixing a number of related issues along the way. The sudo daemon has now replaced the setuid bit for improved user authentication security, and a ton of existing ports have been fixed and updated where needed. Redox' monthly progress is kind of stunning, and it's clear there's a lot of interest in the Rust-based operating system from outside the project itself as well. I wonder at what point Redox becomes usable for at least some daily, end-user tasks. I think it's not quite there yet, especially when it comes to hardware support, but I feel like it's getting there faster than anyone anticipated.
Google accidentally reveals Android’s Material 3 Expressive interface ahead of I/O
Google's accelerated Android release cycle will soon deliver a new version of the software, and it might look quite different from what you'd expect. Amid rumors of a major UI overhaul, Google seems to have accidentally published a blog post detailing "Material 3 Expressive," which we expect to see revealed at I/O later this month. Google quickly removed the post from its design site, but not before the Internet Archive saved it. Ryan Whitwam at Ars Technica Google seems to be very keen on letting us know this new redesign is based on a lot of user research and metrics, which always sets off alarm bells in my mind when it comes to user interfaces. Every single person uses their smartphone and its applications a little differently, and using tons of metrics and data to average all of this out can make it so that anyone who strays too far from that average is going to have a bad time. This is compounded by the fact that each and every one of us is going to stray from the average in at least a few places. Google also seems to be throwing consistency entirely out of the window with this redesign, which chills me to the bone. One of the reasons I like the current iteration of Material Design so much is that it does a great job of visually (and to a lesser extent, behaviourally) unifying the operating system and the applications you use, which I personally find incredibly valuable. I very much prefer consistency over disparate branding, and the screenshots and wording I'm seeing here seem to indicate Google considers that a problem that needs fixing. As with everything UI, screenshots don't tell the whole story, so maybe it won't be so bad. I mean, it's not like I've got anywhere else to go in case Google messes this up. Monopolies (or duopolies) are fun.
IBM unveils the LinuxONE Emperor 5
Following the recent release of the IBM z17 mainframe, IBM today unveiled the LinuxONE Emperor 5, which packs much of the same hardware as the z17, but focused on Linux use. Today we're announcing IBM LinuxONE 5, a performant Linux computing platform for data, applications and your trusted AI, powered by the IBM Telum II processor with built-in AI acceleration. This launch comes at a pivotal time, as technology leaders focus on three critical imperatives: enabling security, improving cost-efficiency, and integrating AI into enterprise systems. Marcel Mitran and Tina Tarquinio Yes, much like the z17, the LinuxONE 5 is a huge "AI" buzzword bonanza, but that's to be expected in this day and age. The LinuxONE 5, which, again, few of us will ever get to work with, officially supports Red Hat, openSUSE, and Ubuntu, but a variety of other Linux distributions offer support for IBM's Z hardware, as well.
Building your own Atomic (bootc) Desktop
Bootc and associated tools provide the basis for building a personalised desktop. This article will describe the process to build your own custom installation. Daniel Mendizabal at Fedora Magazine The fact that atomic distributions make it relatively easy to create custom "distributions" is a really interesting bonus quality of these types of Linux distributions. The developers behind Blue95, which we talked about a few weeks ago, based their entire distribution on this bootc personalised desktop approach using Fedora, and they argue that "distribution" probably isn't the correct term here: Blue95 is a collection of scripts and YAML files cobbled together to produce a Containerfile, which is built via GitHub Actions and published to the GitHub Container Registry. Which part of this process elevates the project to the status of a Linux distribution? What set of RUN commands in the Containerfile take the project from being merely a Fedora-based OCI image to a full-blown Linux distribution? Adam Fidel While this discussion is mostly academic, I still find it interesting how with the march of technology, and with the aid of new ideas, it's becoming easier and easier to spin up a customised version of your favourite Linux distribution, making it incredibly easy to have your own personal ISO, with all your settings, themes, and customisations applied. This has always been possible, but it seems to be getting easier. Atomic, immutable distributions are not for me, personally, but I firmly believe most distributions focusing on average, normal users - Ubuntu, Fedora, SUSE - will eventually move their immutable variants to the prime spot on their web sites. This will make a whole lot of people big mad, but I think it's inevitable. Of course, traditional Linux distributions won't be going away, but much like how people keep complaining about systemd despite the tons of alternatives, I'm guessing the same will happen with immutable distributions.
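For a flavour of how little such a "distribution" can be, here is a hypothetical minimal Containerfile; the base image tag is an assumption on my part, so check Fedora's registry for the current one:

```dockerfile
# Hypothetical Containerfile: layer personal packages and configuration
# on top of a Fedora bootc base image; the result is a bootable OCI image.
FROM quay.io/fedora/fedora-bootc:42

# Add your preferred tools as an extra image layer.
RUN dnf -y install vim tmux && dnf clean all

# Bake in personal touches, e.g. a default wallpaper.
COPY my-wallpaper.png /usr/share/backgrounds/default.png
```

Build it with `podman build`, and a bootc-based system can then be pointed at the resulting image with `bootc switch` - which is more or less the entire "distribution" pipeline Blue95 automates with GitHub Actions.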
GTK markup language Blueprint becomes part of GNOME
This week's This Week in GNOME mentions that Blueprint will become part of GNOME. Blueprint is now part of the GNOME Nightly SDK and is expected to be part of the GNOME 49 SDK. This means, apps relying on Blueprint won't have to install it manually anymore. Blueprint is an alternative to defining GTK/Libadwaita user interface via .ui XML-files (GTK Builder files). The goal of blueprint is to provide UI definitions that require less boilerplate than XML and are easier to learn. Blueprint also provides a language server for IDE integration. Sophie Herold Quite a few applications already make use of Blueprint, and even some Core GNOME applications use it, so it seems logical to make it part of the default GNOME installation.
OSle: a tiny boot sector operating system
OSle is an incredibly small operating system, coming in at only 510 bytes, so it fits entirely into a boot sector. It runs in real-mode, and is written in assembly. Despite the small size, it has a shell, a read and write file system, process management, and more. It even has its own tiny SDK and some pre-built programs. The code's available under the MIT license.
EU fines TikTok token amount of €530 million for gross privacy violations
A European Union privacy watchdog fined TikTok 530 million euros ($600 million) on Friday after a four-year investigation found that the video sharing app's data transfers to China put users at risk of spying, in breach of strict EU data privacy rules. Ireland's Data Protection Commission also sanctioned TikTok for not being transparent with users about where their personal data was being sent and ordered the company to comply with the rules within six months. Kelvin Chan for AP News In case you're wondering what Ireland's specific role in this case is, TikTok's European headquarters are located in Ireland, which means that any EU-wide privacy violations by TikTok are handled by Ireland's privacy watchdog. Anyway, sounds like a big fine, right? Let's do some math. TikTok's global revenue last year is estimated at 20 billion. This means that a 530 million fine is 2.65% of TikTok's global yearly revenue. Now let's make this more relatable for us normal people. The yearly median income in Sweden is 34365 (pre-tax), which means that if the median income Swede had to pay a fine with the same impact as the TikTok fine, they'd have to pay 910. That's how utterly bullshit this fine is. 910 isn't nothing if you make 34000 per year, but would you call this a true punishment for TikTok? Any time you read about any of these corporate fines, you should do math like this to get an idea of what the true impact of the fine really amounts to. You'll be surprised to learn just how utterly toothless they are.
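The proportionality math above is simple enough to script, using the article's figures:

```python
# Express a corporate fine as a share of yearly revenue, then scale
# that share down to a median personal income to make it relatable.
tiktok_revenue = 20_000_000_000   # estimated global yearly revenue
fine = 530_000_000                # the EU fine

share = fine / tiktok_revenue     # fraction of yearly revenue

swedish_median_income = 34_365    # pre-tax yearly median income in Sweden
equivalent_fine = swedish_median_income * share

print(f"{share:.2%} of revenue; personal equivalent: {equivalent_fine:.0f}")
```

Swap in other fines and revenue figures and you'll see the same pattern: headline-grabbing absolute numbers that amount to rounding errors in relative terms.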
Microsoft brings back Office application preloading from the ’90s
Back in the late '90s and early 2000s, if you installed a comprehensive office suite on Windows, such as Microsoft's own Office or something like WordPerfect Office or IBM Lotus SmartSuite, it would often come with a little icon in the system tray or a floating toolbar to ensure the applications were preloaded upon logging into Windows. The idea was that this preloading would ensure that the applications would start faster. It's 2025, and Microsoft is bringing it back. In a message in the Microsoft 365 Message Center Archive, which is a real thing I didn't make up, the company announced a new Startup Boost task that will preload Office applications on Windows to reduce loading times for the individual Office applications. We are introducing a new Startup Boost task from the Microsoft Office installer to optimize performance and load-time of experiences within Office applications. After the system performs the task, the app remains in a paused state until the app launches and the sequence resumes, or the system removes the app from memory to reclaim resources. The system can perform this task for an app after a device reboot and periodically as system conditions allow. MC1041470 - New Startup Boost task from Microsoft Office installer for Office applications This new task will automatically be added to the Task Scheduler, but only on PCs with 8GB of RAM or more and at least 5GB of available disk space. The task will run 10 minutes after logging into Windows, will be disabled if the Energy Saver feature is enabled, and will be removed if you haven't used Office in a while. The initial rollout of this task will take place in May, and will cover Word only for now. The task can be disabled manually through Task Scheduler or in Word's settings. Since this is Microsoft, every time Office is updated, the task will be re-enabled, which means that users who disable the feature will have to disable it again after each update.
This particular behaviour can be disabled using Group Policy. Yes, the sound you're hearing is all the "AI" text generators whirring into motion as they barf SEO spam onto the web about how to disable this feature to speed up your computer. I'm honestly rather curious who this is for. I have never found the current crop of Office applications to start up particularly slowly, but perhaps corporate PCs are so full of corpo-junkware they become slow again?
DragonFlyBSD 6.4.1 released
It has been well over two years since the last release of DragonFlyBSD, version 6.4.0, and today the project pushed out a small update, DragonFlyBSD 6.4.1. It fixes a few small, longstanding issues, but as the version number suggests, don't expect any groundbreaking changes here. The legacy IDE/NATA driver had a memory leak fixed, the ca_root_nss package has been updated to support newer Let's Encrypt certificates, the package update command will no longer delete an important configuration file that rendered the command unusable, and more small fixes like that. Existing users can update the usual way.
Zhaoxin’s KX-7000 x86-64 processor
Chips and Cheese takes a very detailed look at the latest processor design from Zhaoxin, the Chinese company that inherited VIA's x86 license and has been making new x86 chips ever since. Their latest design (Century Avenue) tries to take yet another step closer to current chip designs from Intel and AMD, and while it falls way short, that's not really the point here. Ultimately performance is what matters to an end-user. In that respect, the KX-7000 sometimes falls behind Bulldozer in multithreaded workloads. It's disappointing from the perspective that Bulldozer is a 2011-era design, with pairs of hardware threads sharing a frontend and floating point unit. Single-threaded performance is similarly unimpressive. It roughly matches Bulldozer there, but the FX-8150's single-threaded performance was one of its greatest weaknesses even back in 2011. But of course, the KX-7000 isn't trying to impress western consumers. It's trying to provide a usable experience without relying on foreign companies. In that respect, Bulldozer-level single-threaded performance is plenty. And while Century Avenue lacks the balance and sophistication that a modern AMD, Arm, or Intel core is likely to display, it's a good step in Zhaoxin's effort to break into higher performance targets. Chester Lam at Chips and Cheese I find Chinese processors, like the x86-based ones from Zhaoxin or the recent LoongArch processors (which you can buy on AliExpress), incredibly fascinating, and would absolutely love to get my hands on one. A board with two of the most recent LoongArch processors - the 3c6000 - goes for about 4000 at the moment, and I'm keeping my eye on that price to see if there's ever going to be a sharp drop. This is prime OSNews material, after all.
No, they're not competitive with the latest offerings from Intel, AMD, or ARM, but I don't really care - they interest me as a computer enthusiast, and since it's highly unlikely we're going to see anyone seriously threaten Intel, AMD, and ARM here in the west, you're going to have to look at China if you're interested in weird architectures and unique processors.
Run x86-64 games on RISC-V with felix86
If RISC-V ever manages to take off, this is going to be an important tool in RISC-V users' toolbox: felix86 is an x86-64 userspace emulator for RISC-V. felix86 emulates an x86-64 CPU running in userspace, which is to say it is not a virtual machine like VMware, rather it directly translates the instructions of an application and mostly uses the host Linux kernel to handle syscalls. Currently, translation happens during execution time, also known as just-in-time (JIT) recompilation. The JIT recompiler in felix86 is focused on fast compilation speed and performs minimal optimizations. It utilizes extensions found on the host system such as the vector extension for SIMD operations, or the B extension for emulating bit manipulation extensions like BMI. The only mandatory extensions for felix86 are G, which every RISC-V general purpose computer should already have, and v1.0 of the standard vector extension. felix86 website The project is still in early development, but a number of popular games already work, which is quite impressive. The code's on GitHub under the MIT license.
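As a toy model of the cache-and-translate loop at the heart of a JIT recompiler (mnemonic-to-mnemonic, with a made-up register mapping; felix86 of course decodes real x86-64 machine code and emits real RISC-V instructions, not strings):

```python
# Toy JIT-style translator: x86-64 instructions are translated the first
# time a block is "executed" and the result is cached, the way a JIT
# recompiler caches compiled blocks. The instruction/register mapping
# here is an illustrative invention, not felix86's actual scheme.
X86_TO_RISCV = {
    ("add", "rax", "rbx"): "add a0, a0, a1",
    ("mov", "rax", "rbx"): "mv  a0, a1",
    ("xor", "rax", "rax"): "li  a0, 0",   # common zeroing idiom
}

translation_cache = {}

def translate(block):
    key = tuple(block)
    if key not in translation_cache:       # first execution: translate now
        translation_cache[key] = [X86_TO_RISCV[insn] for insn in block]
    return translation_cache[key]          # later executions: cache hit

print(translate([("xor", "rax", "rax"), ("add", "rax", "rbx")]))
```

The interesting engineering in a real userspace emulator is everything this toy omits: decoding, register allocation across the two ISAs, mapping x86 flags and SIMD onto RISC-V's vector and bit-manipulation extensions, and passing syscalls through to the host kernel.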
US court eviscerates Apple’s malicious compliance, claims company lied under oath several times
Way back in 2021, in the Epic v. Apple court case, US District Judge Yvonne Gonzalez Rogers ordered Apple to allow third-party developers to tell users how to make payments inside iOS applications without going through Apple's App Store. As we have come to expect from Apple, the company maliciously complied, lowering the commission on purchases outside of its ecosystem from 30% to 27%, while also adding a whole bunch of hoops and hurdles, like scare screens with doom-and-gloom language to, well, scare consumers into staying within Apple's ecosystem for in-app payments. Well, it turns out Judge Yvonne Gonzalez Rogers is furious, giving Apple, Tim Cook, and its other executives what can only be described as a beatdown - even highlighting how one of Apple's executives, under orders from Tim Cook, lied under oath several times. Gonzalez is referring this to the United States Attorney for the Northern District of California to "investigate whether criminal contempt proceedings are appropriate." In stark contrast to Apple's initial in-court testimony, contemporaneous business documents reveal that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option. To hide the truth, Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly. The real evidence, detailed herein, more than meets the clear and convincing standard to find a violation. The Court refers the matter to the United States Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate.
US District Judge Yvonne Gonzalez Rogers Gonzalez' entire ruling is scathing, seething with rage, and will probably do more reputational damage to Apple, Tim Cook, and his executive team than any bendgate or antennagate could ever do. Judge Gonzalez: This is an injunction, not a negotiation. There are no do-overs once a party willfully disregards a court order. Time is of the essence. The Court will not tolerate further delays. As previously ordered, Apple will not impede competition. The Court enjoins Apple from implementing its new anticompetitive acts to avoid compliance with the Injunction. Effective immediately Apple will no longer impede developers' ability to communicate with users nor will they levy or impose a new commission on off-app purchases. Apple willfully chose not to comply with this Court's Injunction. It did so with the express intent to create new anticompetitive barriers which would, by design and in effect, maintain a valued revenue stream; a revenue stream previously found to be anticompetitive. That it thought this Court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse. For this Court, there is no second bite at the apple. US District Judge Yvonne Gonzalez Rogers Gonzalez effectively destroyed any ability for Apple to charge commissions on purchases made inside iOS applications but outside Apple's App Store, and this order will definitely find its way to the European Union as well, where it will serve as further evidence of Tim Cook's and Apple's continuous, never-ending contempt for the law and courts that uphold it. For its part, Apple has stated they're going to appeal. Good luck with that.
Sculpt OS 25.04 released
Sculpt OS 25.04 has been released, and with it come a number of very welcome and important improvements. What most users will care about the most is the updated version of the Falkon web browser, built atop Qt 6.2.2 and its accompanying qtwebengine release, which in turn is using version 112 of the Chromium engine. Aside from this major improvement, there's two other things that stand out: Usability-wise, the new version comes with two highly anticipated features. First, building upon the multi-monitor support added with the previous release, the new version takes multi-monitor awareness to the window management level, allowing for the flexible assignment of virtual desktops to physical displays, adding new window-manipulation conveniences, and supporting rotated displays. Second, a new directory browser allows the user to interactively assign arbitrary directories as file systems to components, vastly easing the fine-grained sandboxing of subsystems. Sculpt OS 25.04 release announcement Sculpt OS 25.04 also inherits the improvements of recent Genode Framework releases, such as support for Intel's Meteor-Lake hardware. Sculpt OS is available for PC, the PinePhone, and the MNT Reform laptop.
Why did Windows 7, for a few months, log on slower if you have a solid color background?
Time for another story from Raymond Chen, about why, in Windows 7, logging in took 30 seconds if you had set a solid colour as your background. Windows 7's logon system needs to wait for a number of tasks to be completed, like creating the taskbar, populating the desktop with icons, and setting the background. If all of those tasks are completed or 30 seconds have passed, the welcome screen goes away. As you can guess from the initial report mentioning having to wait for 30 seconds, one of the tasks that needs to be completed isn't reporting in, so the welcome screen is displayed for the full 30 seconds. In the case of this bug, that task is obviously setting the background. The code to report that the wallpaper is ready was inside the wallpaper bitmap code, which means that if you don't have a wallpaper bitmap, the report is never made, and the logon system waits in vain for a report that will never arrive. Raymond Chen It turns out that people who enabled the setting to hide desktop icons were experiencing the same delay, and that, too, was caused by the lack of a report from, in this case, the desktop icons. Interestingly, it seems settings changed through group policies especially can cause issues like this. Group policies are susceptible to this problem because they tend to be bolted on after the main code is written. When you have to add a group policy, you find the code that does the thing, and you put a giant "if policy allows" around it. Oops, the scope of the "if" block extended past the report call, so if the policy is enabled, the icons are never reported as ready, and the logon system stays on the Welcome screen for the full 30 seconds. Raymond Chen These issues were fixed very quickly after the release of Windows 7, and they disappeared from the radar within a few months of the release of everyone's favourite Windows version.
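To make the scope bug concrete, here's a minimal Python sketch of the pattern Chen describes. The names and structure are invented purely for illustration and have nothing to do with actual Windows source code:

```python
# Hypothetical sketch of the logon-readiness bug: the welcome screen waits
# for every expected task to report in, or for a 30-second timeout.
ready_reports = set()

def report_ready(task):
    # In the real system, the logon UI dismisses once all tasks have reported.
    ready_reports.add(task)

def draw(bitmap):
    pass  # stand-in for the actual wallpaper-drawing code

def set_background_buggy(wallpaper_bitmap):
    # Buggy version: the report call lives inside the bitmap branch, so a
    # solid-colour background (no bitmap at all) never reports readiness,
    # and the logon system waits out the full 30-second timeout.
    if wallpaper_bitmap is not None:
        draw(wallpaper_bitmap)
        report_ready("wallpaper")   # oops: skipped when there is no bitmap

def set_background_fixed(wallpaper_bitmap):
    # Fixed version: report readiness no matter which branch ran.
    if wallpaper_bitmap is not None:
        draw(wallpaper_bitmap)
    report_ready("wallpaper")
```

The group-policy variant Chen mentions is the same shape: wrapping the whole thing, report call included, in a too-broad "if policy allows" block.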
Google is working on a big UI overhaul for Android
When Google released the fourth beta of Android 16 this month, many users were disappointed by the lack of major UI changes. As Beta 4 is the final beta, it's likely the stable Android 16 release won't look much different than last year's release. However, that might not hold true for subsequent updates. Google recently confirmed it will unveil a new version of its Material Design theme at its upcoming developer conference, and we've already caught glimpses of these design changes in Android - including a notable increase in background blur effects. Ahead of I/O next month, here's an early look at Google's upcoming Android redesign. Mishaal Rahman at Android Authority With Android, it's hard to really care about changes like these, because it will take forever and a day for the Android ecosystem to catch up, and in general in mobile computing, most people use applications that have zero respect for platform integration anyway, preferring their own shit "branding and UI design" over that of the platform they're running on. In other words, most people will never really encounter many of these changes, unless they're Pixel users. That being said, these changes seem to basically replace a lot of "window" backgrounds with a blur, which makes everything feel more airy and brighter - so much so that in screenshots purporting to show dark mode, it looks like light mode. This doesn't really seem like the "big UI overhaul" the linked article claims it to be, but there might be more changes on the way we haven't seen yet. Instead of UI changes, I'm much more concerned about how much worse Google will be making Android by shoving Clippy into every corner of the operating system.
PATH isn’t real on Linux
I have no idea how much relevance this short but informative rundown of how PATH works in Linux has in the real world, but I found it incredibly interesting and enlightening. The basic gist - and I might be wrong, there's code involved and I'm not very smart - is that Linux itself needs absolute paths to binaries, while shells and programming languages do not. In other words, the Linux kernel does not know about PATH, and any lookup you're doing comes from either the shell or the programming language you're using. In practice this doesn't matter, but it's still interesting to know.
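The distinction is easy to demonstrate in a few lines of Python: the PATH search is something a shell (or libc's execvp()) performs in userspace before the kernel's execve() ever sees a concrete path. The `which` helper below is illustrative, a minimal re-implementation of what shutil.which does:

```python
# A minimal sketch of the PATH lookup userspace performs; the kernel itself
# never consults PATH - it just resolves whatever path string it is handed.
import os

def which(command, path=None):
    """Return the first executable matching `command` on PATH, or None."""
    if os.sep in command:
        # 'bin/foo' or '/bin/foo': a path was given, so no search happens at
        # all - exactly like handing the string straight to execve().
        return command if os.access(command, os.X_OK) else None
    for directory in (path or os.environ.get("PATH", "")).split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

This is also why, in Python, os.execv("ls", ["ls"]) fails with FileNotFoundError (the kernel gets the literal string and does no search), while os.execvp("ls", ["ls"]) succeeds, because the "p" variant does a lookup like the one above first.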
“I use zip bombs to protect my server”
The majority of the traffic on the web is from bots. For the most part, these bots are used to discover new content. These are RSS feed readers, search engines crawling your content, or nowadays AI bots crawling content to power LLMs. But then there are the malicious bots. These are from spammers, content scrapers or hackers. At my old employer, a bot discovered a WordPress vulnerability and inserted a malicious script into our server. It then turned the machine into a botnet used for DDoS. One of my first websites was yanked off of Google search entirely due to bots generating spam. At some point, I had to find a way to protect myself from these bots. That's when I started using zip bombs. Ibrahim Diallo I mean, when malicious bots harm your website, isn't combating them with something like zip bombs simply self-defense?
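For a sense of how cheap the trick is, here's a hedged Python sketch of the core idea: highly repetitive data compresses absurdly well, so a response that is tiny on the wire (served with a gzip Content-Encoding, as the linked article describes) balloons into something enormous inside the bot's decompressor. The sizes here are illustrative:

```python
# Sketch of a small "bomb": megabytes of zeros gzip down to a few kilobytes.
import gzip
import io

def make_gzip_bomb(decompressed_size=10 * 1024 * 1024, chunk=1024 * 1024):
    """Compress `decompressed_size` bytes of zeros into a gzip stream."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        remaining = decompressed_size
        while remaining:
            n = min(chunk, remaining)
            gz.write(b"\0" * n)    # zeros are maximally compressible
            remaining -= n
    return buf.getvalue()

bomb = make_gzip_bomb()
# The compression ratio for all-zero input is on the order of 1000:1, so the
# payload actually sent to the bot is a few kilobytes at most.
ratio = (10 * 1024 * 1024) / len(bomb)
```

A well-behaved client that caps decompressed response sizes shrugs this off; a naive scraper that inflates everything into memory does not.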
Garmin Pay: yes, you can do NFC tap-to-pay in stores without big tech
Late last year, I went on a long journey to rid myself of as much of my remaining ties to the big technology giants as I could. This journey is still ongoing, with only a few thin ties remaining, but there's one big one I can scratch off the list: mobile in-store payments with NFC tap-to-pay. I used Google Pay and a WearOS smartwatch for this, but neither of those work on de-Googled Android - I opted for GrapheneOS - and it seemed like I was just going to have to accept the loss of this functionality. That is, until I stumbled upon a few forum posts here and there suggesting a solution: Garmin, maker of fitness trackers and smartwatches with a strong focus on sports, health, and the outdoor lifestyle, has its own mobile NFC tap-to-pay service that supposedly works just fine on any Android device, de-Googled or not. In fact, people claimed you could even remove the companion Garmin application from your phone entirely after setting up the payment functionality, and it would still keep working. This seemed like something I should look into, because the lack of NFC tap-to-pay is a recurring concern for many people intending to switch to de-Googled Android. So, late last year, many of you chipped in, allowing me to buy a Garmin smartwatch to try this functionality out, for which I'm incredibly grateful, of course. Here's how all of this works, and whether it's a good alternative to Google Pay. The Garmin Instinct 2S Solar First, let's dive into which watch I chose to buy. Garmin has a wide variety of fitness trackers and smartwatches in its line-up, from basic trackers, to Apple Watch/WearOS-like devices, to outdoor-focused rugged devices. 
I opted for one of the outdoor-focused rugged devices, because not only would it give me the Garmin Pay functionality, but also a few other advantages and unique features I figured OSNews readers would be interested in: a simple black-and-white transflective memory-in-pixel display, a battery life measured in weeks (!), a solar panel built into the display glass, and a case constructed out of lightweight but durable plastics instead of heavy, scratch-prone metal. The specific model I opted for was the Instinct 2S Solar in Mist Grey. I wasn't intending for this to become a review of the watch as a whole, but I figured I might as well share some notes about my experiences with this particular watch model. It's important to note though that Garmin offers a wide variety of smartwatches, from models that look and feel mostly like an Apple Watch or WearOS device, to mechanical models with 'invisible' OLED displays on the dial, to ruggedised, button-only watches for hardcore outdoor people. If you're interested in a Garmin device, there's most likely a type that fits your wishes. The Instinct 2S is definitely not the most beautiful or attractive watch I've ever had on my wrist. It has that "rugged" look some people are really into, but for me, I definitely had to get used to it. I do really like the colour combination I opted for, though, as it complements the black/white transflective memory-in-pixel display really well. I've grown to... Appreciate the look over time. The case and bezel of the watch are made out of what Garmin calls "fiber-reinforced polymer", which is probably just a form of fiber-reinforced plastic. Regardless of the buzzwords, it feels nice and sturdy, with a great texture, and not at all plasticky or cheap. Using a material like this over the metals the Apple Watch and most WearOS devices are made of has several advantages; first, it makes the device much lighter and thus more pleasant to wear, and it's a lot sturdier and more resilient than metal. 
I've banged this watch into door sills and countertops a few times now, and there's not a scratch, dent, or discoloration on it - a far cry from the various metal Apple Watches and WearOS devices I own, which accumulated dings and scratches within weeks of buying them. The case material is one of the many ways in which this watch chooses function over form. Sure, metals might feel premium, but a high-quality plastic is cheaper to make, lasts longer, is more resilient, and also happens to be lighter - it's simply the objectively better choice for something you wear on your wrist every day, exposed to the elements. I understand why people want their smartwatch to be made out of metal, but much like how the orange-red plastic of the Nexus 5 is still the best smartphone material I've ever experienced (the white and black models use inferior plastics), this Garmin tops all of the metal watches I own. The strap is made of silicone, and has an absurd number of tightly-spaced adjustment holes, which makes it very easy to adjust to changing circumstances, like a bit of extra slack for when you're working out. It also has a nice touch in that the second loop has a little peg that slots into an adjustment hole, keeping it in place. Ingenious. Other than that, it's just a silicone band with the clasp made out of the same sturdy, pleasant "fiber-reinforced polymer" as the case. The lens over the display is made out of something Garmin calls "Power Glass™", and I have no idea what that means. It just feels like a watch lens to me - solid, glassy, and... I don't know, round? The unique aspect of the display glass is, of course, the built-in solar panel. It's hard for me to tell what kind of impact - if any - the solar panel has on the battery life of the device. 
What quite obviously does not help is that I live in the Arctic where sun hours come at a bit of a premium, so it's been impossible for me to stand outside and hold out my arm for a while to see if it had an effect on the charge level. There's a software
Trinity Desktop Environment R14.1.4 released
The Trinity Desktop Environment, the modern-day continuation of the KDE 3.x series, has released version R14.1.4. This maintenance release brings new vector wallpapers and colour schemes, support for Unicode surrogate characters and planes above zero (for emoji, among other things), tabs in kpdf, transparency and other new visual effects for Dekorator, and much more. TDE R14.1.4 is already available for a variety of Linux distributions, and can be installed straight from TDE's own repositories if needed.
OpenBSD 7.7 released
Another six months have passed, so it's time for a new OpenBSD release: OpenBSD 7.7 to be exact. Browsing through the long, detailed list of changes, a few important bits jump out. First, OpenBSD 7.7 adds support for Ryzen AI 300 (Strix Point, Strix Halo, Krackan Point), Radeon RX 9070 (Navi 48), and Intel's Arrow Lake, adding support for the latest x86 processors to OpenBSD. There seem to be quite a few entries in the list related to power management, from work on hibernation and suspend, to more fine-grained control over performance profiles when on battery or plugged in. There's also the usual long list of driver improvements, new drivers, and tons and tons of other fixes and changes. OpenBSD 7.7 also ships with the latest GNOME and KDE releases, and contains fixes and improvements for a whole slew of obscure and outdated architectures.
Crucial Wii homebrew library contains code stolen from Nintendo, RTEMS
The Wii homebrew community has been dealt a pretty serious blow, as developers of The Homebrew Channel for the Wii have discovered that not only does an important library most Wii homebrew software relies on use code stolen straight from Nintendo, but that same library also uses code taken from an open source real-time operating system without giving proper attribution. Most Wii homebrew software is built atop a library called libogc. This library apparently contains code stolen from Nintendo's SDK as well as from games using this SDK, decompiled and cleaned. This has been known for a while, but it was believed that large, important parts of libogc were at least original, but that, too, turns out to be untrue. Recently it has been discovered that libogc's threading/OS implementation has been stolen from RTEMS, an open source real-time operating system. The developers of libogc have indicated that they do not care, intend to do nothing about it, and have deleted any issues reporting the stolen code. What's wild about the code stolen from RTEMS is that it's an open source operating system with a nice, permissive license; there was no need to steal the code at all, and all it would take to address it is proper attribution. As such, the fail0verflow group, which develops The Homebrew Channel for the Wii, has ceased all development on The Homebrew Channel, and archived the code repository. The Wii homebrew community was all built on top of a pile of lies and copyright infringement, and it's all thanks to shagkur (who did the stealing) and the rest of the team (who enabled it and did nothing when it was discovered). Together, the developers deceived everyone into believing their work was original. Please demand that the leaders and major contributors to console or other proprietary device SDKs and toolkits that you use and work with do things legally, and do not tolerate this kind of behavior. 
The Homebrew Channel GitHub page Considering Nintendo is on a crusade to shut down emulators, stuff like this is really not helping anyone trying to argue that consoles should be open devices, that emulators play an important role in preservation, and that people have a right to play the games they own on a device other than the console it's intended for. I'm sure this isn't the last we'll hear about this development.
9front “CLAUSE 15 COMMON ELEMENTS OF MAUS AND STAR TYPE” released
Few things in life make me happier than a new 9front release. This new release, 9front "CLAUSE 15 COMMON ELEMENTS OF MAUS AND STAR TYPE", comes with a variety of fixes and new features, such as temperature sensor support for Ryzen processors, a new Intel i225 2.5 GbE driver, a number of low-level kernel improvements, and so, so many more small fixes and changes. If you use 9front, you already know all of this, and you're too cool to read OSNews anyway. If you're new to 9front and want to join the cool people club, you can download images for PC, Raspberry Pi, MNT Reform, and QEMU.
RetrOS-32: a 32bit hobby operating system with graphics, multitasking, and more
RetrOS-32 is a 32bit operating system written from scratch, with graphics, multitasking and networking capabilities. The kernel is written in C and assembly, while the userspace applications are written in C++, using Make for compilation, all licensed under the MIT license. It runs on QEMU, of course, but a variety of real hardware is also supported, which is pretty cool and relatively unique for a small hobby project like this. The UI is delightfully retro - as the name obviously implies - and it comes with a set of basic applications, as well as games like Wolfenstein 3D.
The VTech Socratic method
We've had a lot of fun with VTech's computers in the past on this blog. Usually, they're relatively spartan computers with limited functionality, but they did make something very interesting in the late 80s. The Socrates is their hybrid video game console/computer design from 1988, and today we'll start tearing into it. Leaded Solder web log Now we're in for the good stuff. A weird educational computer/game console/toy thing from the late '80s, by VTech. I have a massive soft spot for these toy-like devices, because they're always kind of a surprise - will it be a stupidly simple hardcoded device with zero input/output, or a weirdly capable computer with tons of hidden I/O and a full BASIC ROM? You won't know until you crack it open and take a peek! VTech still makes things like this, and I still find them every bit as fascinating.
Torvalds states the obvious: file systems should be case-sensitive
Apparently, the Bcachefs people are having problems with case-folding, and Linus Torvalds himself is not happy about it. Torvalds holds the only right opinion in this matter, which is that filesystems should obviously be case-sensitive. Case-insensitive names are horribly wrong, and you shouldn't have done them at all. The problem wasn't the lack of testing, the problem was implementing it in the first place. Dammit. Case insensitivity is a BUG. The fact that filesystem people still think it's a feature, I cannot understand. It's like they revere the old FAT filesystem so much that they have to recreate it - badly. Linus Torvalds on the LKML It boggles my mind that a modern operating system like macOS still defaults to being case-insensitive (but case-preserving), and opting to install macOS the correct way, i.e. with case-sensitivity, can still lead to issues and bugs because macOS isn't used to it. In 2025. Windows' NTFS is at least case-sensitive, but apparently Win32 applications get all weird about it; if you have several files with identical names save for the case used, Win32 applications will only allow you to open one of them. I'm not sure how up to date that information is, though. Regardless, the notion that Readme.txt is considered the same as readme.txt is absolutely insane, and should be one of those weird relics we got rid of back in the '90s.
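The Readme.txt/readme.txt distinction is trivial to demonstrate. A quick Python snippet, assuming it runs on a case-sensitive filesystem like ext4:

```python
# On a case-sensitive filesystem these are two distinct files; on a
# case-insensitive (but case-preserving) volume, as macOS defaults to,
# the second open() would hit the same file and overwrite the first.
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "Readme.txt"), "w") as f:
        f.write("uppercase R")
    with open(os.path.join(d, "readme.txt"), "w") as f:
        f.write("lowercase r")
    # On ext4 this lists both names, each with its own contents.
    entries = sorted(os.listdir(d))
```

Win32's quirks come on top of this: even when the underlying NTFS volume holds both names, most Win32 applications resolve names case-insensitively and can only ever reach one of the two.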
Oddly, in defense of Google keeping Chrome
As much as I'm a fan of breaking up Google, I'm not entirely sure carving Chrome out of Google without a further plan for what happens to the browser is a great idea. I mean, Google is bad, but things could be so, so much worse. OpenAI would be interested in buying Google's Chrome if antitrust enforcers are successful in forcing the Alphabet unit to sell the popular web browser as part of a bid to restore competition in search, an OpenAI executive testified on Tuesday at Google's antitrust trial in Washington. Jody Godoy at Reuters OpenAI is not the only "AI" vulture circling the skies. Perplexity Chief Business Officer Dmitry Shevelenko said he didn't want to testify in a trial about how to resolve Google's search monopoly because he feared retribution from Google. But after being subpoenaed to appear in court, he seized the moment to pitch a business opportunity for his AI company: buying Chrome. Lauren Feiner at the Verge Or, you know, what about, I don't know, fucking Yahoo!? Legacy search brand Yahoo has been working on its own web browser prototype, and says it would like to buy Google's Chrome if the company is forced by a court to sell it. Lauren Feiner at the Verge If the courts really want Google to divest Chrome, the least-worst place it could possibly end up in is some sort of open source foundation or similar legal construction, where no one company has total control over the world's most popular browser. Of course, such a construction isn't exactly ideal either - it will become a battleground of corporate interests soaked with the blood of ordinary users - but anything, anything is better than cud peddlers like OpenAI or whatever the hell Yahoo! even is these days. As users, we really should not want Google to be forced to divest Chrome at this point in time. No matter the outcome, users are going to be screwed even harder than if it were to stay with Google. 
I hate to say this, but I don't see an option that's better than having Chrome remain part of Google. The big problem here is that there is no coherent strategy to deal with the big technology companies in the United States. We're looking at individual lawsuits where judges and medieval nonsense like juries try to deal with individual companies, which, even if, say, Google gets broken up, would do nothing but strengthen the other big technology companies. If, I don't know, Android suddenly had to make it on its own as a company, it's not users who would benefit, but Apple. Is that the goal of antitrust? What you really need to deal with the inordinate power of the big technology companies is legislation that deals with the sector as a whole, instead of letting random courts and people forced to do jury duty decide what to do with Google or Amazon or whatever. The European Union is doing this to great success so far, getting all the major players to make sweeping changes to the benefit of users in the EU. If the United States is serious about dealing with the abusive behaviour of the big technology companies, it's going to need to draft and pass legislation similar to the European Union's DMA and DSA. Of course, that's not going to happen. The United States Congress is broken beyond repair, the US president and his gaggle of incompetents are too busy destroying the US economy and infecting children with measles, and the big tech companies themselves are just bribing US politicians in broad daylight. The odds of the US being able to draft and pass effective big tech antitrust regulations are lower than zero. OpenAI Chrome. You feeling better yet about the open web?
Steam to highlight accessibility support for games on store pages
The Steam store and desktop client will soon be able to help players find games that feature accessibility support. If your game has accessibility features, you can now enter that information in the Steamworks 'edit store' section for your app. Steam announcements page I have a lot of criticism for the Steam client application - it's an overly complex, unattractive, buggy, slow, top-heavy Chrome engine wrapped in an ugly user interface - but this is a great change and a very welcome addition to Steam. Basically, with this, game developers can indicate which accessibility features their game has, allowing users to specifically search for those features, create filters, make sure they can play the game before buying, and so on. The client-side part of the feature is not yet available - it seems Valve is giving developers some time to fill in the necessary information - but once it is, you'll be able to tell at a glance what accessibility features a game has. Such information on the store page of games tends to be a great marketing tool, with reviews quickly pointing out if certain expected features are not present. Any game that lacks support for the Steam Deck or Proton, for instance, will often have a few reviews at the top mentioning as much, and games with invasive DRM can't get away with that either without reviews on Steam pointing it out. I wouldn't be surprised if these accessibility feature listings will quickly become another thing users will simply expect to be there. Regardless, this is great news for people who rely on such features, but even if you don't specifically - accessibility features are often just useful features, period.
A tour inside the IBM z17
Welcome to a photo-driven tour of the IBM z17. I've scoured the image library to dig deep inside these machines that most people don't get an opportunity to see inside, and I'll share some of the specifications gleaned from the announcement and related Redbooks. Elizabeth K. Joseph at the IBM community website These IBM mainframes don't have to be beautiful, but they always are. I wish I could see a z17 up close - hopefully IBM will release a detailed video walkthrough of one of these at some point, including taking one apart and putting it back together.