This article isn't meant to be technical. Instead, it offers a high-level view of what happened through the years with GhostBSD, where the project stands today, and where we want to take it next. As you may know, GhostBSD is a user-friendly desktop BSD operating system built with FreeBSD. Its mission is to deliver a simple, stable, and accessible desktop experience for users who want FreeBSD's power without the complexity of manual setup. I started this journey as a non-technical user. I dreamed of a BSD that anyone could use. Eric Turgeon at the FreeBSD Foundation's website I'm very glad to see this article published on the website of the FreeBSD Foundation. I firmly believe that FreeBSD, in particular, has all the components to become an excellent alternative to desktop Linux distributions, especially now that the Linux world is moving fast with certain features and components not everyone likes. GhostBSD plays an important role in this. It offers not just an easily installable FreeBSD desktop, but also several tools to make managing such an installation easier, like in-house graphical user interfaces for managing Wi-Fi and other networks, backups, updates, installing software, and more. The project also recently moved from UFS to ZFS, and intends to develop graphical tools to expose ZFS's features to users. GhostBSD can always use more contributors, so if you have the skills, interest, and time, do give it a go.
You want more "AI"? No? Well, too damn bad, here's "AI" in your file manager. With AI actions in File Explorer, you can interact more deeply with your files by right-clicking to quickly take actions like editing images or summarizing documents. Like with Click to Do, AI actions in File Explorer allow you to stay in your flow while leveraging the power of AI to take advantage of editing tools in apps or Copilot functionality without having to open your file. AI actions in File Explorer are easily accessible - to try out AI actions in File Explorer, just right-click on a file and you will see a new AI actions entry on the context menu that allows you to choose from available options for your file. Amanda Langowski and Brandon LeBlanc at the Windows Blogs What, you don't like it? There, "AI" that reads all your email and sifts through your Google Drive to barf up stunted, soulless replies. Gmail's smart replies, which suggest potential replies to your emails, will be able to pull information from your Gmail inbox and from your Google Drive and better match your tone and style, all with help from Gemini, the company announced at I/O. Jay Peters at The Verge Ready to submit? No? Your browser now has "AI" integrated and will do your browsing for you. Starting tomorrow, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. This first version allows you to easily ask Gemini to clarify complex information on any webpage you're reading or summarize information. In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf. Josh Woodward Mercy? You want mercy? You sure give up easily, but we're not done yet. We destroyed internet search and now we're replacing it with "AI", and you will like it. Announced today at Google I/O, AI Mode is now available to all US users. The focused version of Google Search distills results into AI-generated summaries with links to certain topics. Unlike AI Overviews, which appear above traditional search results, AI Mode is a dedicated interface where you interact almost exclusively with AI. Ben Schoon at 9To5Google We're going to assume control of your phone, too. The technology powering Gemini Live's camera and screen sharing is called Project Astra. It's available as an Android app for trusted testers, and Google today unveiled agentic capabilities for Project Astra, including how it can control your Android phone. Abner Li at 9To5Google And just to make sure our "AI" can control your phone, we'll let it instruct developers how to make applications, too. That's precisely the problem Stitch aims to solve - Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes. Vincent Nallatamby, Arnaud Benard, and Sam El-Husseini You are not needed. You will be replaced. Submit.
Jwno is a highly customizable tiling window manager for Windows 10/11, built with Janet and ❤️. It brings to your desktop magical parentheses power, which, I assure you, is not suspicious at all, and totally controllable. Jwno documentation Yes, it's a Lisp system, so open your bag of spare parentheses and start configuring and customising it, because you're going to need it if you want to use Jwno to its fullest. In general, Jwno works as a keyboard driven tiling window manager. When a new window shows up, it tries to transform the window so it fits in the layout you defined. You can then use customized key bindings to modify the layout or manipulate your windows, rather than drag things around using the mouse. But, since a powerful generic scripting engine is built-in, you can literally do anything with it. Jwno documentation It's incredibly lightweight, comes as a single executable, integrates perfectly with Windows' native virtual desktop and window management features, has REPL support, and much more.
I genuinely believe making games without a big "do everything" engine can be easier, more fun, and often less overhead. I am not making a "do everything" game and I do not need 90% of the features these engines provide. I am very particular about how my games feel and look, and how I interact with my tools. I often find the default feature implementations in large engines like Unity so lacking I end up writing my own anyway. Eventually, my projects end up being mostly my own tools and systems, and the engine becomes just a vehicle for a nice UI and some rendering... At which point, why am I using this engine? What is it providing me? Why am I letting a tool potentially destroy my ability to work when they suddenly make unethical and terrible business decisions? Or push out an update that they require to run my game on consoles, that also happens to break an entire system in my game, forcing me to rewrite it? Why am I fighting this thing daily for what essentially becomes a glorified asset loader and editor UI framework, by the time I'm done working around their default systems? Noel Berry Interesting and definitely unique perspective, as I feel most game developers just pick one of the existing big engines and work from there. I'm not saying either option is wrong, but I do feel like the dependence on the popular engines can potentially harm the game industry as a whole, as it reduces diversity, drains valuable knowledge and expertise, and leaves developers - especially smaller ones - at the mercy of a few big players. Perhaps not every game needs to be made in Unity or Unreal.
Volker Hilsheimer, chief maintainer of the Qt project, says he has learned lessons from the painful Qt 5 to Qt 6 transition, the importance of Qt Bridges for using Qt from any language, and the significance of the relationship with the Linux KDE desktop. Tim Anderson at Dev Class Qt plays a significant role in the open source desktop world in particular, because it's the framework KDE uses. Hilsheimer notes that KDE's role in the Qt community is actually quite important, because not only is it a source of people learning how to use Qt and who can thus make contributions to the project, KDE also tends to use the latest Qt versions, creating a lot of confidence among the wider Qt community to also adopt the latest versions. The relationship between KDE and Qt is an interesting one, and sometimes leads to questions about the future availability of the open source edition of Qt, since the Qt Company licenses Qt under a dual-license structure (both open and proprietary). To avoid any uncertainty, KDE and Qt have an agreement that covers pretty much every possible scenario and which is worded to ensure the availability of Qt as an open source framework. KDE, through the KDE Free Qt Foundation, has a number of rights and options to ensure the availability of Qt as an open source framework. I'm no lawyer, so I might get some of the details wrong, but the main points are that if the Qt Company ever decides to discontinue the open source edition of Qt, the KDE Free Qt Foundation has the right to release Qt under a BSD-style license within 12 months. The same applies to any additions to Qt which are not released as open source; they must be released under an open source license within 12 months of initial release. This agreement remains valid in the case of buyouts, mergers, or bankruptcies. This agreement has existed in one form or another since the late '90s, and has survived Qt being owned by Nokia and Digia, as well as various other organisational changes. Despite the issue of Qt's ownership coming up every now and then, the agreement is pretty airtight, and considering its longevity there's no reason to be worried about it at all. Still, this structure is clearly more complex and less straightforward than, say, the status of GTK and its relationship to GNOME, so it's not entirely unreasonable that the issue comes up every now and then. I wonder if we'll ever see this situation become less complex, without the need for special agreements. While it wouldn't make a practical difference, it would make things less... Legalese.
Mainframes still play a vital role today, providing extremely high uptime and low latency for financial transactions. Telum II is IBM's latest mainframe processor, and is designed unlike any other server CPU. It only has eight cores, but runs them at a very high 5.5 GHz and feeds them with 360 MB of on-chip cache. IBM also includes a DPU for accelerating IO, along with an on-board AI accelerator. Telum II is implemented on Samsung's leading edge 5 nm process node. IBM's presentation has already been covered by other outlets. Therefore, I'll focus on what I feel are Telum (II)'s most interesting features. DRAM latency and bandwidth limitations often mean good caching is critical to performance, and IBM has often deployed interesting caching solutions. Telum II is no exception, carrying forward a virtual L3 and virtual L4 strategy from prior IBM chips. Chester Lam at Chips and Cheese If you've been keeping track, you can probably deduce that I'm a bit of a sucker for IBM's mainframes and big POWER machines. These Telum II processors are absolutely wild.
I recently learned something that blew my mind: you can run a full desktop Linux environment on your phone. That's a graphical environment via X11 with real window management and compositing, Firefox comfortably playing YouTube (including working audio), and a status bar with system stats. It launches in less than a second and feels snappy. Hold the Robot In and of itself, this is a neat trick most of us are probably aware of. Running a full Linux distribution on an Android phone using chroot is an awesome party trick, but I doubt many people take this concept to its logical conclusion by connecting it up to a display, keyboard, and mouse, and using it as their mobile workstation. Well, the author of this article did, and he took it even one step further by replacing the display part of the logical conclusion with AR glasses. The AR glasses in question were a pair of Xreal Air 2 Pro, which puts a 120Hz 1080p display in front of your eyes using Sony micro-OLED panels. This creates the illusion of a 130'' screen with a 46° field of view, from a pair of glasses that honestly do not feel that much more massive than regular sunglasses or some of the thicker glasses frames some people like. I'm honestly kind of impressed this is possible these days. Add in a keyboard and mouse, and you've got a mobile workstation that takes up very little space, especially since you're carrying your phone with you at all times anyway. Of course, you have to be comfortable with using Linux - no Windows or macOS here - and the software side of the equation requires more setup and fiddling than I thought it would, but the end result is exactly like using a regular Linux desktop, but on your phone and a pair of AR glasses instead of on a laptop or desktop. If I had the cash to throw around on fun side projects like this (you can help with that, actually, through Ko-Fi donations), I would totally order a pair of these Xreal glasses to try this out.
Today we're very excited to announce the open-source release of the Windows Subsystem for Linux. This is the result of a multiyear effort to prepare for this, and a great closure to the first ever issue raised on the Microsoft/WSL repo: Will this be Open Source? Issue #1 microsoft/WSL. That means that the code that powers WSL is now available on GitHub at Microsoft/WSL and open sourced to the community! You can download WSL and build it from source, add new fixes and features and participate in WSL's active development. Pierre Boulay at the Windows Blogs Windows Subsystem for Linux seems like a relatively popular choice for people who want a modern, Linux-based development environment but are stuck using Windows. I'm happy to see Microsoft releasing it as open source, which is no longer something to be surprised by at this point in time. It leaves one to wonder how long it's going to be before more parts of Windows are released as open source, since it could allow Microsoft's leadership to justify some serious job cuts. I honestly have no idea how close to the real thing Windows Subsystem for Linux is, and whether it can actually fully replace a proper Linux installation, with all the functionality and performance that entails. I'm no developer and have no interest in Windows, so I've never actually tried it. I'd love to hear some experiences from all of you. Aside from releasing WSL as open source, Microsoft also released a new command-line text editor - simply called Edit. It's also open source, in its early stages, and is basically the equivalent of Nano. It turns out 32-bit versions of Windows up until Windows 10 still shipped with the MS-DOS Editor, but obviously that one needed a replacement. It already has support for multiple documents, mouse support, and a few more basic features.
Every so often people yearn for a lost (1980s or so) era of 'single user computers', whether these are simple personal computers or high end things like Lisp machines and Smalltalk workstations. It's my view that the whole idea of a "1980s style single user computer" is not what we actually want and has some significant flaws in practice. Chris Siebenmann I think the premise of this entire article is flawed, and borders on being a strawman argument. I honestly don't think there are many people out there who genuinely and seriously want to use an '80s home computer for all their computing tasks, but this article seems to think that there are. Virtually every single person expressing interest in and a desire for classic computers does so from a point of nostalgia, as a learning experience, or as a hobby. They're definitely not interested in using any of those '80s machines to do their banking or to collaborate with their colleagues. Additionally, the problems and issues people have with modern computing platforms are not that they are too complex, but that they are no longer designed with the user in mind. Windows, macOS, iOS; they're all first and foremost designed to extract money from you through ads, upsells, nag screens, and similar anti-user features, and it's those things that people are sick of. Coincidentally, they are all things we didn't have to deal with back in the '80s and '90s. In other words, remove the user-hostility from modern operating systems, and people wouldn't complain about them so much. Which seems rather obvious, doesn't it? It's why using a Linux desktop like Fedora is such a breath of fresh air. There are no upsells for cloud storage or streaming services, no restrictions on what I can and cannot install to protect some multitrillion euro company's revenue streams, no ads and nag screens infesting my operating system - it's just an operating system waiting for me to tell it what to do, and then it does it. It's wild how increasingly revolutionary that's becoming. Whenever I am forced to interact with Windows 11 or whatever the current version of macOS is, I feel such a profound and deep sadness for what they've become, and it seems only natural to me that this sadness is fueling a longing for the days when these systems weren't so user-hostile.
TuxGuitar is a quite powerful application written in a mixture of Java and C. It is able to render a score in real time either via FluidSynth or via pure MIDI. The development of TuxGuitar started in 2008 on SourceForge and, after a halt in 2022, the project restarted on GitHub and is still actively developed. The goal of this article is to try to render a score via TuxGuitar, and various other applications connected to TuxGuitar, via JACK or PipeWire-JACK. The score used throughout this article will be The Pursuit Of Vikings by the band Amon Amarth. It has 2 guitars, a bass and a drum track. Yann Collette at Fedora Magazine If you're into audio production and are considering using Linux for your audio needs, this article is a good starting point.
Last time, we looked at the legacy icons in progman.exe. But what about moricons.dll? Here's a table of the icons that were present in the original Windows 3.1 moricons.dll file (in file order) and the programs that Windows used the icons for. As with the icons in progman.exe, these icons are mapped from executables according to the information in the APPS.INF file. Raymond Chen These icons age like a fine wine. They're clear, well-designed, easy to read, and make extraordinarily good use of the limited number of available pixels. Icons from Mac OS, BeOS, OS/2, and a few others from the same era also look timeless, and I wish modern designers would learn a thing or two from these.
You may not have heard of the "Transparency & Consent Framework", but you've most likely interacted with it, probably on a daily basis. The TCF is used by 80% of the internet to obtain "consent" from users to collect their data and share it among advertisers - you know, the cookie popups. In a landmark EU ruling yesterday, the TCF has been declared to violate the GDPR, making it illegal. For seven years, the tracking industry has used the TCF as a legal cover for Real-Time Bidding (RTB), the vast advertising auction system that operates behind the scenes on websites and apps. RTB tracks what Internet users look at and where they go in the real world. It then continuously broadcasts this data to a host of companies, enabling them to keep dossiers on every Internet user. Because there is no security in the RTB system it is impossible to know what then happens to the data. As a result, it is also impossible to provide the necessary information that must accompany a consent request. Irish Council for Civil Liberties It's no secret that cookie consent popups do not actually comply with the GDPR, and that they are not even necessary if you simply don't do any cross-site sharing of personal information. It seems that this ruling confirms this in a legal sense, forcing the advertising industry to come up with a new, better system. On top of that, every individual company that participated in this scheme is now liable for fines and damages. Complaints coordinated by Johnny Ryan, Director of Enforce at the Irish Council for Civil Liberties, prompted the ruling. He said: Today's court decision shows that the consent system used by Google, Amazon, X, Microsoft, deceives hundreds of millions of Europeans. The tech industry has sought to hide its vast data breach behind sham consent popups. Tech companies turned the GDPR into a daily nuisance rather than a shield for people. Irish Council for Civil Liberties The problem here is not so much the clarity of applicable laws and regulations, but the cost and effectiveness of enforcement. If it takes years of expensive and complex legal proceedings to bring a company that violates the GDPR to heel, is it really an effective legal framework? Especially when you take into account just how many companies, big and small, there are that violate the GDPR? OSNews uses a cookie popup and displays advertising, something we have to do to gain a little bit of extra income - but I'm not happy about it. Our ads don't provide us with much income, perhaps about €150-200, but that's still a decent enough chunk of our income pie that we need it. I would greatly prefer we turn off these ads altogether, but in order to be able to afford that, we'd need to up our Patreon income (OSNews Patreons get an ad-free version of OSNews). That's a long and slow process, especially with the current economic uncertainty making people reconsider their expenses. Disabling our ads altogether for everyone once we're fully reader-funded is still my end goal, but until the world around us settles down a bit, that's a little while off. If you want to speed this process up, you can become an OSNews Patreon and enjoy an ad-free OSNews today.
I generally don't pay attention to the releases of programming languages unless they're notable for some reason or another, and I think this one qualifies. Rust is celebrating its tenth anniversary with a brand new release, Rust 1.87.0. This release adds anonymous pipes to the standard library, lets inline assembly jump to labeled blocks in Rust code, and removes support for the i586 Windows target. Considering Windows 7 was the last Windows version to support i586, I'd say this is fair. You can update to the new version using the rustup command, or wait until your operating system adds it to its repository if you're using a modern operating system.
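For the curious, here's a minimal sketch of the headline feature, the newly stabilised std::io::pipe API, wiring a writer to a reader within a single process:

```rust
use std::io::{pipe, Read, Write};

fn main() -> std::io::Result<()> {
    // pipe() returns an anonymous (PipeReader, PipeWriter) pair,
    // stabilised in Rust 1.87.0.
    let (mut reader, mut writer) = pipe()?;

    writer.write_all(b"hello through an anonymous pipe")?;
    // Close the write end so the reader sees EOF.
    drop(writer);

    let mut message = String::new();
    reader.read_to_string(&mut message)?;
    println!("{message}");
    Ok(())
}
```

The more typical use is plumbing a child process's stdin or stdout, since both ends of the pipe can be converted into standard process handles.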
Accessibility in the software world is a problem in general, but it's an even bigger problem on open source desktops, as painfully highlighted by this excellent article detailing the utterly broken state of accessibility on Linux. Reading the article is soul-crushing as it starts to dawn on you just how bad the situation really is for those among us who require accessibility features, making it virtually impossible for them to switch to Linux. This obviously has to change, and it just so happens that both on the GTK/GNOME and KDE side, recent work on accessibility has delivered some valuable results. Starting with GTK and GNOME, the framework has recently merged the AccessKit backend into GTK 4.18, which enables accessibility features when running GTK applications on Windows and macOS. On Linux, GTK still defaults to at-spi, but I'm sure this will change eventually too. Another major improvement is the special keyboard shortcuts normally provided by the screen reader Orca. Support for these had been in the works for a while but remained incomplete; that work has now been finished, and the new shortcuts ship as part of GNOME 48. Accessibility support for GNOME Web has been greatly improved as well, and Elevado is a new tool that shows you what applications expose on the a11y bus. There's a ton of additional, smaller changes too. On the KDE side, a number of accessibility improvements have been implemented as part of the project's goal of improving input handling. You can now use the numerical pad's arrow keys to move the mouse cursor, there's a new 3-finger gesture to invoke the desktop zoom accessibility feature, keyboard navigation in general has been improved in a wide variety of places in KDE, and there are a whole bunch more improvements besides. In addition, a number of financial grants have been given to developers working on accessibility in KDE, such as a project to make file management-related features - think open/save dialogs, Dolphin, and so on - fully accessible, and projects to make touchpad and screen gestures fully customisable. Accessibility is never really "done" or "perfect", but there's definitely an increasing awareness among the two major open source desktops of just how important it is. A few confounding factors - like the switch to Wayland or the complicated history of audio on Linux - have actually hurt accessibility, and it's only now that things are starting to look up again. However, as anyone with reduced vision or auditory problems can tell you, Linux and the open source desktop still have a very long way to go.
Following rumors, Xiaomi today announced that it will launch its very own chip for smartphones later this month. The "XRING 01" is a chip that the company has apparently been working on for over 10 years now. Details about the chip are scarce so far, but GizmoChina points to recent leaks that suggest the chip is built on a 4nm process through TSMC. The chip supposedly has a 1+3+4 layout and should lag just a bit behind Snapdragon 8 Elite and Dimensity 9400 in terms of raw horsepower, sounding similar to Google's work with Tensor chips. Ben Schoon at 9To5Google I like this. Having almost every Android device use Qualcomm's chips is not good for fostering competition, and weakens Android OEMs' bargaining position. If we have more successful SoC makers, consumers will not only gain access to a wider variety of chips that may better suit their needs, it will also force Qualcomm to lower its prices, compete better, or both. Everybody wins. Well, except Qualcomm, I guess.
You'd almost forget, but aside from the enterprise-focused variant of Solaris for which Oracle sells support contracts, the company has also nominally maintained and released a version of Solaris aimed at non-production use and enthusiasts. This version, called Solaris CBE or Common Build Environment, has always been free to download and use, but since it was last updated all the way back in early 2022, you'd be forgiven for having forgotten all about it. Today, though, Oracle has finally released a new version of Solaris CBE, after three years of silence. The Common Build Environment (CBE) release for Oracle Solaris 11.4 SRU 81 is now available via "pkg update" from the release repository or by downloading the install images from the Oracle Solaris Downloads page. As with the first Oracle Solaris 11.4 CBE, this is licensed for free/open source developers and non-production personal use, and this is not the final, supported version of the 11.4.81 SRU, but the pre-release version on which the SRU was built. It contains all of the new features and interfaces, but not all of the final rounds of bug fixes, from the 11.4.81 SRU. The previous version was the CBE for 11.4.42, so there's more than 3 years' worth of changes between these two releases. If you wanted to read about the changes in every intervening SRU, you can find the monthly SRU release announcements for every SRU, and the What's New summaries for each quarterly feature release starting with SRU 63, on the Oracle Solaris blog. Some FOSS version updates are also listed in Oracle Solaris 11.4 Bundled Software Updates. You can also find posts about some of the new features from the SRUs on Joerg Moellenkamp's blog and Marcel Hofstetter's blog. Alan Coopersmith and Jan Pechanec With three years of changes, updates, and fixes to talk about, it's no surprise there's a lot this new release covers, and credit to Oracle: the blog post announcing it is incredibly detailed, walks through a ton of the changes, and is definitely required reading if you're interested in trying this release out for yourself. I'm definitely tempted, even if it's Oracle. Solaris 11.4 SRU 81 CBE comes with much more recent versions of the free and open source tools and frameworks you've come to expect, like updated versions of GCC, LLVM/clang, tons of programming languages like Python, Perl, and Rust, as well as updates to all the related toolchain components. The CTF (Compact C Type Format) utilities (ctfconvert, ctfdump, and ctfmerge), used to build Solaris itself and crucial for tools like DTrace, have also been updated, and now reside in /usr/bin. These updates are joined by a massive number of other, related low-level changes. For desktop users, GNOME has been updated from the veritably ancient GNOME 3.38 to GNOME 45 (current is 48.1), which is a big jump for Solaris desktop users. Firefox and Thunderbird jump from 91 ESR to 128 ESR, which should deliver a much-improved browsing and email experience. All of this graphical desktop use is powered by version 21.1 of the X server, Mesa 21.3.8, and version 470.182 of the NVIDIA driver. GRUB has been updated to version 2.12, and thanks to a new secure boot shim, users no longer have to make any changes to secure boot settings, as Solaris will always default to installing the secure boot image.
This is just a small selection of all the changes, and it seems Oracle is planning on releasing these CBE versions more often from here on out, as they say this release contains a ton of preparatory work for changes in upcoming releases, which should come "more often than once every three years going forward". Do note that while Solaris CBE releases are free for non-production use, they're not open source.
I don't know anything about hiring processes in Silicon Valley, or about hiring processes in general since I've always worked for myself (and still do, running OSNews, relying on your generous Patreon and Ko-Fi support), so when I ran into this horror story of applying for a position at a Silicon Valley startup, I was appalled. Apparently it's not unheard of - it might even be common? - to ask applicants for a coding position to develop a complex application, for free, without much guidance beyond some vague, generic instructions. In this case, the applicant, Jose Vargas, was applying for a position at Kagi, the search startup with the, shall we say, somewhat evangelical fanbase. After applying, he was asked to develop a complete e-mail client, either as a TUI/CLI or a web application, that can view and send e-mails using a fake or a real backend, and that can display at least plaintext e-mails. None of this was going to be paid labour, of course. Vargas started out by sending in a detailed proposal of what he was planning to create, ending with the obvious question of what kind of response he'd get if he actually implemented the detailed proposal. He got a generic response in return, without an answer to that question, but he set out to work regardless. In the end, it took him about a week to complete the project and send it in. He eventually received a canned rejection notice in response, and after asking for clarification the hiring manager told him they wanted something "simpler and stronger", so he didn't make the cut. I'm not interested in debating whether or not Vargas was suited for the position, or if the unpaid work he sent in was any good. What I do want to talk about, though, is the insane amount of unpaid labour applicants are apparently asked to do in Silicon Valley, the utter lack of clear and detailed instructions, and how the hiring manager didn't answer the question Vargas sent in alongside his detailed proposal. After all, the hiring manager could've saved everyone a ton of time by letting Vargas know upfront the proposal wasn't what Kagi was looking for. Everything about this feels completely asinine to me. As a (former) translator, I'm no stranger to having to do some work to give a potential client an idea of what my work looks like, but more than half a page of text to translate was incredibly rare. Only on a few rare occasions did a prospective client want me to translate more than that, and in those cases it was always as paid labour, at the normal, regular rate. For context, half a page of text is less than half an hour of work - a far cry from a week's worth of unpaid labour. I've read a ton of online discourse about this particular story, and there's no clear consensus on whether or not Vargas' feelings are justified. Personally, I find the instructions given by Kagi overly broad and vague, the task of creating an e-mail client overly demanding, and the canned ("AI"?) responses by the hiring manager insulting - after receiving such a detailed proposal, it should've been easy for a halfway decent hiring manager to realise Vargas might not be a good fit for the role, and tell him so before he started doing any work. Kagi is fully within its right to determine who is and is not a good fit for the company, and who they hire is entirely up to them. If such stringent, demanding hiring practices are par for the course in Silicon Valley, I also can't really fault them for toeing the industry line.
The hiring manager's behaviour seems problematic, but everyone makes mistakes and nobody's perfect. In short, I'm not even really mad at Kagi specifically here. However, if such hiring practices are indeed the norm, can I, as an outsider, just state the obvious? What on earth are you people doing to each other over there in Silicon Valley? Is this really how you want to treat potential applicants, and how you, yourself, want to be treated? Imagine if someone applied to be a retail clerk at a local supermarket, and the supermarket's hiring manager asked the applicant to work an entire week in the store, stocking shelves and helping shoppers, without paying the person any wages, only to deny their application after the week of free labour was over. You all realise how insane that sounds, right? Why not look at a person's previous work, hosted on GitHub or any of its alternatives? Why not contact their previous employers and ask about their performance there, as happens in so many other industries? Why, instead of asking someone to craft an entire e-mail client, don't you just give them a few interesting bugs to look at that won't take an entire week of work? Why not, you know, pay for their labour if you demand a week's worth of work? I'm so utterly baffled by all of this. Y'all developers need a union.
How do you get email to the folks without computers? What if the Post Office printed out email, stamped it, dropped it in folks' mailboxes along with the rest of their mail, and saved the USPS once and for all? And so in 1982 E-COM was born - and, inadvertently, helped coin the term "e-mail." Justin Duke The implementation of E-COM was awesome. You'd enter the messages on your computer and send them, using a TTY or IBM 2780/3780 terminal, to Sperry Rand Univac 1108 computer systems at one of 25 post offices. Postal staff would print the messages and send them through the regular postal system to their recipients. The USPS actually tried to get a legal monopoly on this concept, but the FCC fought them in court and won out. E-COM wasn't the breakout success the USPS had hoped for, but it did catch on in one, unpleasant way: spam. The official-looking E-COM envelopes from the USPS were very attractive to junk mail companies, and it was estimated that about six companies made up 70% of the total E-COM volume of 15 million messages in its second year of operation. The entire article is definitely recommended reading, as it contains a ton more information about E-COM and some of the other attempts by USPS to ride the coattails of the computer and internet revolution, including the idea to give every US resident an @.us e-mail address. Wild.
At the start of this year, Microsoft announced that, alongside the end of support for Windows 10, it would also end support for Office 365 (it's called Microsoft 365 now, but that makes no sense to me) on Windows 10 around the same time. The various Office applications would continue to work on Windows 10, of course, but would no longer receive bug fixes, security patches, and so on. Well, it seems Microsoft experienced some pushback on this one, because it just extended this end-of-support deadline for Office 365 on Windows 10 by an additional three years. To help maintain security while you transition to Windows 11, Microsoft will continue providing security updates for Microsoft 365 Apps on Windows 10 for three years after Windows 10 reaches end of support. These updates will be delivered through the standard update channels, ending on October 10, 2028. Microsoft support article The reality is that the vast majority of Windows users are still using Windows 10, and despite countless shady shenanigans and promises of "AI" bliss, there's relatively little movement in the breakdown between Windows 10 and Windows 11 users. As such, the idea that Microsoft would just stop fixing security issues and bugs in Office on Windows 10 a few months from now seemed preposterous from the outset, and that seems to have penetrated the walls of Microsoft's executives, too. The real question now is: will Microsoft extend the same courtesy to Windows 10 itself? The clock is ticking - there's only a few months left to go before support for Windows 10 ends, leaving 60-70% of Windows users without security fixes and updates. If they blinked with Office, why wouldn't they blink with Windows 10, too? Who dares to place a bet?
Let's dive into a peculiar bug in iOS. And by that I mean, let's follow along as Guilherme Rambo dives into a peculiar bug in iOS. The bug is that, if you try to send an audio message using the Messages app to someone who's also using the Messages app, and that message happens to include the name "Dave and Buster's", the message will never be received. Guilherme Rambo As I read this first description of the bug, I had no idea what could possibly be causing this. However, once Rambo explained that every audio message is transcribed by Apple into a text version, I immediately assumed what was going on: the "and" is throwing up problems because the actual name of the brand is stylised with an ampersand, isn't it? It's always DNS, isn't it? Or in this case, HTML. Yes. Yes it is. MessagesBlastDoorService uses MBDXMLParserContext (via MBDHTMLToSuperParserContext) to parse XHTML for the audio message. Ampersands have special meaning in XML/HTML and must be escaped, so the correct way to represent the transcription in HTML would have been "Dave &amp; Buster's". Apple's transcription system is not doing that, causing the parser to attempt to detect a special code after the ampersand, and since there's no valid special code nor semicolon terminating what it thinks is an HTML entity, it detects an error and stops parsing the content. Guilherme Rambo It must be somewhat of a relief to programmers and developers the world over that even a company as large and filled with talented people as Apple can run into bugs like this.
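To make the failure mode concrete, here's a minimal sketch in Rust of the escaping step Apple's pipeline apparently skipped; the escape_xml_text helper is a hypothetical stand-in, not Apple's actual code:

```rust
// Escape the XML/HTML metacharacters in plain text before embedding
// it in a markup document.
fn escape_xml_text(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        match c {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            _ => out.push(c),
        }
    }
    out
}

fn main() {
    let transcription = "Dave & Buster's";
    // Embedding the raw string yields invalid XHTML: after the "&" the
    // parser expects an entity name terminated by ";", finds none, and
    // gives up - which is exactly what MessagesBlastDoorService did.
    let broken = format!("<body>{transcription}</body>");
    let valid = format!("<body>{}</body>", escape_xml_text(transcription));
    println!("broken: {broken}");
    println!("valid:  {valid}");
}
```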
Following on from OpenBSD/arm64 on QEMU, it's not always practical to compile userland software or a new kernel on some systems, particularly small SoCs with limited space and memory - or indeed QEMU, in fear of melting your CPU. There are two scenarios here - the first, if you are looking for a standard cross-compiler for Aarch64, and the second if you want an OpenBSD-specific environment. Daniel Nechtan Exactly what it says on the tin.
I had to dig through our extensive archive - OSNews was founded in 1997, after all - to see if we reported on it at the time, but it turns out we didn't: in 2006, Intel announced that in 2007, it would cease production of a range of old chips, including the 386 and 486. In Product Change Notification 106013-01, Intel proclaimed these chips dead. Intel Corporation has been manufacturing its MCS 51, MCS 251 and MCS 96 Microcontroller Product Lines for over 25 years now, and the Intel 186 Processor Families, the Intel 386 Processor Families and the Intel 486 Processor Families for over 15 years now. Additionally, we have been manufacturing the i960 32 Bit RISC Processor Families for over 15 years. However, at this time, the forecasted volumes for these product lines are now too low to continue production of these products beyond the year 2007. Therefore, Intel will cease manufacturing silicon wafers for our 6'' based processes in 2007. Affected products include Intel's MCS 51, MCS 251, MCS 96, 80X18X, 80X38X, 80X486DXX, the i960 Family of Microcomputers, in addition to the 82371SB, 82439TX and the 82439HX Chipsets. Intel has no choice but to issue a Product Discontinuance Notice (PDN) effective 3/30/06. Last time orders will be accepted till 3/30/07 with last time ship dates of 9/28/07. Intel Product Change Notification 106013-01 Considering the 386, 486, and i960 families of processors were only used for niche embedded applications at very low volumes at that point in time, it made sense to call it quits. We're 18 years down the line now, and I don't think anyone really mourns the end of production for these processors. Windows ended support for these chips well before the 2007 end of production date, with Windows 2000 being the last Windows version that would run on a 486, albeit only barely, since it officially required a Pentium processor. Linux, though, continued to support the 486, but that, too, is now coming to an end. In a patch series submitted to the LKML by Ingo Molnar, support for a variety of "complicated hardware emulation facilities" for x86-32 will be removed, effectively ending support for 486 and very early 586 processors, by raising the minimum required CPU features to include TSC and CX8 (CMPXCHG8B) support. Linus Torvalds has expressed interest in removing support for the 486 as far back as 2022, so this move doesn't come as a huge surprise. While most tech news outlets leave it at that, as I was reading this news, I immediately thought of the Vortex86 line of processors and what this would mean for Linux support for those processors. In case you're unaware, the Vortex86 is a line of x86-32-compatible processors, originating at SiS, but now developed and produced by DMP Electronics in Taiwan. The last two variants were the Vortex86DX3, a dual-core chip running at 1 GHz, and the Vortex86EX2, a chip with two asymmetrical cores that can run two operating systems at once. Their platform support documents for Windows and Linux are from 2021, so we can't rely on those for more information. Digging through some of the documentation from ICOP, who sell industrial PCs based on the latest Vortex86DX3, I think support in modern kernels is very much hit and miss even before this news. All Vortex86 processors are supposedly i586 (with later variants being i686, even), but some of the earlier versions were compatible with the 486SX.
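If you're wondering what those two features actually are: CPUID leaf 1 reports them as bits in the EDX register, which is easy to probe from userland. Here's a minimal x86-only sketch (on any 64-bit CPU both bits are always set, and a real 486-era probe would first have to check that the CPUID instruction exists at all):

```rust
#[cfg(target_arch = "x86")]
use std::arch::x86::__cpuid;
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::__cpuid;

fn main() {
    // CPUID leaf 1, EDX: bit 4 is TSC (the timestamp counter), bit 8 is
    // CX8 (the CMPXCHG8B instruction) - the two features the patch set
    // makes mandatory. A 486 has neither; most 586-class chips have both.
    let leaf1 = unsafe { __cpuid(1) };
    let has_tsc = leaf1.edx & (1 << 4) != 0;
    let has_cx8 = leaf1.edx & (1 << 8) != 0;
    println!("TSC: {has_tsc}, CMPXCHG8B: {has_cx8}");
}
```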
On top of that, Linux 4.14 seems to be the last kernel that supports any of these chips out of the box based on the documentation by DMP - but then, if you go back to ICOP, you'll find news items about Linux 5.16 adding better support for Vortex86, so I'm definitely confused. My uneducated guess is that the DX3 and EX2 will probably work even after these changes to the Linux kernel, but earlier models might have more problems. Even on the LKML I can find messages from the kind of people who know their stuff who don't know all the ins and outs of these Vortex86 processors, and which instructions they actually support. It won't matter much for people relying on Vortex86 processors in industrial and commercial settings, though, since they tend to use custom stacks built by the vendor, so they're going to be just fine. What's more interesting is what I assume is a small enthusiast market using Vortex86 processors who might want to run modern Linux kernels on them. I have a feeling these code removals might lead to some issues on especially the earlier models, meaning you'll have to use older kernels. I've always been fascinated by the Vortex86 line of processors, and on numerous occasions I've hovered over the buy button on some industrial PC using the Vortex86DX3 (or earlier) processor. Let me know if you're interested in seeing what this chip can do, and if there's enough interest, I can see if I can set up a Ko-Fi goal to buy one of these and mess around with Windows Embedded/CE, Linux, and god knows what else these things can be made to run.
The title is a lie. This isn't brief at all. Picture the keypad of a telephone and calculator side by side. Can you see the subtle difference between the two without resorting to your smartphone? Don't worry if you can't recall the design. Most of us are so used to accepting the common interfaces that we tend to overlook the calculator's inverted key sequence. A calculator has the 7-8-9 buttons at the top whereas a phone uses the 1-2-3 format. Subtle, but puzzling since they serve the same functional goal - input numbers. There's no logical reason for the inversion if a user operates the interface in the same way. Common sense suggests the reason should be technological constraints. Maybe it's due to a patent battle between the inventors. Some people may theorize it's ergonomics. With no clear explanation, I knew history and the evolution of these devices would provide the answer. Which device was invented first? Which keypad influenced the other? Most importantly, who invented the keypad in the first place? Francesco Bertelli and Manoel do Amara Sometimes, you come across articles that are one-of-a-kind, and this is one of them. Very few people would go to this length to document such a particular thing most people find utterly insignificant, but luckily for us, Francesco Bertelli and Manoel do Amara went all the way with this one. If you want to know anything about the history of the numerical pad and its possible layouts, this is the place to go. What I've always found fascinating about numerical pads is how effortlessly the brain can switch between the two most common layouts without really batting an eye. Both layouts seem so ingrained in my brain that it feels like there's barely any context-switching involved, and my fingers just effortlessly flow to the correct numbers. Considering numbers tend to confuse me, I wouldn't have been at all surprised to find myself having issues switching between the two layouts. What makes this even more interesting is when I consider the number row on the keyboard - you know, 1 through 0 - because there I do tend to have a lot of issues finding the right numbers. I don't mean it takes seconds or anything like that, but I definitely experience more hiccups working with the number row than with a numerical keypad of either layout.
We're looking at an article from 2007 here, but I still think it's valuable and interesting, especially from a historical perspective. I first started working on the UNIX file system with Bill Joy in the late 1970s. I wrote the Fast File System, now called UFS, in the early 1980s. In this article, I have written a survey of the work that I and others have done to improve the BSD file systems. Much of this research has been incorporated into other file systems. Marshall Kirk McKusick Variants of UFS are still the default file system in at least NetBSD and OpenBSD, and it's one of the two default options in FreeBSD (alongside ZFS). In other words, this article, and the work described therein, is still relevant to this very day.
I think one of the more controversial parts of Windows 11 - aside from its system requirements, privacy issues, crapware, and "AI" nonsense - is its Start menu. I've heard so many complaints about how it's organised, its performance, the lack of customisation, and so on. Microsoft heard those complaints, and has unveiled the new Start menu that'll be shipping to Windows 11 soon - and I have to say, there's a ton of genuine improvements here that I think many of you will be happy with. First and foremost, the "all applications" view, which until now has been hidden behind a button, will be at the top level, and you can choose between a category view, a grid view, and a list view. This alone makes the Windows 11 Start menu so much more usable, and will be more than enough to make a lot of users want to upgrade, I'm sure. Second, customisation is taken a lot more seriously in this new incarnation of the Start menu. You can shrink sections you're not using, or remove them completely. If you're not interested in the recommendations, you can just remove that section. Don't want to use the feature where you pin applications to the Start menu? Remove that section. This, too, seems to address common complaints, and I'm glad Microsoft is fixing this. Then there's the rest. Microsoft is promising this new Start menu will perform better, which better be true because I've seen some serious lag and delays on incredibly powerful hardware. The recommendations have been improved as well, in case you care about those, and there's a new optional mobile panel that you can slide out, which contains everything related to your phone. Personally, I'm a classic Start menu kind of person - on all my machines (which all run Fedora KDE), I use a classic, very traditional cascading menu that contains nothing but application categories and their respective applications, and nothing more. Still, were I forced to use Windows, these improvements would be welcome, and they seem genuine.
Notifications in Chrome are a useful feature to keep up with updates from your favorite sites. However, we know that some notifications may be spammy or even deceptive. We've received reports of notifications diverting you to download suspicious software, tricking you into sharing personal information or asking you to make purchases on potentially fraudulent online store fronts. To defend against these threats, Chrome is launching warnings of unwanted notifications on Android. This new feature uses on-device machine learning to detect and warn you about potentially deceptive or spammy notifications, giving you an extra level of control over the information displayed on your device. Hannah Buonomo and Sarah Krakowiak Criel on the Chromium Blog So first web browser makers introduce notifications, a feature nobody asked for and everybody hates, and now they're using AI" to combat the spam they themselves enabled and forced onto everyone? Don't we have a name for a business model where you purport to protect your clients from threats you yourself pose? Turning off notifications is one of the first things I do after installing a browser. I do not ever want any website sending me a notification, nor do I want any of them to ask me for permission to do so. They're such an obvious annoyance and massive security threat, and it's absolutely mindboggling to me we just accept them as a feature we have to live with. I genuinely wish browsers like Firefox, which claim to protect your privacy, would just have the guts to be opinionated and rip shit features like this straight out of their browser. Using AI" to combat spam notifications instead of just turning notifications off is peak techbro.
A few months ago I shared my Swift SDK for Darwin, which allows you to build iOS Swift Packages on Linux, amongst other things. I mentioned that a lot of work still needed to be done, such as handling codesigning, packaging, and bundling. I'm super excited to share that we've finally reached the point where all of these things are now possible with cross-platform, open source software. Enter, xtool! This means it's finally possible to build and deploy iOS apps from Linux and Windows (WSL). At the same time, xtool is SwiftPM-based and fully declarative, which means you can also use it to replace Xcode on macOS for building iOS software! kabiroberai While this is obviously an impressive piece of engineering that's taken countless years to fully put together, the issue this doesn't address is Apple's licensing terms when it comes to Xcode and development for Apple's platforms. The Apple Developer Program License Agreement clearly forbids installing Xcode and the Apple SDK on non-Apple branded devices, and as this new xtool requires you to download Xcode.xip and use it, it seems to violate these terms. Now, as far as I'm concerned, these terms are idiotic and should be 100% illegal, but if you're an Apple developer who relies on your Apple developer account to make money, using a tool like this definitely has the potential to put your developer account at risk. For experimentation, sure, this is great, but for any official work I would be quite wary until Apple makes some sort of statement about the matter, which is highly unlikely to happen. Perhaps the courts can, at some point, have a say here - especially in the EU - but even then, Apple can always find or manufacture some reason to terminate your account if they really want to. If you want to develop on your own terms, perhaps developing for Apple platforms is not what you should be doing.
We present the formal verification of Apple's iMessage PQ3, a highly performant, device-to-device messaging protocol offering strong security guarantees even against an adversary with quantum computing capabilities. PQ3 leverages Apple's identity services together with a custom, post-quantum secure initialization phase and afterwards it employs a double ratchet construction in the style of Signal, extended to provide post-quantum, post-compromise security. We present a detailed formal model of PQ3, a precise specification of its fine-grained security properties, and machine-checked security proofs using the TAMARIN prover. Particularly novel is the integration of post-quantum secure key encapsulation into the relevant protocol phases and the detailed security claims along with their complete formal analysis. Our analysis covers both key ratchets, including unbounded loops, which was believed by some to be out of scope of symbolic provers like TAMARIN (it is not!). Felix Linker and Ralf Sasse Weekend, light reading, you know how this works by now. Light some candles, make some tea, get comfy.
John Siracusa, one third of the excellent ATP podcast, developer of several niche Mac utilities, and author of some of the best operating system reviews of all time, has called for Apple's CEO, Tim Cook, to step down. Now, countless people call for Tim Cook to step down all the time, but when someone like Siracusa, an ardent Mac user since the release of the very first Macintosh and a staple of the Apple community, makes such a call, it carries a bit more weight. His main argument is not particularly surprising to anyone who's been keeping tabs on the Apple community, and the Apple developer community in particular: Apple seems to no longer focus on making great products, but on making money. Every decision made by Apple's leadership team is focused solely on extracting as much money as possible from consumers and developers, instead of on making the best possible products. The best leaders can change their minds in response to new information. The best leaders can be persuaded. But we've had decades of strife, lawsuits, and regulations, and Apple has stubbornly dug in its heels even further at every turn. It seems clear that there's only one way to get a different result. In every healthy entity, whether it's an organization, an institution, or an organism, the old is replaced by the new: CEOs, sovereigns, or cells. It's time for new leadership at Apple. The road we're on now does not lead anywhere good for Apple or its customers. It's springtime, and I'm choosing to believe in new life. I swear it's not too late. John Siracusa I reached this same point with Apple a long, long time ago. I was an ardent Mac user during the PowerPC G4 and G5 days, lasting into the early Intel days. However, as the iPhone and related services took over as Apple's primary source of income, I felt that Mac OS X, which I once loved and enjoyed so much, started to languish, and it's been downhill for Apple's desktop operating system ever since. Whenever I have to help my parents with their computers - modern M1 and M2 Macs - I am baffled and saddened by just how big of a convoluted, disjointed, and unintuitive mess macOS has become. I long ago stopped caring about whatever products Apple releases or updates, because I feel like, as a user who genuinely cares about his computing experience, Apple simply doesn't make products for me. I'm not sure replacing Tim Cook with someone else will really change anything about Apple's priorities; in the end, it's a publicly traded corporation that thinks it needs to please shareholders, and a focus on great products instead of money isn't going to help with that. Apple long ago stopped being the beleaguered company many of its most ardent fans still seem convinced that it is, and it's now one of those corporate monoliths that can make billions more overnight by squeezing just a bit more out of developers or users, regardless of what that squeezing does to the user experience. Apple is still selling more devices than ever, and it's still raking in more gambling gains through digital slot machines for children, and as long as that's the case, replacing Tim Cook won't do a goddamn thing.
The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding "AI" into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning. About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful. This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years - it's a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generating debug print statements - for that "typing helper" work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me. Martin Pitt "AI" companies and other proponents of "AI" keep telling us that these tools will save us time and make things easier, but every time someone actually sits down and does the work of testing "AI" tools out in the field, the end results are almost always the same: they just don't deliver the time savings and other advantages we're being promised, and more often than not, they just create more work for people instead of less. Add in the financial costs of using and running these tools, as well as the energy they consume, and the conclusion is clear. When the lack of effectiveness of "AI" tools out in the real world is brought up, proponents inevitably resort to "yes it sucks now, but just you wait on the next version!" Then that next version comes, people test it out in the field again, and it's still useless, and those same proponents again resort to "yes it sucks now, but just you wait on the next version!", like a broken record. We're several years into the hype, and that mythical "next version" still isn't here. We're several years into the "AI" hype, and I still have seen no evidence it's not a dead end and a massive con.
About a year ago, we talked about the fact that Android 15 became page size-agnostic, supporting both 4 KB and 16 KB page sizes. Google was already pushing developers to get their applications ready for 16 KB page sizes, which means recompiling for 16 KB alignment and testing on a 16 KB version of an Android device or simulator. Google is taking the next step now, requiring that every application targeting Android 15 or higher submitted to Google Play after 1 November 2025 must support a page size of 16 KB. This is a key technical requirement to ensure your users can benefit from the performance enhancements on newer devices and prepares your apps for the platform's future direction of improved performance on newer hardware. Without recompiling to support 16 KB pages, your app might not function correctly on these devices when they become more widely available in future Android releases. Dan Brown on the Android Developers Blog This is mostly only relevant for developers instead of users, but in the extremely unlikely scenario that one of your favourite applications cannot be made to work with 16 KB page sizes for some weird reason, or the developer refuses to support it for some even weirder reason, you might have to say goodbye to that application if you use Android 15 or higher. This is absurdly unlikely, but I wouldn't be surprised if it happens to at least one application. If that happens, I want to know which application that is, and ask the developer for their story.
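For the developers among you who want to check where an app stands, the test is mechanical: every PT_LOAD segment in your native libraries needs to be aligned to 16 KB, and the binary has to cope with whatever page size the kernel reports at runtime. Here's a rough sketch of both checks in Python - standard ELF64 layout, nothing Android-specific, and the 16384 threshold is simply Google's new requirement:

```python
import os
import struct
import sys

PT_LOAD = 1
REQUIRED_ALIGN = 16384  # 16 KB: Google's new alignment requirement

def min_load_alignment(path):
    """Return the smallest p_align among PT_LOAD segments of an ELF64 file."""
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF" or ident[4] != 2:  # ELFCLASS64
            raise ValueError("not a 64-bit ELF file")
        f.seek(0x20)                                  # e_phoff
        (e_phoff,) = struct.unpack("<Q", f.read(8))
        f.seek(0x36)                                  # e_phentsize, e_phnum
        e_phentsize, e_phnum = struct.unpack("<HH", f.read(4))
        aligns = []
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            ph = f.read(e_phentsize)
            if struct.unpack_from("<I", ph, 0)[0] == PT_LOAD:
                aligns.append(struct.unpack_from("<Q", ph, 0x30)[0])  # p_align
        return min(aligns) if aligns else 0

if __name__ == "__main__":
    print("runtime page size:", os.sysconf("SC_PAGE_SIZE"))
    align = min_load_alignment(sys.argv[1])
    print(f"smallest PT_LOAD alignment: {align} -> "
          f"{'fine' if align >= REQUIRED_ALIGN else 'needs a 16 KB rebuild'}")
```

Run it against a `.so` from your APK; anything reporting 4096 is exactly the kind of library Google's deadline is aimed at.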
I've launched" the Mac Themes Garden! It is a website showcasing more than 3,000 (and counting) Kaleidoscope from the Classic Mac era, ready to be seen, downloaded and explored! Check it out! Oh, and there also is an RSS feed you can subscribe to see themes as they are added/updated! Damien Erambert If you've spent any time on retrocomputing-related social media channels, you've definitely seen the old classic Mac OS themes in your timeline. They are exquisitely beautiful artifacts of a bygone era, and the work Damien Erambert has been doing to make these easily available and shareable, entirely in his free time, is awesome and a massive service to the retrocomputing community. The process to get these themes loaded up onto the website is actually a lot more involved than you might imagine. It involves a classic Mac OS virtual machine, applying themes, taking screenshots, collecting creator information, and adding everything to a database. This process is mostly manual, and Erambart estimates he's about halfway done. If you have classic Mac OS running somewhere, on real hardware or in a virtual machine, you can now easily theme it at your heart's content.
This is a follow-up to the Samsung NX mini (M7MU) firmware reverse-engineering series. This part is about the proprietary LZSS compression used for the code sections in the firmware of Samsung NX mini, NX3000/NX3300 and Galaxy K Zoom. The post is documenting the step-by-step discovery process, in order to show how an unknown compression algorithm can be analyzed. The discovery process was supported by Igor Skochinsky and Tedd Sterr, and by writing the ideas out on encode.su. Georg Lukas It's not weekend quite yet, but here's some light reading ahead of time.
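If you want a feel for the family of algorithms being reverse-engineered here, below is a decoder for the classic textbook LZSS variant, in Python. To be clear: this is the generic scheme with parameters I picked for the sketch (12-bit backward distance, 4-bit length, one flag byte per eight items), not Samsung's proprietary variant - the exact tag layout and window behaviour of that one are precisely what the post has to work out:

```python
def lzss_decompress(data, min_match=3):
    """Decode textbook LZSS: each flag byte announces 8 items; a set bit
    means a literal byte, a clear bit means a back-reference packed into
    two bytes (12-bit backward distance, 4-bit length)."""
    out = bytearray()
    i = 0
    while i < len(data):
        flags = data[i]; i += 1
        for bit in range(8):
            if i >= len(data):
                break
            if flags & (1 << bit):                      # literal byte
                out.append(data[i]); i += 1
            else:                                       # back-reference
                if i + 1 >= len(data):
                    break
                b1, b2 = data[i], data[i + 1]; i += 2
                distance = b1 | ((b2 & 0xF0) << 4)      # 12-bit distance
                length = (b2 & 0x0F) + min_match        # 4-bit length + minimum
                start = len(out) - distance
                for k in range(length):                 # byte-by-byte: copies may overlap
                    out.append(out[start + k])
    return bytes(out)
```

The reverse-engineering work in the post is essentially the inverse exercise: staring at firmware bytes until the flag bits, distances, and lengths of an unknown variant of this scheme reveal themselves.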
As the headline suggests, we're going to be talking about some very dry Windows stuff that only affects a relatively small number of people, but for those people this is a big deal they need to address. If you're working on pre-production drivers that need to be signed, this is important to you. The Windows Hardware Program supports partners signing drivers for use in pre-production environments. The CA that is used to sign the binaries for use in pre-production environments on the Windows Hardware Program is set to expire in July 2025, following which a new CA will be used to sign the preproduction content starting June 9, 2025. Hardware Dev Center Alongside the new CA come a bunch of changes to the rules. First and foremost, expiry of signed drivers will no longer be tied to the expiry of the underlying CA, so any driver signed with the new CA will not expire, regardless of what happens to the CA. In addition, on April 22, May 13, and June 10, 2025, Windows servicing releases (4D/5B/6B) will be shipped to Windows versions (down to Windows Server 2008) to replace the old CAs with the new ones. As such, if you're working on pre-production drivers, you need to install those Latest Cumulative updates. On a very much related note, Microsoft has announced it's retiring device metadata and the Windows Metadata and Internet Services (WMIS). This is what allowed OEMs and device makers to include things like device names, custom device icons, and other information in the form of an XML file. While OEMs can no longer create new device metadata this way, existing metadata already installed on Windows clients will remain functional. As a replacement for this functionality, Microsoft points to the driver's INF files, where such information and icons can also be included. Riveting stuff.
The openSUSE team has decided to remove the Deepin Desktop Environment from openSUSE, after the project's packager for openSUSE was found to have added a workaround specifically to bypass various security requirements openSUSE has in place for RPM packages. Recently we noticed a policy violation in the packaging of the Deepin desktop environment in openSUSE. To get around security review requirements, our Deepin community packager implemented a workaround which bypasses the regular RPM packaging mechanisms to install restricted assets. As a result of this violation, and in the light of the difficult history we have with Deepin code reviews, we will be removing the Deepin Desktop packages from openSUSE distributions for the time being. Matthias Gerstner Matthias Gerstner goes into great detail to lay out every single time the openSUSE team found massive, glaring security issues in Deepin, and the complete lack of adequate responses from the Deepin upstream team over the past 8 or so years. It's absolutely shocking to see how utterly lax the Deepin developers have been regarding the security of their desktop environment and its dependencies, and the openSUSE team could really only come to one harsh conclusion: Deepin has no security culture whatsoever, and it's extremely likely that every corner of the Deepin code is riddled with very serious security issues. As such, despite the relatively large number of Deepin users on openSUSE, the team has decided to remove Deepin from openSUSE entirely, instead pointing users to a third-party repository if they desire to keep using Deepin. I think this is the best possible option in this situation, but it's not exactly ideal. After reading this entire saga, however, I don't think anyone who cares about security should be using Deepin. Of course, I doubt this will be the end of the story. What about all the other Linux distributions out there? The security issues in Deepin itself are most likely also present in Debian, Fedora, and other distributions who have the Deepin Desktop Environment in their repositories, but what about the workaround to bypass packaging security practices? Does that exist elsewhere as well? I think we're about to find out.
Daniel Stenberg, creator and maintainer of curl, has had enough of the neverending torrent of "AI"-generated security reports the curl project has to deal with. That's it. I've had it. I'm putting my foot down on this craziness. 1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: "Did you use an AI to find the problem or generate this submission?" (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions) 2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help. Daniel Stenberg This is the real impact of "AI": streams of digital trash real humans have to clean up. While proponents of "AI" keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what "AI" is really doing is creating more work for others to deal with by barfing useless garbage into other people's backyards. It's like the digital version of the western world sending its trash to third-world countries to deal with. The best possible sign that "AI" is a toxic trash heap you wouldn't want to have anything to do with are the people fighting for team "AI". In Zuckerberg's vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances. Meghan Bobrowsky at the WSJ Mark Zuckerberg, who built his empire by using people's photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists' email accounts because they were about to publish a negative story about him, who called Facebook users "dumb fucks" for entrusting their personal information to him, is on the forefront fighting for team "AI". If that isn't the ultimate proof there's something deeply wrong and ethically unsound about "AI", I don't know what is.
The Trinity Desktop Environment, the continuation of the final KDE 3.x release updated and maintained for modern times, consists of more than just the KDE bits you may think of. The project also maintains a fork of Qt 3 called TQt3, which it obviously needs to be able to work on and improve TDE itself, which is based on it. In the beginning, this fork consisted mainly of renaming things, but in recent years, more substantial changes meant that the code diverged considerably from the original Qt 3. As such, a small name change is in order. TQt3 was born as a fork of Qt3 and for many years it was little more than a mere renaming effort. Over the past few years, many changes were made and the code has significantly diverged from the original Qt3, although still sharing the same roots. With more changes planned ahead and with the intention of better highlighting such difference, the TDE team has decided to drop the '3' from the repository name, which is now simply called TQt. TDE on Mastodon The effect this has on users is rather minimal - users of the current 14.1.x release branch will still see 3s around in file paths and package names, but in future 14.2.x releases, all of these will have been removed, completing the transition. This seems like a small change, and that's because it is, but it's interesting simply because it highlights that a project that seems relatively straightforward on the outside - maintain and carefully modernise the final KDE 3.x release - encompasses a lot more than that. Maintaining an entire Qt 3 fork certainly isn't a small feat, but it's kind of required to keep a project like TDE going.
VectorVFS is a lightweight Python package that transforms your Linux filesystem into a vector database by leveraging the native VFS (Virtual File System) extended attributes. Rather than maintaining a separate index or external database, VectorVFS stores vector embeddings directly alongside each file - turning your existing directory structure into an efficient and semantically searchable embedding store. VectorVFS supports Meta's Perception Encoders (PE) which includes image/video encoders for vision language understanding, it outperforms InternVL3, Qwen2.5VL and SigLIP2 for zero-shot image tasks. We support both CPU and GPU but if you have a large collection of images it might take a while in the first time to embed all items if you are not using a GPU. Christian S. Perone It won't surprise many of you that this goes a bit above my paygrade, but according to my limited understanding, VectorVFS stores information about files inside the xattr part of inodes. The information being stored is converted into vectors first, and this is the part that breaks my brain a bit, because vectors in this context are far too complex for me to understand. I vaguely understand the end result here - making files searchable using vector magic without using a dedicated database or separate files by using extended attributes in inodes - but the process is far more complicated to understand. It still seems like a very interesting approach, though, and I'd love for people smarter than me to take VectorVFS apart and explain it in easier terms for those of us who don't fully grasp it.
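The basic mechanics are easier to grasp with a toy example. The sketch below - my own illustration in Python, not VectorVFS's actual API - stashes a small embedding in a file's extended attributes under a made-up `user.vec` name, then does a brute-force cosine-similarity search over a directory. The real project stores Perception Encoder embeddings and is considerably smarter about all of this:

```python
import math
import os
import struct

ATTR = "user.vec"  # hypothetical xattr name, invented for this sketch

def store_embedding(path, vec):
    """Pack a float vector into the file's own extended attributes."""
    os.setxattr(path, ATTR, struct.pack(f"<{len(vec)}f", *vec))

def load_embedding(path):
    raw = os.getxattr(path, ATTR)
    return struct.unpack(f"<{len(raw) // 4}f", raw)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(directory, query_vec):
    """Rank files by similarity to the query - no separate database anywhere."""
    scored = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        try:
            scored.append((cosine(query_vec, load_embedding(path)), name))
        except OSError:  # file has no embedding attribute; skip it
            pass
    return sorted(scored, reverse=True)

# toy usage: tag a file, then search for similar ones
# store_embedding("cat.jpg", [0.9, 0.1, 0.3]); print(search(".", [1.0, 0.0, 0.2]))
```

The appeal is that the "index" lives in the filesystem's own metadata, so it travels with the file: move or copy the file (with xattrs preserved) and its embedding comes along for free.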
Can someone please stop these months from coming and going, because I'm getting dizzy with yet another monthly report of all the progress made by Redox. Aside from the usual swath of improvements to the kernel, relibc, drivers, and so on, this month saw the completion of the userspace process manager. In monolithic kernels this management is done in the kernel, resulting in necessary ambient authority, and possibly constrained interfaces if a stable ABI is to be guaranteed. With this userspace implementation, it will be easier to manage access rights using capabilities, reduce kernel bugs by keeping it simpler, and make changes where both sides of the interface can be updated simultaneously. Ribbon and Ron Williams Students at Georgia Tech have been hard at work this winter on Redox as well, building a system health monitoring and recovery daemon and user interface. The Redox team has also done a lot of work to improve the build infrastructure, fixing a number of related issues along the way. The sudo daemon has now replaced the setuid bit for improved user authentication security, and a ton of existing ports have been fixed and updated where needed. Redox' monthly progress is kind of stunning, and it's clear there's a lot of interest in the Rust-based operating system from outside the project itself as well. I wonder at what point Redox becomes usable for at least some daily, end-user tasks. I think it's not quite there yet, especially when it comes to hardware support, but I feel like it's getting there faster than anyone anticipated.
Google's accelerated Android release cycle will soon deliver a new version of the software, and it might look quite different from what you'd expect. Amid rumors of a major UI overhaul, Google seems to have accidentally published a blog post detailing "Material 3 Expressive," which we expect to see revealed at I/O later this month. Google quickly removed the post from its design site, but not before the Internet Archive saved it. Ryan Whitwam at Ars Technica Google seems to be very keen on letting us know this new redesign is based on a lot of user research and metrics, which always sets off alarm bells in my mind when it comes to user interfaces. Every single person uses their smartphone and its applications a little differently, and using tons of metrics and data to average all of this out can make it so that anyone who strays too far from that average is going to have a bad time. This is compounded by the fact that each and every one of us is going to stray from the average in at least a few places. Google also seems to be throwing consistency entirely out of the window with this redesign, which chills me to the bone. One of the reasons I like the current iteration of Material Design so much is that it does a great job of visually (and to a lesser extent, behaviourally) unifying the operating system and the applications you use, which I personally find incredibly valuable. I very much prefer consistency over disparate branding, and the screenshots and wording I'm seeing here seem to indicate Google considers that a problem that needs fixing. As with everything UI, screenshots don't tell the whole story, so maybe it won't be so bad. I mean, it's not like I've got anywhere else to go in case Google messes this up. Monopolies (or duopolies) are fun.
Following the recent release of the IBM z17 mainframe, IBM today unveiled the LinuxONE Emperor 5, which packs much of the same hardware as the z17, but focused on Linux use. Today we're announcing IBM LinuxONE 5, a performant Linux computing platform for data, applications and your trusted AI, powered by the IBM Telum II processor with built-in AI acceleration. This launch comes at a pivotal time, as technology leaders focus on three critical imperatives: enabling security, improving cost-efficiency, and integrating AI into enterprise systems. Marcel Mitran and Tina Tarquinio Yes, much like the z17, the LinuxONE 5 is a huge "AI" buzzword bonanza, but that's to be expected in this day and age. The LinuxONE 5, which, again, few of us will ever get to work with, officially supports Red Hat, OpenSUSE, and Ubuntu, but a variety of other Linux distributions offer support for IBM's Z hardware as well.
Bootc and associated tools provide the basis for building a personalised desktop. This article will describe the process to build your own custom installation. Daniel Mendizabal at Fedora Magazine The fact that atomic distributions make it relatively easy to create custom "distributions" is a really interesting bonus quality of these types of Linux distributions. The developers behind Blue95, which we talked about a few weeks ago, based their entire distribution on this bootc personalised desktop approach using Fedora, and they argue that the term "distribution" probably isn't the correct term here: Blue95 is a collection of scripts and YAML files cobbled together to produce a Containerfile, which is built via GitHub Actions and published to the GitHub Container Registry. Which part of this process elevates the project to the status of a Linux distribution? What set of RUN commands in the Containerfile take the project from being merely a Fedora-based OCI image to a full-blown Linux distribution? Adam Fidel While this discussion is mostly academic, I still find it interesting how, with the march of technology and the aid of new ideas, it's becoming easier and easier to spin up a customised version of your favourite Linux distribution, making it incredibly easy to have your own personal ISO, with all your settings, themes, and customisations applied. This has always been possible, but it seems to be getting easier. Atomic, immutable distributions are not for me, personally, but I firmly believe most distributions focusing on average, normal users - Ubuntu, Fedora, SUSE - will eventually move their immutable variants to the prime spot on their web sites. This will make a whole lot of people big mad, but I think it's inevitable. Of course, traditional Linux distributions won't be going away, but much like how people keep complaining about systemd despite the tons of alternatives, I'm guessing the same will happen with immutable distributions.
This week's This Week in GNOME mentions that Blueprint will become part of GNOME. Blueprint is now part of the GNOME Nightly SDK and is expected to be part of the GNOME 49 SDK. This means, apps relying on Blueprint won't have to install it manually anymore. Blueprint is an alternative to defining GTK/Libadwaita user interface via .ui XML-files (GTK Builder files). The goal of blueprint is to provide UI definitions that require less boilerplate than XML and are easier to learn. Blueprint also provides a language server for IDE integration. Sophie Herold Quite a few applications already make use of Blueprint, and even some Core GNOME applications use it, so it seems logical to make it part of the default GNOME installation.
OSle is an incredibly small operating system, coming in at only 510 bytes, so it fits entirely into a boot sector. It runs in real-mode, and is written in assembly. Despite the small size, it has a shell, a read and write file system, process management, and more. It even has its own tiny SDK and some pre-built programs. The code's available under the MIT license.
A European Union privacy watchdog fined TikTok 530 million euros ($600 million) on Friday after a four-year investigation found that the video sharing app's data transfers to China put users at risk of spying, in breach of strict EU data privacy rules. Ireland's Data Protection Commission also sanctioned TikTok for not being transparent with users about where their personal data was being sent and ordered the company to comply with the rules within six months. Kelvin Chan for AP News In case you're wondering what Ireland's specific role in this case is, TikTok's European headquarters are located in Ireland, which means that any EU-wide privacy violations by TikTok are handled by Ireland's privacy watchdog. Anyway, sounds like a big fine, right? Let's do some math. TikTok's global revenue last year is estimated at €20 billion. This means that a €530 million fine is 2.65% of TikTok's global yearly revenue. Now let's make this more relatable for us normal people. The yearly median income in Sweden is €34,365 (pre-tax), which means that if a median-income Swede had to pay a fine with the same impact as the TikTok fine, they'd have to pay €910. That's how utterly bullshit this fine is. €910 isn't nothing if you make €34,000 per year, but would you call this a true punishment for TikTok? Any time you read about any of these corporate fines, you should do math like this to get an idea of what the true impact of the fine really amounts to. You'll be surprised to learn just how utterly toothless they are.
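If you want to check my numbers, the arithmetic is trivial - here it is in Python, using the figures cited above:

```python
fine = 530e6            # EUR: the DPC's fine against TikTok
revenue = 20e9          # EUR: TikTok's estimated global yearly revenue
ratio = fine / revenue
print(f"fine as a share of revenue: {ratio:.2%}")  # 2.65%

median_income = 34365   # EUR: Swedish pre-tax yearly median income
equivalent = median_income * ratio
print(f"equivalent fine for a median Swede: {equivalent:.0f}")  # ~911, truncated to 910 above
```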
Back in the late '90s and early 2000s, if you installed a comprehensive office suite on Windows, such as Microsoft's own Office or something like WordPerfect Office or IBM Lotus SmartSuite, it would often come with a little icon in the system tray or a floating toolbar to ensure the applications were preloaded upon logging into Windows. The idea was that this preloading would ensure that the applications would start faster. It's 2025, and Microsoft is bringing it back. In a message in the Microsoft 365 Message Center Archive, which is a real thing I didn't make up, the company announced a new Startup Boost task that will preload Office applications on Windows to reduce loading times for the individual Office applications. We are introducing a new Startup Boost task from the Microsoft Office installer to optimize performance and load-time of experiences within Office applications. After the system performs the task, the app remains in a paused state until the app launches and the sequence resumes, or the system removes the app from memory to reclaim resources. The system can perform this task for an app after a device reboot and periodically as system conditions allow. MC1041470 - New Startup Boost task from Microsoft Office installer for Office applications This new task will automatically be added to the Task Scheduler, but only on PCs with 8GB of RAM or more and at least 5GB of available disk space. The task will run 10 minutes after logging into Windows, will be disabled if the Energy Saver feature is enabled, and will be removed if you haven't used Office in a while. The initial rollout of this task will take place in May, and will cover Word only for now. The task can be disabled manually through Task Scheduler or in Word's settings. Since this is Microsoft, every time Office is updated, the task will be re-enabled, which means that users who disable the feature will have to disable it again after each update. This particular behaviour can be disabled using Group Policy. Yes, the sound you're hearing are all the "AI" text generators whirring into motion as they barf SEO spam onto the web about how to disable this feature to speed up your computer. I'm honestly rather curious who this is for. I have never found the current crop of Office applications to start up particularly slowly, but perhaps corporate PCs are so full of corpo-junkware they become slow again?
It has been well over two years since the last release of DragonFlyBSD, version 6.4.0, and today the project pushed out a small update, DragonFlyBSD 6.4.1. It fixes a few small, longstanding issues, but as the version number suggests, don't expect any groundbreaking changes here. The legacy IDE/NATA driver had a memory leak fixed, the ca_root_nss package has been updated to support newer Let's Encrypt certificates, the package update command will no longer delete an important configuration file whose removal rendered the command unusable, and more small fixes like that. Existing users can update the usual way.
Chips and Cheese takes a very detailed look at the latest processor design from Zhaoxin, the Chinese company that inherited VIA's x86 license and has been making new x86 chips ever since. Their latest design (Century Avenue) tries to take yet another step closer to the current chip designs from Intel and AMD, and while it falls way short, that's not really the point here. Ultimately performance is what matters to an end-user. In that respect, the KX-7000 sometimes falls behind Bulldozer in multithreaded workloads. It's disappointing from the perspective that Bulldozer is a 2011-era design, with pairs of hardware thread sharing a frontend and floating point unit. Single-threaded performance is similarly unimpressive. It roughly matches Bulldozer there, but the FX-8150's single-threaded performance was one of its greatest weaknesses even back in 2011. But of course, the KX-7000 isn't trying to impress western consumers. It's trying to provide a usable experience without relying on foreign companies. In that respect, Bulldozer-level single-threaded performance is plenty. And while Century Avenue lacks the balance and sophistication that a modern AMD, Arm, or Intel core is likely to display, it's a good step in Zhaoxin's effort to break into higher performance targets. Chester Lam at Chips and Cheese I find Chinese processors, like the x86-based ones from Zhaoxin or the recent LoongArch processors (which you can buy on AliExpress), incredibly fascinating, and would absolutely love to get my hands on one. A board with two of the most recent LoongArch processors - the 3c6000 - goes for about 4000 at the moment, and I'm keeping my eye on that price to see if there's ever going to be a sharp drop. This is prime OSNews material, after all. No, they're not competitive with the latest offerings from Intel, AMD, or ARM, but I don't really care - they interest me as a computer enthusiast, and since it's highly unlikely we're going to see anyone seriously threaten Intel, AMD, and ARM here in the west, you're going to have to look at China if you're interested in weird architectures and unique processors.
If RISC-V ever manages to take off, this is going to be an important tool in RISC-V users' toolbox: felix86 is an x86-64 userspace emulator for RISC-V. felix86 emulates an x86-64 CPU running in userspace, which is to say it is not a virtual machine like VMware, rather it directly translates the instructions of an application and mostly uses the host Linux kernel to handle syscalls. Currently, translation happens during execution time, also known as just-in-time (JIT) recompilation. The JIT recompiler in felix86 is focused on fast compilation speed and performs minimal optimizations. It utilizes extensions found on the host system such as the vector extension for SIMD operations, or the B extension for emulating bit manipulation extensions like BMI. The only mandatory extensions for felix86 are G, which every RISC-V general purpose computer should already have, and v1.0 of the standard vector extension. felix86 website The project is still in early development, but a number of popular games already work, which is quite impressive. The code's on GitHub under the MIT license.
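To give you an idea of the general shape of what a user-space binary translator does - decode guest instructions, emit equivalent host instructions, execute the result - here's a deliberately tiny Python sketch of the concept. The instruction subset, the register mapping, and the "emitted code" are all invented for illustration; felix86's real decoder and RISC-V code generator are obviously vastly more involved, and emit actual machine code rather than tuples:

```python
# Toy model of JIT binary translation: map a couple of guest (x86-64-style)
# instructions onto host (RISC-V-style) instructions, then "execute" the
# translated block. Everything here is invented for the sketch.
GUEST_TO_HOST_REG = {"rax": "a0", "rbx": "a1", "rcx": "a2"}

def translate(guest_block):
    """Translate one guest basic block into host instruction tuples."""
    host = []
    for op, dst, src in guest_block:
        d = GUEST_TO_HOST_REG[dst]
        s = GUEST_TO_HOST_REG.get(src, src)  # register name or immediate
        if op == "mov":
            # mov reg, imm -> addi d, zero, imm ; mov reg, reg -> add d, zero, s
            host.append(("addi", d, "zero", s) if isinstance(src, int)
                        else ("add", d, "zero", s))
        elif op == "add":
            host.append(("add", d, d, s))
    return host

def run(host_block, regs):
    """Stand-in for executing the emitted host code."""
    regs = dict(regs, zero=0)  # RISC-V's hardwired zero register
    for op, rd, rs1, rs2 in host_block:
        val = regs[rs2] if isinstance(rs2, str) else rs2
        regs[rd] = regs[rs1] + val
    return regs

block = [("mov", "rax", 5), ("mov", "rbx", 7), ("add", "rax", "rbx")]
print(run(translate(block), {"a0": 0, "a1": 0, "a2": 0}))  # a0 == 12
```

A real JIT like felix86's also caches each translated block so it only pays the translation cost once, and - as the project notes - leans on host extensions like the vector extension to map SIMD instructions efficiently.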
Way back in 2021, in the Epic v. Apple court case, US District Judge Yvonne Gonzalez Rogers ordered Apple to allow third-party developers to tell users how to make payments inside iOS applications without going through Apple's App Store. As we have come to expect from Apple, the company maliciously complied, lowering the commission on purchases outside of its ecosystem from 30% to 27%, while also adding a whole bunch of hoops and hurdles, like scare screens with doom-and-gloom language to, well, scare consumers into staying within Apple's ecosystem for in-app payments. Well, it turns out Judge Yvonne Gonzalez Rogers is furious, giving Apple, Tim Cook, and its other executives what can only be described as a beatdown - even highlighting how one of Apple's executives, under orders from Tim Cook, lied under oath several times. Gonzalez is referring this to the United States Attorney for the Northern District of California to investigate "whether criminal contempt proceedings are appropriate." In stark contrast to Apple's initial in-court testimony, contemporaneous business documents reveal that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option. To hide the truth, Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly. The real evidence, detailed herein, more than meets the clear and convincing standard to find a violation. The Court refers the matter to the United States Attorney for the Northern District of California to investigate whether criminal contempt proceedings are appropriate. US District Judge Yvonne Gonzalez Rogers Gonzalez' entire ruling is scathing, seething with rage, and will probably do more reputational damage to Apple, Tim Cook, and his executive team than any bendgate or antennagate could ever do. Judge Gonzalez: This is an injunction, not a negotiation. There are no do-overs once a party willfully disregards a court order. Time is of the essence. The Court will not tolerate further delays. As previously ordered, Apple will not impede competition. The Court enjoins Apple from implementing its new anticompetitive acts to avoid compliance with the Injunction. Effective immediately Apple will no longer impede developers' ability to communicate with users nor will they levy or impose a new commission on off-app purchases. Apple willfully chose not to comply with this Court's Injunction. It did so with the express intent to create new anticompetitive barriers which would, by design and in effect, maintain a valued revenue stream; a revenue stream previously found to be anticompetitive. That it thought this Court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse. For this Court, there is no second bite at the apple. US District Judge Yvonne Gonzalez Rogers Gonzalez effectively destroyed any ability for Apple to charge commissions on purchases made inside iOS applications but outside Apple's App Store, and this order will definitely find its way to the European Union as well, where it will serve as further evidence of Tim Cook's and Apple's continuous, never-ending contempt for the law and courts that uphold it. For its part, Apple has stated they're going to appeal. Good luck with that.