Earlier this year, under pressure from the European Union, Apple was finally forced to open up iOS and allow alternative browser engines, at least in the EU. Up until then, Apple only allowed its own WebKit engine to run on iOS, meaning that even what seemed like third-party browsers - Chrome, Firefox, and so on - were all just Safari skins, running Apple's WebKit underneath (with additional restrictions to make them perform worse than Safari). Even with other browser engines now being allowed on iOS in the EU, there are still hurdles, as Apple requires browser makers to maintain two different browsers, one for the EU, and another one for the rest of the world. It seems the Chromium community is already working on bringing the Chromium Blink browser engine to iOS, but there's still a lot of work to be done. A blog post by the open source consultancy company Igalia digs into the details, since they are contributing to the effort. While they've got the basics covered, it's far from completed or ready for release. We've briefly looked at the current status of the project so far, but many functionalities still need to be supported. For example, regarding UI features, functionalities such as printing preview, download, text selection, request desktop site, zoom text, translate, find in page, and touch events are not yet implemented or are not functioning correctly. Moreover, there are numerous failing or skipped tests in unit tests, browser tests, and web tests. Ensuring that these tests are enabled and passing the test should also be a key focus moving forward. Gyuyoung Weblog I don't use iOS, nor do I intend to any time soon, but the coming availability of browser engines that compete with WebKit is going to be great for the web. I've heard from so many web developers that Safari on iOS is a bit of a nightmare to support, since without any competition on iOS it often stagnates and lags behind in supporting features other browsers have already implemented. With WebKit on iOS facing competition, that might change. Now, there's a line of thought that all this will do is make Chrome even more dominant, but I don't think that's going to be an issue. Safari is still the default for most people, and changing defaults is not something most people will do, especially not the average iOS user. On top of that, this is only available in the EU, so I honestly don't think we have to worry about this any time soon, but obviously, we do have to remain vigilant.
I have no contracts, agreements, or business with Apple, I do not use any Apple products, I do not rely on any Apple services, and none of my work requires the use of any of Apple's tools. Yet, I'm forced to deal with Apple's 30% tax. Today, Patreon, which quite a few of you use to support OSNews, announced that Apple is forcing it to change its billing system, or risk being banned from the App Store. This has some serious consequences for people who use Patreon's iOS application to subscribe to Patreons, and for the creators they subscribe to. First: Apple will be applying their 30% App Store fee to all new memberships purchased in the Patreon iOS app, in addition to anything bought in your Patreon shop. Patreon's website First things first: the 30% mafia tax will only be applied to new Patreon subscribers using the Patreon iOS application to subscribe, starting early November 2024. Existing Patreons will not be affected, iOS or no. Anyone who subscribes through the Patreon website or Android application will not be affected either. Since creators like myself obviously have no intention of just handing over 30% of what our iOS-using supporters donate to us, Patreon has added an option to automatically increase the prices of subscriptions inside the Patreon iOS application by 30%. In other words, starting this November, subscribing to the OSNews Patreon through the iOS application will be 30% more expensive than subscribing from anywhere else. As such, I'm pondering updating the description of our Patreon to strongly suggest that anyone wishing to subscribe to the OSNews Patreon do so either on the web, or through the Patreon Android application instead. If you're hell-bent on subscribing through the Patreon iOS application, you'll be charged an additional 30% to pay protection money to Apple. And just to reiterate once more: if you're already a Patreon, nothing will change and you'll continue to pay the regular amounts per tier. Second: Any creator currently on first-of-the-month or per-creation billing plans will have to switch over to subscription billing to continue earning in the iOS app, because that's the only billing type Apple's in-app purchase system supports. Patreon's website This is Patreon inside baseball, but as it stands right now, subscribers to the OSNews Patreon are billed on the first of the month, regardless of when during a month you subscribe. This is intentional, since I really like the clarity it provides to subscribers, and the monthly paycheck it results in for myself. Sadly, Apple is forcing Patreon to force me to change this - I am now forced to switch to subscription billing instead, somewhere before November 2025. This means that once I make that forced switch, new Patreons will be billed on their subscription date every month (if you subscribe on 25 April, you'll be charged every 25th of the month). Luckily, nothing will change for existing subscribers - you will still be billed on the 1st of the month. This whole thing is absolutely batshit insane. Not only is Patreon being forced by Apple to do this at the risk of having its iOS application banned from the App Store, Apple is also making it explicitly impossible for Patreon to go any other route. As we all know, Patreon won't be allowed to advertise that subscribing will be cheaper on the web, but Apple is also not allowing Patreon to remove subscribing in the Patreon iOS application altogether - if Patreon were to do that, Apple would ban the application from the App Store as well.
And with how many people use iOS, just outright deprecating the Patreon iOS application is most likely going to hurt a lot of creators, especially ones outside of the tech sphere. Steven Troughton-Smith did some math, and concluded that Apple will be making six times as much from donations to Patreon creators as Patreon itself will. In other words, if you use iOS, and subscribe to a creator from within the Patreon iOS application, you will be supporting Apple - a three trillion dollar corporation - more than Patreon, which is actually making it possible to support the small creators you love in the first place. That is absolutely, balls-to-the-wall, batshit insanity. Remember that ad Apple made where it crushed a bunch of priceless instruments and art supplies into an iPad - the ad it had to pull and apologise for because creators, artists, writers, and so on thought it was tasteless and dystopian? Who knew that ad was literal.
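To get a rough sense of the arithmetic behind that conclusion - and note this is my own back-of-the-envelope illustration, assuming Patreon's lowest platform fee of roughly 5% rather than Troughton-Smith's exact figures - take a €10 pledge made through the iOS app: Apple's 30% cut is €3, while Patreon's 5% cut is €0.50. That's Apple taking six times as much as the platform that actually hosts the creators, for doing essentially nothing.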
Haiku, the platform raking in ported browsers while its native WebPositive browser languishes, has added another notch to its belt - and this time, it's a big one. Firefox has been tentatively ported to Haiku, but it's early days and there's no package ready to download - you'll have to compile it yourself if you want to get it running. It's version 128, so it's the latest version, too. Without the ability to easily test and run it, there's not much more to add at this point, but it's still a major achievement. I hope there'll be a nice Haiku package soon.
Way back in the early before time, Microsoft thought it would be a good idea to brand Windows 10 entirely around the label "creators", and one distinctly odd consequence of that was an application called "Paint 3D", a replacement for the traditional Paint application that Microsoft had been shipping one way or another since 1985, when it included a simple bitmap editing program called "Doodle" with its mouse drivers for DOS. Doodle would be replaced shortly after by a white-label version of ZSoft Corporation's PC Paintbrush, and once Windows 1.0 rolled around, it was rebranded as Paint, a name that has stuck until today. Paint 3D was supposed to replace the regular Paint, with a focus on creating and manipulating 3D objects, serving as an extension to Microsoft's failed efforts to bring VR and AR to the masses. Microsoft even went so far as to list the regular Paint as deprecated, but after a lot of outcry, has since reneged and refocused its efforts on improving it. Paint 3D, however, is now officially going to be deprecated, and has been added to Microsoft's list of deprecated Windows features. Paint 3D is deprecated and will be removed from the Microsoft Store on November 4, 2024. To view and edit 2D images, you can use Paint or Photos. For viewing 3D content, you can use 3D Viewer. Microsoft's list of deprecated Windows features I don't think anyone is going to shed a tear over this, but at the same time, as with everything Microsoft changes or removes from Windows, there's bound to be at least a few people whose entire workflow heavily depends on Paint 3D, and they're going to be pissed.
About two years ago, the very popular and full-featured Android launcher Nova Launcher was acquired by mobile links and analytics company Branch. This obviously caused quite the stir, and ever since, whenever Nova is mentioned online, people point out what kind of company acquired Nova and that you probably should be looking for an alternative. While Branch claimed, as the acquiring party always does, that nothing was going to change, most people, including myself, were skeptical. Several decades covering this industry have taught me that acquisitions like this pretty much exclusively mean doom, and usually signal a slow but steady decline in quality and corresponding increase in user-hostile features. I'm always open to being proven wrong, but I don't have a lot of hope. Thom Holwerda Up until a few days ago, I had to admit I was wrong. Nova remained largely the same, got some major new features, and it really didn't get any worse in any meaningful way - in fact, Nova just continued to get better, adopted all the new Android Material You and other features, and kept communicating with its users quite well. After a while, I kind of forgot all about Nova being owned by Branch, as nothing really changed for the worse. It's rare, but it happens - apparently. So I, and many others who were skeptical at first as well, kept on using Nova. Not only because it just continued being what I think is the best, most advanced, and most feature-rich launcher for Android, but also because... Well, there's really nothing else out there quite like Nova. I'm sure many of you are already firing up the comment engine, but as someone who has always been fascinated by alternative, non-stock mobile device launchers - from Palm OS, PocketPC, and Zaurus, all the way to the modern day with Android - I've seen them all and tried them all, and while the launcher landscape is varied, abundant, and full of absolutely amazing alternatives for every possible kind of user, there's nothing else out there that is as polished, feature-rich, fast, and endlessly tweakable as Nova. So, I've been continuing to use Nova since the acquisition, interchanged with Google's own Pixel Launcher ever since I bought a Pixel 8 Pro on release, with Nova's ownership status relegated to some dusty, barely used croft of my mind. As such, it came as a bit of a shock this week when it came out that Branch had done a massive round of lay-offs, including firing the entire Nova Launcher team, save for Nova's original creator, Kevin Barry. Around a dozen or so people were working on Nova at Branch, and aside from Barry, they're all gone now. Once the news got out, Barry took to Nova Launcher's website and released a statement about the layoffs, and the future of Nova. There has been confusion and misinformation about the Nova team and what this means for Nova. I'd like to clarify some things. The original Nova team, for many years, was just me. Eventually I added Cliff to handle customer support, and when Branch acquired Nova, Cliff continued with this role. I also had contracted Rob for some dev work prior to the Branch acquisition and some time after the acquisition closed we were able to bring him onboard as a contractor at Branch. The three of us were the core Nova team. However, I've always been the lead and primary contributor to Nova Launcher and that hasn't changed. I will continue to control the direction and development of Nova Launcher. Kevin Barry This sounds great, and I'm glad the original creator will keep control over Nova.
However, with such a massive culling of developers, it only makes sense that any future plans will have to be scaled down, and that's exactly what both Barry and other former team members are saying. First, Rob Wainwright, who was laid off, wrote the following in Nova's Discord: To be clear, Nova development is not stopping. Kevin is remaining at Branch as Nova's only full time developer. Development will undoubtedly slow with less people working on the app but the current plan is for updates to continue in some form. Rob Wainwright Barry followed up with an affirmation: I am planning on wrapping up some Nova 8.1 work and getting more builds out. I am going to need to cut scope compared to what was planned. Kevin Barry In other words, while development on Nova will continue, it's now back to being a one-man project, which will have some major implications for the pace of development. It makes me wonder if the adoption of the yearly drop of new Android features will slow down, and if we're going to see many more unresolved bugs and issues. On top of that, one has to wonder just how long Branch is for this world - they've just laid off about a hundred people, so what will happen to Barry if Branch goes under? Will he have to find some other job, leaving even less time for Nova development? And if Branch doesn't go under, it is still clearly in dire financial straits, which surely brings monetising Nova users in less pleasant ways into the picture. Nova was definitely dealt a massive blow this week, and I'm fearful for its future. Again.
I regularly report on the progress made by the Servo project, the Rust-based browser engine that was spun out of Mozilla into its own project. Servo has its own reference browser implementation, too, but did you know there are already other browsers using Servo, too? Sure, it's clearly a work-in-progress thing, and it's missing just about every feature we've come to expect from a browser, but it's cool nonetheless. Verso is a web browser built on top of Servo web engine. It's still under development. We dont' accept any feature request at the moment. But if you are interested, feel free to help test it. Verso GitHub page It runs on Linux, Windows, and macOS.
Nearly three years in the making, the ext-image-capture-source-v1 and ext-image-copy-capture-v1 protocols have been merged into the Wayland Protocols repository for vastly improving screen capture support on the Wayland desktop. The ext-image-capture-source-v1 and ext-image-copy-capture-v1 screen copy protocols build upon wlroots' wlr-screencopy-unstable-v1 with various improvements for better screen capture support under Wayland. These new protocols should allow for better performance and window capturing support for use-cases around RDP/VNC remote desktop, screen sharing, and more. Michael Larabel A very big addition to Wayland, as this has been a sore spot for many people wishing to move to Wayland from X. One of the developers behind the effort has penned a blog post with more details about these new protocols.
In line with the release of the COSMIC alpha, parts of which are also available for Redox, we've got another monthly update for the Rust-based operating system. First, in what in hindsight seems like a logical step, Redox is joining hands with Servo, the Rust-based browser engine, and their proposed focus will be on Servo's cross-compilation support and a font stack written in Rust. It definitely makes sense for these two projects to work together in some way, and I hope there can be more cross-pollination in the future. Simple HTTP Server, an HTTP server written in Rust, has been ported to Redox, and the Apache port is getting some work, too. Wget now works on Redox, and several bugs in COSMIC programs were squashed. UEFI also saw some work, including fixing a violation of the UEFI specification, as well as adding several workarounds for buggy firmware, which should increase the number of machines that can boot Redox. Another area of progress is self-hosting, and Redox can now compile hello world programs in Rust, C, and C++ - an important step towards compiling more complex programs and the end-goal of compiling Redox itself on Redox. There's way more in this update, so head on over to get the full details.
After two years of development, System76 has released the very first alpha of COSMIC, their new Rust-based desktop environment for Linux. This is an alpha release, so they make it clear there are going to be bugs and that there's a ton of missing features at this point. As a whole, COSMIC is a comprehensive operating system GUI (graphical user interface) environment that features advanced functionality and a responsive design. Its modular architecture is specifically designed to facilitate the creation of unique, branded user experiences with ease. System76 website Don't read too much into "branded experience" here - it just means other Linux distributions can easily use their colours, branding, and panel configurations. The settings application is also entirely modular, so distributors can easily add additional panels, and replace things like the update panel with one that fits their package management system of choice. COSMIC also supports extensive theming, and if you're wondering - yes, all of these are answers to the very reason COSMIC was made in the first place: GNOME's restrictiveness. There's not much else to say here yet, since it's an alpha release, but if you want to give it a go, the announcement post contains links to instructions for a variety of Linux distributions. COSMIC is also slowly making its way into Redox, the Rust-based operating system led by Jeremy Soller, a System76 employee.
When you launch an app, macOS connects to Apple's OCSP service to check whether the app's Developer ID code signing certificate has been revoked by Apple. In November 2020, Apple's OCSP service experienced a mass outage, preventing Mac users worldwide from launching apps. In response and remedy to this outage, Apple made several explicit promises to Mac users in a support document, which can still be seen in a Wayback Machine archive from September 24, 2023. Jeff Johnson One of the explicit promises Apple made was that it would allow macOS users to turn off phoning home to Cupertino every time you launch an application on macOS. It's four years later now, and this promise has not been kept - Apple still does not allow you to turn off phoning home. In fact, it turns out that last year, Apple scrubbed this promise from all of its documentation, hoping we're all going to forget about it. In other words, Apple is never going to allow its macOS users to stop the operating system from phoning home to Cupertino every time you launch an application. Even though the boiling frog story is nonsensical, it's apt here. More and more Apple is limiting its users' control over macOS, locking it down to a point where you're not really the owner of your computer anymore. Stuff like this gives me the creeps.
Speaking of an operating system for toddlers: Apple is eliminating the option to Control-click to open Mac software that is not correctly signed or notarized in macOS Sequoia. To install apps that Gatekeeper blocks, users will need to open up System Settings and go to the Privacy and Security section to "review security information" before being able to run the software. Juli Clover at MacRumors On a related note, I've got an exclusive photo of the next MacBook Pro.
With macOS Sequoia this fall, using apps that need access to screen recording permissions will become a little bit more tedious. Apple is rolling out a change that will require you to give explicit permission on a weekly basis to these types of apps, and every time you reboot your Mac. If you've been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you've likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that's not the case. Chance Miller Everybody is making comparisons to Windows Vista, but I don't think that's fair at all. Windows Vista suffered from an avalanche of permission dialogs because the wider Windows application, driver, and peripheral ecosystem was not at all used to the security boundaries present in Windows NT being enforced. Vista was the first consumer-focused version of Windows that started doing this, and after a difficult transition period, the flood of dialogs settled down; you can blame Windows for a lot of things, but for a long time now it's definitely not been throwing up more permission dialogs than, say, an average desktop-focused Linux distribution. In other words, Vista's UAC dialogs were a desperately necessary evil, an adjustment period the Windows ecosystem simply had to go through, and Windows as a whole is better off for it today. This, however, is different. This is Apple having such a low opinion of its users, and such a deep disregard for basic usability and computer ownership, that it feels entirely okay with bothering its users with weekly - or more, if you tend to reboot - nag dialogs for applications the user has already properly given permission to. I don't have any real issues with a reminder or permission dialog upon first launching a newly installed screen recording application - or when an existing application gains this functionality in an update - but nagging users weekly is just beyond insanity. More and more it feels like macOS is becoming an operating system for toddlers - or at least, that's how Apple seems to view its users.
When you're shopping online, you'll likely find yourself jumping between multiple tabs to read reviews and research prices. It can be cumbersome doing all that back and forth tab switching, and online comparison is something we hear users want help with. In the next few weeks, starting in the U.S., Chrome will introduce Tab compare, a new feature that presents an AI-generated overview of products from across multiple tabs, all in one place. Imagine you're looking for a new Bluetooth portable speaker for an upcoming trip, but the product details and reviews are spread across different pages and websites. Soon, Chrome will offer to generate a comparison table by showing a suggestion next to your tabs. By bringing all the essential details - product specs, features, price, ratings - into one tab, you'll be able to easily compare and make an informed decision without the endless tab switching. Parisa Tabriz Is this really what people want from their browser, or am I just completely out of touch? I'm not at all convinced the latter isn't the case, but this just seems like a filler feature. Is this really what all the AI hype is about? Is this kind of nonsense the end game we're killing the planet even harder for?
It seems to be bootloader season, because we've got another one - this time, a research project with very limited application for most people. SentinelBoot is a cryptographically secure bootloader aimed at enhancing boot flow safety of RISC-V through memory-safe principles, predominantly leveraging the Rust programming language with its ownership, borrowing, and lifetime constraints. Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality. SentinelBoot achieves these objectives with a 20.1% hashing overhead (approximately 0.27s additional runtime) when compared to an example U-Boot binary (mainline at time of development), and produces a resulting binary one-tenth the size of an example U-Boot binary with half the memory footprint. Lawrence Hunter SentinelBoot is a project undertaken at the University of Manchester, and its goal is probably clear from the description: to develop a more secure bootloader for RISC-V devices. An additional element is that they looked specifically at devices that receive updates over-the-air, like smartphones. In addition, scenarios where an attacker has physical access to the device in question were not considered, for obvious reasons - in such cases, the attacker can just replace the bootloader altogether anyway, and no amount of fancy Rust code is going to save you there. The details of the implementation as described in the article are definitely a little bit over my head, but the gist seems to be that the project's been able to achieve a much more secure boot process without giving up much in performance. This being a research project with an intentionally limited scope does mean it's not something that'll immediately benefit all of us, but it's these kinds of projects that can really push the state of the art and try out the viability of new ideas.
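For readers who want a feel for what "verify the kernel before you jump to it" means in practice, here's a deliberately minimal Rust sketch of the general idea. To be clear, this is my own illustration and not SentinelBoot's actual code: it assumes the ed25519-dalek crate (version 2) and plain software crypto, whereas SentinelBoot runs bare-metal and leans on the RISC-V Vector Cryptography extension.

use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Returns Ok(()) only if `signature_bytes` is a valid Ed25519 signature over
/// `kernel_image` under the public key baked into the bootloader.
fn verify_kernel(
    kernel_image: &[u8],
    signature_bytes: &[u8; 64],
    public_key_bytes: &[u8; 32],
) -> Result<(), ed25519_dalek::SignatureError> {
    let verifying_key = VerifyingKey::from_bytes(public_key_bytes)?;
    let signature = Signature::from_bytes(signature_bytes);
    // Hashing the image and checking the curve math both happen inside verify().
    verifying_key.verify(kernel_image, &signature)
}

fn main() {
    // Placeholder inputs purely for illustration; a real bootloader would read
    // the kernel, its detached signature, and the vendor key from flash or disk.
    let kernel_image = b"not a real kernel image";
    let signature = [0u8; 64];
    let public_key = [0u8; 32];

    match verify_kernel(kernel_image, &signature, &public_key) {
        Ok(()) => println!("signature valid - hand control to the kernel"),
        Err(err) => println!("verification failed, refusing to boot: {err}"),
    }
}

The whole point of the design is that the "refusing to boot" branch is the only thing standing between an attacker's over-the-air payload and your hardware, which is why doing it in a memory-safe language is attractive.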
As you all know, I continue to use WordStar for DOS 7.0 as my word-processing program. It was last updated in December 1992, and the company that made it has been defunct for decades; the program is abandonware. There was no proper archive of WordStar for DOS 7.0 available online, so I decided to create one. I've put weeks of work into this. Included are not only full installs of the program (as well as images of the installation disks), but also plug-and-play solutions for running WordStar for DOS 7.0 under Windows, and also complete full-text-searchable PDF versions of all seven manuals that came with WordStar - over a thousand pages of documentation. Robert J. Sawyer WordStar for DOS is definitely a bit of a known entity in our circles for still being used by a number of world-famous authors. WordStar 4.0 is still being used by George R. R. Martin - assuming he's still even working on The Winds of Winter - and there must be some sort of reason as to why it's still so oddly popular. Thanks to this work by author Robert J. Sawyer, accessing and using version 7 of WordStar for DOS is now easier than ever. One of the reasons Sawyer set out to do this was making sure that if he passes away, the people responsible for his estate and works will have an easy way to access his writings. It's refreshing to see an author think ahead this far, and it will surely help a ton of other people too, since there's quite a few documents lingering around using the WordStar format.
That sure is a big news drop for a random Tuesday. A federal judge ruled that Google violated US antitrust law by maintaining a monopoly in the search and advertising markets. "After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly," according to the court's ruling, which you can read in full at the bottom of this story. "It has violated Section 2 of the Sherman Act." Lauren Feiner at The Verge Among many other things, the judge mentions Google's own admissions that the company can do pretty much whatever it wants with Google Search and its advertisement business, without having to worry about users opting to go elsewhere or ad buyers leaving the Google platform. Studies from inside Google itself made it very clear that Google could systematically make Search worse without it affecting user and/or usage numbers in any way, shape, or form - because users have nowhere else to realistically go. While the ability to raise prices at will without fear of losing customers is a sure sign of being a monopoly, so is being able to make a product worse without fear of losing customers, the judge argues. Google plans to appeal, obviously, and this ruling has nothing yet to say about potential remedies, so what, exactly, is going to change is as of yet unknown. Potential remedies will be handled during the next phase of the proceedings, with the wildest and most aggressive remedy being a potential break-up of Google, Alphabet, or whatever it's called today. My sights are definitely set on a break-up - hopefully followed by Apple, Amazon, Facebook, and Microsoft - to create some much-needed breathing room in the technology market, and pave the way for a massive number of newcomers to compete on much fairer terms. Of note is that the judge also put yet another nail in the coffin of Google's various exclusivity deals, most notably with Apple and, for our interests, with Mozilla. Google pays Apple well over 20 billion dollars a year to be the default search engine on iOS, and its payments to Mozilla to be the default search engine in Firefox make up about 80% of Mozilla's revenue. According to the judge, such deals are anticompetitive. Mehta rejected Google's arguments that its contracts with phone and browser makers like Apple were not exclusionary and therefore shouldn't qualify it for liability under the Sherman Act. "The prospect of losing tens of billions in guaranteed revenue from Google - which presently come at little to no cost to Apple - disincentivizes Apple from launching its own search engine when it otherwise has built the capacity to do so," he wrote. Lauren Feiner at The Verge If the end of these deals becomes part of the package of remedies, it will be a massive financial blow to Apple - 20 billion dollars a year is about 15% of Apple's total annual operating profits, and I'm also pretty sure those Google billions are counted as part of Tim Cook's much-vaunted services revenue, so losing it would definitely impact Apple directly where it hurts. Sure, it's not like it'll make Apple any less of a dangerous behemoth, but it will definitely have some explaining to do to investors. Much more worrisome, however, is the similar deal Google has with Mozilla. About 80% of Mozilla's total revenue comes from a search deal with Google, and if that deal were to be dissolved, the consequences for Mozilla, and thus for Firefox, would be absolutely immense.
This is something I've been warning about for years now, and the end of this deal would be yet another worry that I've voiced repeatedly becoming reality, right after Mozilla becoming an advertising company and making Firefox worse in the name of quick profits. One by one, every single concern I've voiced about the future of Firefox is becoming reality. Canonical, Fedora, KDE, GNOME, and many other stakeholders - ignore these developments at your own peril.
After a number of very big security incidents involving Microsoft's software, the company promised it would take steps to put security at the top of its list of priorities. Today we got another glimpse of the steps it's taking, since the company is going to take security into account during performance reviews. Kathleen Hogan, Microsoft's chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. "Everyone at Microsoft will have security as a Core Priority," says Hogan. "When faced with a tradeoff, the answer is clear and simple: security above all else." A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. "Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards," Microsoft is telling employees in an internal Microsoft FAQ on its new policy. Tom Warren at The Verge Now, I've never worked in a corporate environment or something even remotely close to it, but something about this feels off to me. Often, it seems that individual, lower-level employees know all too well they're cutting corners, but they're effectively forced to because management expects almost inhuman results from its workers. So, in the case of a technology company like Microsoft, this means workers are pushed to write as much code as possible, or to implement as many features as possible, and the only way to achieve the goals set by management is to take shortcuts - like not caring as much about code quality or security. In other words, I don't see how Microsoft employees are supposed to make security their top priority, while also still having to achieve any unrealistic goals set by management and other higher-ups. What I'm missing from this memo and associated reporting is Microsoft telling its employees that if unrealistic targets, crunch, low pay, and other factors that contribute to cutting corners get in the way of putting security first, they have the freedom to choose security. If employees are not given such freedom, demanding even more from them without anything in return seems like a recipe for disaster to me, making this whole memo quite moot. We'll have to see what this will amount to in practice, but with how horribly employees are treated in most industries these days, especially in countries with terrible union coverage and laughable labour protection laws like the US, I don't have high hopes for this.
CP/M is turning 50 this year. The ancient Control Program for Microcomputers, or CP/M for short, has been enjoying a modest renaissance in recent years. By 21st century standards, it's unimaginably tiny and simple. The whole OS fits into under 200 kB, and the resident bit of the kernel is only about 3 kB. Today, in the era of end-user OSes in the tens-of-gigabytes size range, this exerts a fascination to a certain kind of hobbyist. Back when it was new, though, this wasn't minimalist - it was all that early hardware could support. Liam Proven I'm a little too young to have experienced CP/M as anything other than a retro platform - I'm from 1984, and we got our first computer in 1990 or so - but its importance and influence cannot be overstated. Many of the conventions set by CP/M made their way to the various DOS variants, and in turn, we still see some of those conventions in Windows today. Had Digital Research, the company CP/M creator Gary Kildall set up to sell CP/M, accepted the deal with IBM to make CP/M the default operating system for the then newly-created IBM PC, we'd be living in a very different world today. Digital Research would also create several other popular and/or influential software products beyond CP/M, such as DR DOS and GEM, as well as various other DOS variants and CP/M versions with DOS compatibility. It would eventually be acquired by Novell, where it faded into obscurity.
Not too long ago I linked to a blog post by long-time OSNews reader (and silver Patreon) and friend of mine Morgan, about how to set up OpenBSD as a workstation operating system - and in fact, I personally used that guide in my own OpenBSD journey. Well, Morgan's back with another, similar article, this time covering FreeBSD. After going through the basic steps needed to make FreeBSD a bit more amenable to desktop use, Morgan notes about performance: Now let's compare FreeBSD. Well, quite frankly, there is no comparison! FreeBSD just feels snappier and more responsive on the desktop; at the same 170Hz refresh it actually feels like 170Hz. Void Linux always felt fast enough and I thought it had no lag at all at that refresh rate, but comparing them side by side (FreeBSD installed on the NVMe drive, Void running from a USB 4 SSD with similar performance), FreeBSD is smooth as glass and I started noticing just the slightest lag/stutter on Void. The same holds true for Firefox; I use smooth scrolling and on FreeBSD it really is perfectly smooth. Similarly, Youtube performance is unreal, with no dropped frames at any resolution all the way up to 4Kp60, and the videos look so much smoother! Morgan/kaidenshi This is especially relevant for me personally, since the prime reason I switched my workstation back to Fedora KDE was OpenBSD's performance issues. While those performance issues were entirely expected and the result of the operating system's focus on security and hardening, it did mean it's just not suitable for me as a workstation operating system, even if I like the internals and find it a joy to use, even under the hood. If FreeBSD delivers more solid desktop and workstation performance, it might be time I set up a FreeBSD KDE installation and see if it can handle my workstation's 270Hz 4K display. As I keep reiterating - the BSD world has a lot to offer those wishing to run a UNIX-like workstation operating system, and it's articles like these that help people get started. A lot of the steps taken may seem elementary to many of us, but for people coming from Linux or even Windows, they may be unfamiliar and daunting, so having it all laid out in a straightforward manner is quite helpful.
As uBlock Origin lead developer and maintainer Raymond Hill explained on Friday, this is the result of Google deprecating support for the Manifest v2 (MV2) extensions platform in favor of Manifest v3 (MV3). "uBO is a Manifest v2 extension, hence the warning in your Google Chrome browser. There is no Manifest v3 version of uBO, hence the browser will suggest alternative extensions as a replacement for uBO," Hill explained. Sergiu Gatlan at Bleeping Computer If you're still using Chrome, or any of the Chrome skins that have not committed to keeping Manifest v2 extensions enabled, it's really high time to start thinking about jumping ship if ad blocking matters to you. Of course, we don't know for how long Firefox will remain able to properly block ads either, but for now, it's obviously the better choice for those of us who care about a better browsing experience. And just to reiterate: I fully support anyone's right to block ads, even on OSNews. Your computer, your rules. There are a variety of other means to support OSNews - our Patreon, individual donations through Ko-Fi, or buying our merch - that are far better for us than ads will ever be.
Limine is an advanced, portable, multiprotocol bootloader that supports Linux, multiboot1 and 2, the native Limine boot protocol, and more. Limine is lightweight, elegant, fast, and the reference implementation of the Limine boot protocol. The Limine boot protocol's main target audience is operating system and kernel developers that want to use a boot protocol which supports modern features in an elegant manner, that GRUB's aging multiboot protocols do not (or do not properly). Limine website I wish trying out different bootloaders was an easier thing to do. Personally, since my systems only run Fedora Linux, I'd love to just move them all over to systemd-boot and not deal with GRUB at all anymore, but since it's not supported by Fedora I'm worried updates might break the boot process at some point. On systems where only one operating system is installed, as a user I should really be given the choice to opt for the simplest, most basic boot sequence, even if it can't boot any other operating systems or if it's more limited than GRUB.
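To illustrate just how simple that kind of minimal, single-OS boot setup can be, this is roughly what one systemd-boot loader entry looks like - a hypothetical example of my own, with made-up kernel version, UUID, and file names rather than anything generated by an actual Fedora install: a file such as /boot/loader/entries/fedora.conf containing

title    Fedora Linux (example)
linux    /vmlinuz-6.10.0-100.fc40.x86_64
initrd   /initramfs-6.10.0-100.fc40.x86_64.img
options  root=UUID=00000000-0000-0000-0000-000000000000 ro quiet

and that's essentially the entire boot configuration for a machine that only ever boots one operating system.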
Following our recent work with Ubuntu 24.04 LTS where we enabled frame pointers by default to improve debugging and profiling, we're continuing our performance engineering efforts by evaluating the impact of O3 optimization in Ubuntu. O3 is a GCC optimization level that applies more aggressive code transformations compared to the default O2 level. These include advanced function and the use of sophisticated algorithms aimed at enhancing execution speed. While O3 can increase binary size and compilation time, it has the potential to improve runtime performance. Ubuntu Discourse If these optimisations deliver performance improvements, and the only downside is larger binaries and longer compilation times, it seems like a bit of a no-brainer to enable these, assuming those mentioned downsides are within reason. Are there any downsides they're not mentioning? Browsing around and doing some minor research it seems that -O3 optimisations may break some packages, and can even lead to performance degradation, defeating the purpose altogether. Looking at a set of benchmarks from Phoronix from a few years ago, in which the Linux kernel was compiled with either O2 or O3 and their performance compared, the results were effectively tied, making it seem not worth it at all. However, during these benchmarks, only the kernel was tested; everything else was compiled normally in both cases. Perhaps compiling the entire system with O3 will yield improvements in other parts of the system that do add up. For now, you can download unsupported Ubuntu ISOs compiled with O3 optimisations enabled to test them out.
Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo's layout engine. Servo blog On top of this, there's tons of improvements to the flexbox layout engine, support for generic font families like 'sans-serif' and 'monospace' has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.
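If you've never used Rayon, the pattern the Servo team is referring to is worth seeing, because it's remarkably small. The sketch below is my own toy illustration - the types and functions are invented for the example and have nothing to do with Servo's real layout code - but it shows how switching a sequential iterator to a parallel one spreads per-row work across cores:

use rayon::prelude::*;

struct Row { cells: Vec<String> }
struct RowLayout { height: f32 }

// Pretend laying out a row costs something proportional to its cell count.
fn layout_row(row: &Row) -> RowLayout {
    RowLayout { height: 16.0 + row.cells.len() as f32 }
}

fn layout_table(rows: &[Row]) -> Vec<RowLayout> {
    // Changing .iter() to .par_iter() is all it takes: Rayon's work-stealing
    // thread pool distributes the per-row closures across available CPU cores.
    rows.par_iter().map(layout_row).collect()
}

fn main() {
    let rows: Vec<Row> = (0..1_000)
        .map(|i| Row { cells: vec![format!("cell {i}"); 8] })
        .collect();
    let layouts = layout_table(&rows);
    println!("laid out {} rows in parallel", layouts.len());
}

The "opportunistic" part is that nothing about the calling code has to change: if a table has two rows the overhead is negligible, and if it has thousands, the work fans out automatically.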
Most applications on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. xdg-override GitHub page I've loved this project ever since I came across it a few days ago. Not because I need it - I really don't - but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser - which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but for all your work-related things, you use Chrome. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved in Linux. Applications on Linux tend to use freedesktop.org's xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko's issue, changing the variable $XDG_CONFIG_HOME just for Slack to point xdg-open to a different configuration file doesn't work, because the setting will be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn't work either, of course, since that would affect all other applications, too. So, what's the actual solution? We'd like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical. However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we'll "inject" into Slack by adding it to PATH. Dmytro Kostiuchenko xdg-override takes this idea and runs with it: It is based on the idea described above, but the script won't generate proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it'll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. Dmytro Kostiuchenko I don't fully understand how it works, but I get the overall gist of what it's doing. I think it's quite clever, and solves a very specific issue in a non-destructive way. While it's not something most people will ever need, it feels like something that if you do need it, it will quickly become a default part of your toolbox or workflow.
Today, every Unix-like system can trace their ancestry back to the original Unix. That includes Linux, which uses the GNU tools - and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason - Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you're using a command line that's been with us for more than fifty years. Jim Hall An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to "real" UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux - it has to cater to endlessly more modern needs than something ancient and dead like HP-UX - but what I learn while using these systems closer to real UNIX has made me appreciate proper UNIX more than I used to in the past. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I'm trying to say is - go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they're much easier to grasp than a modern Linux system, and you'll learn a lot from the experience.
Android 14 introduced the ability for application stores to claim ownership over application updates, to ensure other installation sources won't accidentally update applications they shouldn't. What is still lacking, however, is a way for users to easily change the update ownership for applications. In other words, say you install an application by downloading an APK from GitHub, and later the application makes its way to F-Droid: you'll get warning popups when F-Droid tries to update that application. That's about to change, as Android Authority discovered that the Play Store application seems to be getting a new feature where it can take ownership of an application's updates. A new flag spotted in the latest Google Play Store release suggests that users may see the option to install updates for apps downloaded from a different source. As you can see in the attached screenshots, the Play Store will show available updates for apps downloaded from different sources. On the app listing, you'll also see a new "Update from Play" button that will switch the update ownership from the original source to the Play Store. Pranob Mehrotra at Android Authority Assuming this functionality is just an API other application stores can also tap into, this will be a great addition to Android for power users who use multiple application stores and want to properly manage which store updates what applications. It's not something most people will ever really use or need, but if you're the kind of person who does need it - it'll become indispensable.
This is my second book written with Sphinx, after the new Learn TLA+. Sphinx uses a peculiar markup called reStructured Text (rST), which has a steeper learning curve than markdown. I only switched to it after writing a couple of books in markdown and deciding I needed something better. So I want to talk about why rst was that something. Hillel Wayne I've never liked Markdown - I find it quite arbitrary and unpleasant to look at, and the fact that there are countless variants that all differ a tiny bit doesn't help - so even though I don't actually use Markdown for anything, I always have a passing interest in possible alternatives, if only to see what other, different, and unique ideas are out there when it comes to relatively simple markup languages. Now, I'm quite sure reStructured Text isn't for me either, since I feel like it's far more powerful than Markdown, and serves a different, more complex purpose. That being said, I figured I'd highlight it here since it seems it may be interesting to some of you who work on documentation for your software projects or similar endeavours.
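For those who have never seen reStructured Text, here's a tiny sample of my own (not taken from Wayne's post) showing the bits that differ most from Markdown - underlined section titles, double-backtick literals, and the directive syntax that makes rST so extensible:

Section title
=============

A paragraph with *emphasis*, **strong emphasis**, and ``inline code``.

.. note::

   Directives like this one are where rST pulls ahead of Markdown: the same
   two-dot syntax covers admonitions, figures, tables, includes, and custom
   extensions.

`Link text <https://example.org/>`_

It's more verbose than Markdown, but every construct is part of one extensible grammar instead of a pile of dialect-specific special cases.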
Serpent OS, a new Linux distribution with a completely custom package management system written in Rust, has released its very very rough pre-alpha release. They've been working on this for four years, and they're making some interesting choices regarding packaging that I really like, at least on paper. This will of course appear to be a very rough (crap) prealpha ISO. Underneath the surface it is using the moss package manager, our very own package management solution written in Rust. Quite simply, every single transaction in moss generates a new filesystem tree (/usr) in a staging area as a full, stateless, OS transaction. When the package installs succeed, any transaction triggers are run in a private namespace (container) before finally activating the new /usr tree. Through our enforced stateless design, usr-merge, etc, we can atomically update the running OS with a single renameat2 call. As a neat aside, all OS content is deduplicated, meaning your last system transaction is still available on disk allowing offline rollbacks. Ikey Doherty Since this is only a very rough pre-alpha release, I don't have much more to say at this point, but I do think it's interesting enough to let y'all know about it. Even if you're not the kind of person to dive into pre-alphas, I think you should keep an eye on Serpent OS, because I have a feeling they're on to something valuable here.
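To make the atomic-switch part more concrete, here's a small, self-contained Rust sketch of the mechanism being described - to be clear, this is my own illustration of renameat2 with the exchange flag, not moss's actual code, and the /tmp paths are throwaway stand-ins for the real staging area and the live /usr. The only dependency is the libc crate, since it goes through the raw Linux syscall:

use std::ffi::CString;

// Flag value from <linux/fs.h>: atomically exchange the two paths.
const RENAME_EXCHANGE: libc::c_ulong = 2;

fn exchange(a: &str, b: &str) -> std::io::Result<()> {
    let a = CString::new(a).unwrap();
    let b = CString::new(b).unwrap();
    // renameat2(AT_FDCWD, a, AT_FDCWD, b, RENAME_EXCHANGE) swaps the two
    // directory entries in one atomic step: at no point is either path
    // missing or half-populated, which is what makes offline rollbacks safe.
    let ret = unsafe {
        libc::syscall(
            libc::SYS_renameat2,
            libc::AT_FDCWD as libc::c_long,
            a.as_ptr(),
            libc::AT_FDCWD as libc::c_long,
            b.as_ptr(),
            RENAME_EXCHANGE,
        )
    };
    if ret == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}

fn main() -> std::io::Result<()> {
    // Build two throwaway trees so the example actually runs, then swap them
    // the way moss swaps a freshly staged /usr with the live one.
    std::fs::create_dir_all("/tmp/moss-demo/staging/usr")?;
    std::fs::create_dir_all("/tmp/moss-demo/live/usr")?;
    exchange("/tmp/moss-demo/staging/usr", "/tmp/moss-demo/live/usr")?;
    println!("swapped staging and live trees atomically");
    Ok(())
}

The appeal of building a package manager around this primitive is that the running system never sees an intermediate state: either the old /usr is active or the new one is, and the previous tree sticks around for rollback.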
Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to need to work harder and more, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN). One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta's Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components. Then, the students were tested on how they solved the problem from memory, and the tables turned. "The ChatGPT group remembered nothing, and they all failed," recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed. Esther Shein at ACM I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of when I went to high school: I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our "work", the steps taken to get from the question to the answer, instead of only writing down the answer itself. Since I was quite "good at computers", and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers - but since I knew, and know, absolutely nothing about math, I couldn't for the life of me explain how I got to the answers. Using ChatGPT to fix your programming problem feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you aren't aware of the steps between problem and solution, you aren't actually learning anything. By using ChatGPT, you're not actually learning how to program or how to improve your skills - you're just hitting the right buttons on a graphing calculator and writing down what's on the screen, without understanding why or how. I can totally see how using ChatGPT for boring boilerplate code you've written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I'm just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.
Is machine learning, also known as "artificial intelligence", really aiding workers and increasing productivity? A study by Upwork - which, as Baldur Bjarnason so helpfully points out, sells AI solutions and hence did not promote this study on its blog as it does with its other studies - reveals that this might not actually be the case. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way. For example, survey respondents reported that they're spending more time reviewing or moderating AI-generated content (39%), invest more time learning to use these tools (23%), and are now being asked to do more work (21%). Forty percent of employees feel their company is asking too much of them when it comes to AI. Upwork research This shouldn't come as a surprise. We're in a massive hype cycle when it comes to machine learning, and we're being told it's going to revolutionise work and lead to massive productivity gains. In practice, however, it seems these tools just can't measure up to the hyped promises, and are in fact making people do less and work slower. There are countless stories of managers being told by upper management to shove machine learning into everything, from products to employee workflows, whether it makes any sense to do so or not. I know from experience as a translator that machine learning can greatly improve my productivity, but the fact that there are certain types of tasks that benefit from ML doesn't mean every job suddenly thrives with it. I'm definitely starting to see some cracks in the hype cycle, and this study highlights a major one. I hope we can all come down to earth again, and really take a careful look at where ML makes sense and where it does not, instead of giving every worker a ChatGPT account and blanket demanding massive productivity gains that in no way match the reality on the office floor. And of course, despite demanding massive productivity increases, it's not like workers are getting an equivalent increase in salary. We've seen massive productivity increases for decades now, while paychecks have not followed suit at all, and many people can actually buy less with their salary today than their parents could decades ago. The demands managers impose by introducing AI are only going to make this discrepancy even worse.
Logitech CEO Hanneke Faber talked about something called the "forever mouse", which would be, as the name implies, a mouse that customers could use for a very long time. While you may think this would mean an incredibly well-built mouse, or one that can be easily repaired, which Logitech already makes somewhat possible through a partnership with iFixIt, another option the company is thinking about is a subscription model. Yes. Faber said subscription software updates would mean that people wouldn't need to worry about their mouse. The business model is similar to what Logitech already does with video conferencing services (Logitech's B2B business includes Logitech Select, a subscription service offering things like apps, 24/7 support, and advanced RMA). Having to pay a regular fee for full use of a peripheral could deter customers, though. HP is trying a similar idea with rentable printers that require a monthly fee. The printers differ from the idea of the forever mouse in that the HP hardware belongs to HP, not the user. However, concerns around tracking and the addition of ongoing expenses are similar. Scharon Harding at Ars Technica Now, buying a mouse whose terrible software requires subscription models would still be a choice you can avoid, but my mind immediately conjured up a far darker scenario. PC makers have a long history of adding crapware to their machines in return for payments from the producers of said crapware. I can totally see what's going to happen next. You buy a brand new laptop, unbox it at home, and turn it on. Before you know it, a dialog pops up right after the crappy Windows out-of-box experience asking you to subscribe to your laptop's touchpad software in order to unlock its more advanced features like gestures. But why stop there? The keyboard of that new laptop has RGB backlighting, but if you want to change its settings, you're going to have to pay for another subscription. Your laptop's display has additional features and modes for specific types of content and more settings sliders, but you'll have to pay up to unlock them. And so on. I'm not saying this will happen, but I'm also not saying it won't. I'm sorry for birthing this idea into the world.
Microsoft has published a post-mortem of the CrowdStrike incident, and goes into great depth to describe where, exactly, the error lies, and how it could lead to such massive problems. I can't offer any insightful commentary on the technical details and code they show to illustrate all of this - I'll leave that discussion up to you - but Microsoft also spends a considerable amount of time explaining why security vendors are choosing to use kernel-mode drivers. Microsoft lists three major reasons why security vendors opt for using kernel modules, and none of them will come as a great surprise to OSNews readers: kernel drivers provide more visibility into the system than a userspace tool would, there are performance benefits, and they're more resistant to tampering. The downsides are legion, too, of course, as any crash or similar issue in kernel mode has far-reaching consequences. The goal, then, according to Microsoft, is to balance the need for greater insight, performance, and tamper resistance with stability. And while the company doesn't say it directly, this is clearly where CrowdStrike failed - and failed hard. While you would want a security tool like CrowdStrike to perform as little as possible in kernelspace, and conversely as much as possible in userspace, that's not what CrowdStrike did. They are running a lot of stuff in kernelspace that really shouldn't be there, such as the update mechanism and related tools. In total, CrowdStrike loads four kernel drivers, and much of their functionality can be run in userspace instead. It is possible today for security tools to balance security and reliability. For example, security vendors can use minimal sensors that run in kernel mode for data collection and enforcement limiting exposure to availability issues. The remainder of the key product functionality includes managing updates, parsing content, and other operations can occur isolated within user mode where recoverability is possible. This demonstrates the best practice of minimizing kernel usage while still maintaining a robust security posture and strong visibility. Windows provides several user mode protection approaches for anti-tampering, like Virtualization-based security (VBS) Enclaves and Protected Processes that vendors can use to protect their key security processes. Windows also provides ETW events and user-mode interfaces like Antimalware Scan Interface for event visibility. These robust mechanisms can be used to reduce the amount of kernel code needed to create a security solution, which balances security and robustness. David Weston, Vice President, Enterprise and OS Security at Microsoft In what is surely an unprecedented event, I agree with the CrowdStrike criticism bubbling under the surface of this post-mortem by Microsoft. Everything seems to point towards CrowdStrike stuffing way more things in kernelspace than is needed, and as such creating a far larger surface than necessary for things to go catastrophically wrong. While Microsoft obviously isn't going to openly and publicly throw CrowdStrike under the bus, it's very clear what they're hinting at here, and this is about as close to a public flogging as we're going to get.
Microsoft's post-mortem further details a ton of work Microsoft has recently done, is doing, and will soon be doing to further strengthen Windows' security, to lessen the need for kernelspace security drivers even more, including adding support for Rust to the Windows kernel, which should also aid in mitigating some common problems present in other, older programming languages (while not being a silver bullet either, of course).
Blue screens of death are not exactly in short supply on Windows machines lately, but what if you really want to cause your own kernel panic or complete system crash, just because you love that shade of crashy blue? Well, there's a tool for that called NotMyFault, developed by Mark Russinovich as part of Sysinternals. NotMyFault is a tool that you can use to crash, hang, and cause kernel memory leaks on your Windows system. It's useful for learning how to identify and diagnose device driver and hardware problems, and you can also use it to generate blue screen dump files on misbehaving systems. The download file includes 32-bit and 64-bit versions, as well as a command-line version that works on Nano Server. Chapter 7 in Windows Internals uses NotMyFault to demonstrate pool leak troubleshooting and Chapter 14 uses it for crash analysis examples. Mark Russinovich Using this tool, you can select exactly what kind of crash you want to cause, and after clicking the Crash button, your Windows computer will do exactly as it's told and crash with a lovely blue screen of death. It comes in both a GUI and a CLI version, and the latter also works on minimal Windows installations that don't have the Windows shell installed. A tool like this may seem odd, but it can be particularly useful when you're trying to troubleshoot an issue, or when learning how to properly diagnose crashes. Or, you know, you can use it to create a panic at your workplace.
Ah, another microkernel-based hobby operating system. The more, the merrier - and I mean this without a hint of sarcasm. There's definitely been a small resurgence in activity lately when it comes to small hobby and teaching operating systems, some of which are exploring some truly new ideas, and I'm definitely here for it. Today we have managarm. Some notable properties of managarm are: (i) managarm is based on a microkernel while common Desktop operating systems like Linux and Windows use monolithic kernels, (ii) managarm uses a completely asynchronous API for I/O and (iii) despite those internal differences, managarm provides good compatibility with Linux at the user space level. managarm GitHub page It's a 64-bit operating system with SMP support, an ACPI implementation, networking, USB3 support, and, as the quoted blurb details, a lot of support for Linux and POSIX. It can already run Weston, kmscon, and other things like Bash, the GNU Coreutils, and more. While not explicitly mentioned, I assume the best platform to run managarm on is most likely a virtualisation tool, and there's a detailed handbook to help you along while building and using this new operating system.
In January of 2021 I was exploring the corpus of Skins I collected for the Winamp Skin Museum and found some that seemed corrupted, so I decided to explore them. Winamp skins are actually just zip files with a different file extension, so I tried extracting their files to see what I could find. This ended up leading me down a series of wild rabbit holes. Jordan Eldredge I'm not going to spoil any of this.
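For the curious: since a Winamp skin is just a renamed zip archive, a few lines of Python are enough to peek inside one yourself before diving into the article. A quick sketch, with the skin file name as a placeholder:

```python
import zipfile

# A .wsz Winamp skin is a zip archive with a different extension;
# "example_skin.wsz" is a placeholder for a skin you've downloaded.
with zipfile.ZipFile("example_skin.wsz") as skin:
    for name in skin.namelist():  # list the bitmaps, cursors, and config files inside
        print(name)
    skin.extractall("example_skin_extracted")
```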
This blog post is a guide explaining how to setup a full-featured email server on OpenBSD 7.5. It was commissioned by a customer of my consultancy who wanted it to be published on my blog. Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task, the guide will cover what you need for a secure, functional and low maintenance email system. Solene Rapenne If you ever wanted to set up and run your own email server, this is a great way to do it. Solene, an OpenBSD developer, will guide you through setting up IMAP, POP, and webmail, an SMTP server with server-to-server encryption and hidden personal information, every possible measure to make sure your server is regarded as legitimate, and all the usual firewall and anti-spam stuff you are definitely going to need. Taking back email from Google - or even Proton, which is now doing both machine learning and Bitcoin, of all things - is probably one of the most daunting tasks for anyone willing to cut ties with as much of big tech as possible. Not only is there the technical barrier, there's also the fact that the major email providers, like Gmail or whatever Microsoft offers these days, are trying their darnedest to make self-hosting email as cumbersome as possible by labelling everything you send as spam or downright malicious. It's definitely not an easy task, but at least with guides like this there's a set of easy steps to follow to get there.
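A big part of being regarded as legitimate comes down to DNS records like SPF, DKIM, and DMARC. As a quick sanity check once your server is up, something like the sketch below - using the third-party dnspython package, with example.org standing in for your own domain - will tell you whether those TXT records are actually published:

```python
import dns.resolver  # third-party package: dnspython

DOMAIN = "example.org"  # placeholder; use your own mail domain

def txt_records(name):
    """Return all TXT records for a name, or an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(record.strings).decode() for record in answers]

spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

print("SPF record:", spf or "missing")
print("DMARC record:", dmarc or "missing")
```

Checking DKIM the same way requires knowing your selector, so I've left that out of the sketch.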
Normally I'm not that interested in reporting on news coming from OpenAI, but today is a little different - the company launched SearchGPT, a search engine that's supposed to rival Google, but at the same time, they're also kind of not launching a search engine that's supposed to rival Google. What? We're testing SearchGPT, a prototype of new search features designed to combine the strength of our AI models with information from the web to give you fast and timely answers with clear and relevant sources. We're launching to a small group of users and publishers to get feedback. While this prototype is temporary, we plan to integrate the best of these features directly into ChatGPT in the future. If you're interested in trying the prototype, sign up for the waitlist. OpenAI website Basically, before adding a more traditional web-search-like feature set to ChatGPT, the company is first breaking those features out into a separate, temporary product that users can test, before parts of it will be integrated into OpenAI's main ChatGPT product. It's an interesting approach, and with just how stupidly popular and hyped ChatGPT is, I'm sure they won't have any issues assembling a large enough pool of testers. OpenAI claims SearchGPT will be different from, say, Google or AltaVista, by employing a conversation-style interface with real-time results from the web. Sources for search results will be clearly marked - good - and additional sources will be presented in a sidebar. True to the ChatGPT-style user interface, you can "keep talking" after hitting a result to refine your search further. I may perhaps betray my still relatively modest age, but do people really want to "talk" to a machine to search the web? Any time I've ever used one of these chatbot-style user interfaces - including ChatGPT - I've found them cumbersome and frustrating, like they're just adding an obtuse layer between me and the computer, when I'd rather just be instructing the computer directly. Why try and verbally massage a stupid autocomplete into finding a link to an article I remember from a few days ago, instead of just typing in a few quick keywords? I am more than willing to concede I'm just out of touch with what people really want, so maybe this really is the future of search. I hope I can just always disable nonsense like this and just throw keywords at the problem.
Simultaneous multithreading (SMT) is a feature that lets a processor handle instructions from two different threads at the same time. But have you ever wondered how this actually works? How does the processor keep track of two threads and manage its resources between them? In this article, we're going to break it all down. Understanding the nuts and bolts of SMT will help you decide if it's a good fit for your production servers. Sometimes, SMT can turbocharge your system's performance, but in other cases, it might actually slow things down. Knowing the details will help you make the best choice. Abhinav Upadhyay Some light reading for the (almost) weekend.
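If you want to check what your own machines are doing before diving into the article, here's a tiny sketch for Linux systems, where the kernel exposes its SMT state through sysfs (the exact files can vary by kernel version, so treat the paths as an assumption to verify):

```python
from pathlib import Path

def smt_status():
    """Report the kernel's view of SMT on Linux via sysfs, if the files exist."""
    base = Path("/sys/devices/system/cpu/smt")
    if not base.is_dir():
        return "SMT state not exposed on this kernel"
    control = (base / "control").read_text().strip()  # e.g. "on", "off", "notsupported"
    active = (base / "active").read_text().strip()    # "1" if sibling threads are online
    return f"control={control}, active={active}"

print(smt_status())
```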
What started last year as a handful of reports about instability with Intel's Raptor Lake desktop chips has, over the last several months, grown into a much larger saga. Facing their biggest client chip instability impediment in decades, Intel has been under increasing pressure to figure out the root cause of the issue and fix it, as claims of damaged chips have stacked up and rumors have swirled amidst the silence from Intel. But, at long last, it looks like Intel's latest saga is about to reach its end, as today the company has announced that they've found the cause of the issue, and will be rolling out a microcode fix next month to resolve it. Ryan Smith at AnandTech It turns out the root cause of the problem is "elevated operating voltages", caused by a buggy algorithm in Intel's own microcode. As such, it's at least fixable through a microcode update, which Intel says it will ship sometime mid-August. AnandTech, my one true source for proper reporting on things like this, is not entirely satisfied, though, as they state microcode is often used to just cover up the real root cause that's located much deeper inside the processor, and as such, Intel's explanation doesn't actually tell us very much at all. Quite coincidentally, Intel also experienced a manufacturing flaw with a small batch of very early Raptor Lake processors. An "oxidation manufacturing flaw" found its way into a small number of early Raptor Lake processors, but the company claims it was caught early and shouldn't be an issue any more. Of course, for anyone experiencing issues with their expensive Intel processors, this will linger in the back of their minds, too. Not exactly a flawless launch for Intel, but it seems its only real competitor, AMD, is also experiencing issues, as the company has delayed the launch of its new Ryzen 9000 chips due to quality issues. I'm not at all qualified to make any relevant statements about this, but with the recent launch of the Snapdragon X Elite and X Plus chips, these issues couldn't come at a worse time for Intel and AMD.
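Once the fix ships, Linux users can at least see which microcode revision their CPU is actually running with a few lines like the sketch below; the revision number of the fixed microcode isn't public in this post, so you'd compare against whatever Intel publishes:

```python
def microcode_revisions():
    """Collect the microcode revision(s) Linux reports in /proc/cpuinfo (x86 only)."""
    revisions = set()
    with open("/proc/cpuinfo") as cpuinfo:
        for line in cpuinfo:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

print("loaded microcode revision(s):", microcode_revisions() or "not reported")
```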
Choosing an operating system for new technology can be crucial for the success of any project. Years down the road, this decision will continue to inform the speed and efficiency of development. But should you build the infrastructure yourself or rely on a proven system? When faced with this decision, many companies have chosen, and continue to choose, FreeBSD. Few operating systems offer the immediate high performance and security of FreeBSD, areas where new technologies typically struggle. Having a stable and secure development platform reduces upfront costs and development time. The combination of stability, security, and high performance has led to the adoption of FreeBSD in a wide range of applications and industries. This is true for new startups and larger established companies such as Sony, Netflix, and Nintendo. FreeBSD continues to be a dependable ecosystem and an industry-leading platform. FreeBSD Foundation A FreeBSD marketing document highlighting FreeBSD's strengths is, of course, hardly a surprise, but considering it's fighting what you could generously call an uphill battle against the dominance of Linux, it's still interesting to see what, exactly, FreeBSD highlights as its strengths. It should come as no surprise that its licensing model - the simple BSD license - is mentioned first and foremost, since it's a less cumbersome license to deal with than something like the GPL. It's a philosophical debate we won't be concluding any time soon, but the point still stands. FreeBSD also highlights that it's apparently quite easy to upstream changes to FreeBSD, making sure that changes benefit everyone who uses FreeBSD. While I can't vouch for this, it does seem reasonable to assume that it's easier to deal with the integrated, one-stop shop that is FreeBSD, compared to the hodgepodge of hundreds or thousands of groups whose software together makes up a Linux system. Like I said, this is a marketing document, so do keep that in mind, but I still found it interesting.
Not everything made by KDE uses C++. This is probably obvious to some people, but it's worth mentioning nevertheless. And I don't mean this as just "well duh, KDE uses QtQuick which is written with C++ and QML". I also don't mean this as "well duh, Qt has a lot of bindings to other languages". I mean explicitly "KDE has tools written primarily in certain languages and specialized formats". Thiago Sueto If you ever wanted to contribute to KDE but weren't sure if your preferred programming language or tools were relevant, this is a great blog post detailing how you can contribute if you are familiar with any of the following: Python, Ruby, Perl, Containerfile/Docker/Podman, HTML/SCSS/JavaScript, WebAssembly, Flatpak/Snap, CMake, Java, and Rust. A complex, large project like KDE needs people with a wide variety of skills, so it's definitely not just C++. An excellent place to start.
The assault on a user's freedom to install whatever they want on what is supposed to be their phone continues. This time, it's Samsung adding an additional blocker to users installing applications from outside the Play Store and its own mostly useless Galaxy Store. Technically, Android already blocks sideloading by default at an operating system level. The permission that's needed to silently install new apps without prompting the user, INSTALL_PACKAGES, can only be granted to preinstalled app stores like the Google Play Store, and it's granted automatically to apps that request it. The permission that most third-party app stores end up using, REQUEST_INSTALL_PACKAGES, has to be granted explicitly by the user. Even then, Android will prompt the user every time an app with this permission tries to install a new app. Samsung's Auto Blocker feature takes things a bit further. The feature, first introduced in One UI 6.0, fully blocks the installation of apps from unauthorized sources, even if those sources were granted the REQUEST_INSTALL_PACKAGES permission. Mishaal Rahman I'm not entirely sure why Samsung felt the need to add an additional, Samsung-specific blocking mechanism, but at least for now, you can turn it off in the Settings application. This means that in order to install an application from outside of the Play Store and the Galaxy Store on brand new Samsung phones - the ones shipping with One UI 6.1.1 - you need to both grant the regular Android permission to do so and turn off this nag feature. Having two variants of every application on your Samsung phone wasn't enough, apparently.
This story just never ever ends. After delays, changes in plans, more delays, we now have more changed plans. After years of stalling, Google has now announced it is, in fact, not going to deprecate third-party cookies in Chrome by default. In light of this, we are proposing an updated approach that elevates user choice. Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they'd be able to adjust that choice at any time. We're discussing this new path with regulators, and will engage with the industry as we roll this out. Anthony Chavez Google remains unclear about what, exactly, users will be able to choose between. The consensus seems to be that users will be able to choose between retaining third-party cookies and turning them off, but that's based on a statement by the British Competition and Markets Authority, and not on a statement from Google itself. It seems reasonable to assume the CMA knows what it's talking about, but with a company like Google you never know what's going to happen tomorrow, let alone a few months from now. Both Safari and Firefox made this move ages ago, but it's taking Google and Chrome a lot longer to deal with this issue, because Google needs to find different ways of tracking you that do not use third-party cookies. Google's own testing with Privacy Sandbox, Chrome's sarcastically-named alternative to third-party cookies, shows that it seems to perform reasonably well, which should definitely raise some alarm bells about just how private it really is. Regardless, I doubt this saga will be over any time soon.
A story that's been persistently making the rounds since the CrowdStrike event is that while several airline companies were affected in one way or another, Southwest Airlines escaped the mayhem because they were still using Windows 3.1. It's a great story that fits the current zeitgeist about technology and its role in society, underlining that what is claimed to be technological progress is nothing but trouble, and that it's better to stick with the old. At the same time, anybody who dislikes Southwest Airlines can point and laugh at the bumbling idiots working there for still using Windows 3.1. It's like a perfect storm of technology news clickbait and ragebait. Too bad the whole story is nonsense. But how could that be? It's widely reported by reputable news websites all over the world, shared on social media like a strain of the common cold, and nobody seems to question it or doubt the veracity of the story. It seems that Southwest Airlines running on an operating system from 1992 is a perfectly believable story to just about everyone, so nobody is questioning it or wondering if it's actually true. Well, I did, and no, it's not true. Let's start with the actual source of the claim that Southwest Airlines was unaffected by CrowdStrike because they're still using Windows 3.11 for large parts of their primary systems. This claim is easily traced back to its origin - a tweet by someone called Artem Russakovskii, stating that "the reason Southwest is not affected is because they still run on Windows 3.1". This tweet formed the basis for virtually all of the stories, but it contains no sources, no links, no background information, nothing. It was literally just this one line. It turned out to be a troll tweet. A reply to the tweet by Russakovskii a day later made that very clear: "To be clear, I was trolling last night, but it turned out to be true. Some Southwest systems apparently do run Windows 3.1. lol." However, the article linked in that reply doesn't cite any sources either, so we're right back where we started. After quite a bit of digging - that is, clicking a few links and like 3 minutes of searching online - following the various references and links back to their sources, I managed to find where all these stories actually come from, and arrive at the root claim that spawned all the others. It's from an article by The Dallas Morning News, titled "What's the problem with Southwest Airlines scheduling system?" At the end of last year, Southwest Airlines' scheduling system had a major meltdown, leading to a lot of cancelled flights and stranded travelers just around the Christmas holidays. Of course, the media wanted to know what caused it, and that's where this Dallas Morning News article comes from. In it, we find the paragraphs that started the story that Southwest Airlines is still using Windows 3.1 (and Windows 95!): Southwest uses internally built and maintained systems called SkySolver and Crew Web Access for pilots and flight attendants. They can sign on to those systems to pick flights and then make changes when flights are canceled or delayed or when there is an illness. "Southwest has generated systems internally themselves instead of using more standard programs that others have used," Montgomery said. "Some systems even look historic like they were designed on Windows 95."
SkySolver and Crew Web Access are both available as mobile apps, but those systems often break down during even mild weather events, and employees end up making phone calls to Southwest's crew scheduling help desk to find better routes. During periods of heavy operational trouble, the system gets bogged down with too much demand. Kyle Arnold at The Dallas Morning News That's it. That's where all these stories can trace their origin to. These few paragraphs do not say that Southwest is still using ancient Windows versions; they just state that the systems Southwest developed internally, SkySolver and Crew Web Access, "look historic like they were designed on Windows 95". The fact that they are also available as mobile applications should further make it clear that no, these applications are not running on Windows 3.1 or Windows 95. Southwest pilots and cabin crews are definitely not carrying around pocket laptops from the '90s. These paragraphs were then misread, misunderstood, and mangled in a game of social media and bad reporting telephone, and here we are. The fact that nobody seems to have taken the time to click through a few links to find the supposed source of these claims, instead focusing on cashing in on the clicks and rage these stories would elicit, is a rather damning indictment of the state of online (tech) media. Many of the websites reporting on these stories are part of giant media conglomerates, have a massive number of paid staff, and they're being outdone by a dude in the Arctic with a small Patreon, minimal journalism training, and some common sense. This story wasn't hard to debunk - a few clicks and a few minutes of online searching is all it took. Ask yourself - why do these massive news websites not even perform the bare minimum?
"Dell UNIX? I didn't know there was such a thing." A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery cafe. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn't reveal useful history of Dell UNIX, so here's my version, a summary of the three major development releases. Charles H. Sauer I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old - 2008 - there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox specifically. What was Dell UNIX? In the late '80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, included a DOS virtual machine called Merge to run, well, DOS applications, and offered compatibility with Microsoft Xenix. It might seem strange to us today, but Microsoft's Xenix was incredibly popular at the time, and compatibility with it was a big deal. The Olympic project turned out to be too ambitious on the hardware front so it got cancelled, but the Dell UNIX project continued to be developed. The next release, Dell System V Release 4, was a massive release, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more. It also contained something Windows wouldn't be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful, it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release. Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86. Charles H. Sauer Sadly, this would also prove to be the last release of Dell UNIX. After a few more point releases, the brass at Dell had realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled since it was costing them more than it was earning them.
As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I'm definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.
This is an attempt at building an OpenBSD desktop that could be used by newcomers or by people that don't care about tinkering with computers and just want a working daily driver for general tasks. Somebody will obviously need to know a bit of UNIX but we'll try to limit it to the minimum. Joel Carnat An excellent, to-the-point, no-nonsense guide about turning a default OpenBSD installation into a desktop operating system running Xfce. You definitely don't need intimate, arcane knowledge of OpenBSD to follow along with this one.
Only yesterday, I mentioned that one of the main reasons I decided to switch back to Fedora from OpenBSD was performance issues - and one of them was definitely the lack of hardware acceleration for video decoding/encoding. The lack of such technology means that decoding/encoding video is done by the processor, which is far less efficient than letting your GPU do it, resulting in problems like stuttering and tearing, as well as a drastic reduction in battery life. Well, that's changed now. Thanks to the work of, well, many, a major commit has added hardware accelerated video decoding/encoding to OpenBSD. Hardware accelerated video decode/encode (VA-API) support is beginning to land in #OpenBSD -current. libva has been integrated into xenocara with the Intel userland drivers in the ports tree. AMD requires Mesa support, hence the inclusion in base. A number of ports will be adjusted to enable VA-API support over time, as they are tested. Bryan Steele This is great news, and a major improvement for OpenBSD and the community. Apparently, performance in Firefox is excellent, and with simply watching video on YouTube being something a lot of people do with their computers - especially laptops - anyone using OpenBSD is going to benefit immensely from this work.
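If you're on -current and want to see whether your GPU driver actually exposes any VA-API profiles, the usual tool for that on Linux is vainfo from libva-utils; whether and where it's packaged on OpenBSD right now is an assumption on my part, but the check itself is trivial:

```python
import shutil
import subprocess

# vainfo (from libva-utils) prints the VA-API profiles and entrypoints the
# installed driver exposes; its availability on OpenBSD -current is assumed here.
if shutil.which("vainfo"):
    subprocess.run(["vainfo"], check=False)
else:
    print("vainfo not found - install libva-utils (if packaged) or check your driver setup")
```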
NetWare 386, or 3.0, was a very limited release, with very few copies sold before it was superseded by newer versions. As such, it was considered lost to time, since it was only sold to large corporations - for a massive price tag of almost 8000 dollars - who obviously didn't care about software preservation. There are no original disks left, but a recent "warez" release has made the software available once again. As always, pirates save the day.
The Macintosh was intended to be different in many ways. One of them was its file system, which was designed for each file to consist of two forks, one a regular data fork as in normal file systems, the other a structured database of resources, the resource fork. Resources came to be used to store a lot of standard structured data, such as the specifications for and contents of alerts and dialogs, menus, collections of text strings, keyboard definitions and layouts, icons, windows, fonts, and chunks of code to be used by apps. You could extend the types of resource supported by means of a template, itself stored as a resource, so developers could define new resource types appropriate to their own apps. Howard Oakley And using ResEdit, a tool developed by Apple, you could manipulate the various resources to your heart's content. I never used the classic Mac OS when it was current, and only play with it as a retro platform every now and then, so I never used ResEdit when it was the cool thing to do. Looking back, though, and learning more about it, it seems like just another awesome capability that Apple lost along the way towards modern Apple. Perhaps I should fire up one of my old Macs and see with my own eyes what I can do with ResEdit.
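As an aside, resource forks still exist on modern macOS, tucked behind a special path suffix. If you have an old file with a resource fork lying around, a sketch like this (the file name is a placeholder) will show whether there's anything in it:

```python
import os

# On macOS, a file's resource fork (if it has one) can be read through the
# special "..namedfork/rsrc" path; "OldGame" below is just a placeholder name.
rsrc_path = os.path.join("OldGame", "..namedfork", "rsrc")

if os.path.exists(rsrc_path) and os.path.getsize(rsrc_path) > 0:
    print("resource fork size:", os.path.getsize(rsrc_path), "bytes")
else:
    print("no resource fork (or an empty one) on this file")
```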
In 2018, we announced the deprecation and transition of Google URL Shortener because of the changes we've seen in how people find content on the internet, and the number of new popular URL shortening services that emerged in that time. This meant that we no longer accepted new URLs to shorten but that we would continue serving existing URLs. Today, the time has come to turn off the serving portion of Google URL Shortener. Please read on below to understand more about how this will impact you if you're using Google URL Shortener. Sumit Chandel and Eldhose Mathokkil Babu It should cost Google nothing to keep this running for as long as Google exists, and yet, this, too, has to be killed off and buried in the Google Graveyard. We'll be running into non-resolving Google URL Shortener links for decades to come, both on large, popular websites as well as on obscure forums and small websites. You'll find a solution to some obscure problem a decade from now, but the links you need will be useless, and you'll rightfully curse Google for being so utterly petty. Relying on anything Google that isn't directly serving its main business - ads - is a recipe for disaster, and will cause headaches down the line. Things like Gmail, YouTube, and Android are most likely fine, but anything else consumer-focused is really a lottery.
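If you maintain a website or an archive with goo.gl links scattered through it, now is the time to expand them while they still resolve. A minimal sketch, assuming the shortener answers a plain HEAD request with a redirect (the example link is a placeholder):

```python
import http.client
from urllib.parse import urlparse

def expand(short_url):
    """Follow a short link one hop and return the Location header it redirects to."""
    parts = urlparse(short_url)
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    response = conn.getresponse()
    return response.getheader("Location")  # None if the link no longer redirects

# Placeholder short link; swap in the goo.gl URLs you want to preserve.
print(expand("https://goo.gl/example"))
```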