The operating system I'm not cool enough to run has pushed out a new release: 9front "THIS TIME DEFINITELY" is now available. 9front is a fork of Plan 9, created after Plan 9 languished at Bell Labs. This release enables gefs, the new file system, in the installer, "ip/ipconfig now supports dhcpv6 dynamic allocations and handles prefix expirations", and it comes with some smaller changes, too, of course. Despite every piece of evidence to the contrary, I am simply not cool enough to run 9front. Maybe one day they'll notice me, and I'll get invited to the cool table where the Puffs eat lunch. Who doesn't want to ring a bell in the headmaster's office at midnight?
I believe consumers, as a right, should be able to install software of their choosing on any computing device that is owned outright. This should apply regardless of the computer's form factor. In addition to traditional computing devices like PCs and laptops, this right should apply to devices like mobile phones, "smart home" appliances, and even industrial equipment like tractors. In 2025, we're ultra-connected via a network of devices we do not have full control over. Much of this has to do with how companies lock their devices' bootloaders, prevent root access, and prohibit installation of software that is not explicitly sanctioned through approval in their own distribution channels. We should really work on changing that. Medhir Bhargava Obviously, this is preaching to the choir here on OSNews. I agree with Bhargava 100%. It should be illegal for any manufacturer of computing devices - with a possible exception for, say, things like medical implants, certain aspects of car control units, and so on - to lock down and/or restrict owners' ability to install whatever software they want, run whatever code they want, and install whatever operating system they want on the devices they own. Computers are interwoven into the very fabric of every aspect of our society, and having them under the sole control of the biggest megacorporations in the world is utterly dystopian, and wildly dangerous. Personally, I would take it a step further: any and all code that runs on products sold must be open. Not necessarily open source, but at the very least open, so that it can be inspected when malice is suspected. This way, society can make sure that the tech billionaire oligarchs giving Nazi salutes aren't in full, black-box control over our devices. Secrecy as a means of corporate control is incredibly dangerous, and forcing all code to be open is the perfect way to combat this. Copyright is more than enough intellectual property protection for code.
The odds of this happening are, of course, slim, especially with the aforementioned tech billionaire oligarchs giving Nazi salutes effectively running the most powerful military in human history. Reason is in short supply these days, and I doubt that's going to change any time soon.
How do you fit a 250kB dictionary in 64kB of RAM and still perform fast lookups? For reference, even with modern compression techniques like gzip -9, you can't compress this file below 85kB. In the 1970s, Douglas McIlroy faced this exact challenge while implementing the spell checker for Unix at AT&T. The constraints of the PDP-11 computer meant the entire dictionary needed to fit in just 64kB of RAM. A seemingly impossible task. Abhinav Upadhyay They still managed to do it, but had to employ some incredibly clever tricks to make it work, and make it work fast. Such skillful engineers interested in optimising and eking every last bit of performance out of underpowered hardware still exist today, but they're not in any position to make lasting changes at any of the companies defining our technology today. Why spend money on skilled engineers, when you can just throw cheap hardware at the problem? I wonder just how many resources the spellchecking feature in Word or LibreOffice Writer takes up.
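McIlroy's actual scheme hashed every word and stored compressed differences between the sorted hash codes, but the core trade-off - accepting a tiny false-positive rate in exchange for huge space savings - is easy to sketch with a Bloom filter, a related probabilistic structure (this is an illustration, not the original algorithm):

```python
import hashlib

class BloomFilter:
    """Probabilistic set: no false negatives, small false-positive rate."""
    def __init__(self, size_bits, num_hashes):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, word):
        # Derive k bit positions from a single SHA-256 digest.
        digest = hashlib.sha256(word.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, word):
        for pos in self._positions(word):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, word):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(word))

# Sized for roughly 25,000 words in 32 kB of bits, with k=7 hashes.
bf = BloomFilter(size_bits=32 * 1024 * 8, num_hashes=7)
for w in ["unix", "spell", "dictionary", "kernel"]:
    bf.add(w)

print("unix" in bf)   # True: an added word is always found
```

At around 11 bits per word the false-positive rate stays below 1%; a misspelling that occasionally slips through is annoying but harmless, which is exactly the bet the original spell checker made.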
GrapheneOS (written GOS from now on) is an Android-based operating system that focuses on security. It is only compatible with Google Pixel devices for multiple reasons: availability of hardware security components, long-term support (series 8 and 9 are supported at least 7 years after release), and the hardware has a good quality/price ratio. The goal of GOS is to provide users a lot more control over what their smartphone is doing. A main profile is used by default (the owner profile), but users are encouraged to do all their activities in a separate profile (or multiple profiles). This may remind you of the Qubes OS workflow, although it does not translate entirely here. Profiles cannot communicate with each other, encryption is done per profile, and some permissions can be assigned per profile (installing apps, running applications in the background when a profile is not used, using the SIM...). This is really effective for privacy or security reasons (or both); you can have a different VPN per profile if you want, or use a different Google Play login, different application sets, whatever! The best feature here, in my opinion, is the ability to completely stop a profile so you are sure it does not run anything in the background once you exit it. Solene Rapenne I switched to GrapheneOS on my Pixel 8 Pro as part of my process to cleanse myself of as much Big Tech as possible, and I've been incredibly happy with it. The additional security and privacy control GrapheneOS brings is amazing, and the fact it opted for sandboxed Google Play Services basically means there are no compatibility issues, unlike when using microG, where compatibility problems are a fact of life. GrapheneOS' security and other updates are on par with or even faster than the stock Google Pixel's Android, and the overall user experience is virtually identical to stock Android.
The only downside is the reliance on Pixel devices - it's an understandable choice, but does mean giving money to Google if you don't already own a Pixel. A workaround, if you will, is to buy a used or refurbished Pixel, but that may not always be an option either. For me personally, I'll be sticking with my Pixel 8 Pro for a long time, but if it were to break, I'd most likely go the used Pixel route to avoid enriching Google. For pretty much anyone reading OSNews, GrapheneOS would be a great choice, and if you already have a Pixel, I strongly urge you to consider switching.
Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors, the new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency, the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler. Linux 6.13 also brings more Rust programming language infrastructure and more. Michael Larabel A big release, with a ton of new features. It'll make its way to your distribution soon enough.
It's been about 18 months, but we've got a new release for MorphOS, the Amiga-like operating system for PowerPC Macs and some other PowerPC-based machines. Going through the list of changes, it seems MorphOS 3.19 focuses heavily on fixing bugs and addressing issues, rather than major new features or earth-shattering changes. Of note are several small but important updates, like updated versions of OpenSSL and OpenSSH, as well as a ton of new filetype definitions - and so much more. Having a release focused on fixing bugs and addressing smaller issues isn't exactly a bad thing though - I've used MorphOS on my 17'' 1.25GHz PowerBook G4 often enough to know MorphOS is quite complete, stable, and a ton of fun to use, and much more capable than it has any right to be considering what must be its relatively small developer team and user base. That being said, I do wish MorphOS was available on hardware newer than 20-year-old PowerPC Macs, because as much as I like me some classic hardware, the world's moving on and even basic web browsing requires much more performant hardware now. Maybe I should try and buy one of the supported Apple PowerPC G5 machines to see just how much better MorphOS runs on that than on my G4.
Google says it has begun requiring users to turn on JavaScript, the widely used programming language to make web pages interactive, in order to use Google Search. In an email to TechCrunch, a company spokesperson claimed that the change is intended to "better protect" Google Search against malicious activity, such as bots and spam, and to improve the overall Google Search experience for users. The spokesperson noted that, without JavaScript, many Google Search features won't work properly and that the quality of search results tends to be degraded. Kyle Wiggers at TechCrunch One of the stranger compliments you could give Google Search was that it would load even on the weirdest or oldest browsers, simply because it didn't require JavaScript. Whether I loaded Google Search in the JS-less Dillo, Blazer on PalmOS, or the latest Firefox, I'd end up with a search box I could type something into and search. Sure, beyond that the web would be, shall we say, problematic, but at least Google Search worked. With this move, Google will end such compatibility, which was most likely a side effect rather than policy. I know a lot of people lament the widespread reliance on and requirement to have JavaScript, and it surely can be and is abused, but it's also the reality of people asking more and more of their tools on the web. I would love it if websites gracefully degraded on browsers without JavaScript, but that's simply not a realistic thing to expect, sadly. JavaScript is part of the web now - and has been for a long time - and every website using or requiring JavaScript makes the web no more or less "open" than the web requiring any of the other myriad of technologies, like more recent versions of TLS. Nobody is stopping anyone from implementing support for JS.
I'm not a proponent of JavaScript or anything like that - in fact, I'm annoyed I can't load our WordPress backend in browsers that don't have it, but I'm just as annoyed that I can't load websites on older machines just because they don't have later versions of TLS. "Technology progresses", and as long as the technologies being regarded as "progress" are not closed or encumbered by patents, I can be annoyed by it, but I can't exactly be against it. The idea that it's JavaScript making the web bad and not shit web developers and shit managers and shit corporations sure is one hell of a take.
We've got a new Dillo release for you this weekend! We added SVG support for math formulas and other simple SVG images by patching the nanosvg library. This is especially relevant for Wikipedia math articles. We also added optional support for WebP images via libwebp. You can use the new option ignore_image_formats to ignore image formats that you may not trust (libwebp had some CVEs recently). Dillo website This release also comes with some UI tweaks, like the ability to move the scrollbar to the left, use the scrollbar to go back and forward exactly one page, the ability to define custom link actions in the context menu, and more - including the usual bug fixes, of course. Once the pkgsrc bug on HP-UX I discovered and reported is fixed, Dillo is one of the first slightly more complex packages I intend to try and build on HP-UX 11.11.
Now, if you have been following the development of EndBASIC, this is not surprising. The defining characteristic of the EndBASIC console is that it's hybrid as the video shows. What's newsworthy, however, is that the EndBASIC console can now run directly on a framebuffer exposed by the kernel. No X11 nor Wayland in the picture (pun intended). But how? The answer lies in NetBSD's flexible wscons framework, and this article dives into what it takes to render graphics on a standard Unix system. I've found this exercise exciting because, in the old days, graphics were trivial (mode 13h, anyone?) and, for many years now, computers use framebuffer-backed textual consoles. The kernel is obviously "rendering graphics" by drawing individual letters; so why can't you, a user of the system, do so too? Julio Merino This opens up a lot of interesting use cases and fun hacks for developers to implement in their CLI applications. All the code in the article is - as usual - way over my head, but will be trivial for quite a few of you. The mentioned EndBASIC project, created by the author, Julio Merino, is fascinating too: EndBASIC is an interpreter for a BASIC-like language and is inspired by Amstrad's Locomotive BASIC 1.1 and Microsoft's QuickBASIC 4.5. Like the former, EndBASIC intends to provide an interactive environment that seamlessly merges coding with immediate visual feedback. Like the latter, EndBASIC offers higher-level programming constructs and strong typing. EndBASIC's primary goal is to offer a simplified and restricted DOS-like environment to learn the foundations of programming and computing, and focuses on features that quickly reward the learner. These include a built-in text editor, commands to manipulate the screen, commands to interact with shared files, and even commands to interact with the hardware of a Raspberry Pi.
EndBASIC website Being able to run this on a machine without having to load either X or Wayland is a huge boon, and makes it fast and accessible on quite a lot of hardware where a full X or Wayland setup would be cumbersome or slow.
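The core trick the article builds on is that a framebuffer is just a flat array of pixels, and "drawing a letter" means blitting a bitmap glyph into the right rows and columns. A minimal in-memory sketch in Python (no wscons or real device involved; the 5x7 glyph and buffer dimensions are made up purely for illustration):

```python
# A tiny 1-bit "framebuffer": one byte per pixel, all pixels off.
WIDTH, HEIGHT = 80, 24

# 5x7 bitmap for the letter 'A', one int per row; each bit is a pixel.
GLYPH_A = [0b01110, 0b10001, 0b10001, 0b11111, 0b10001, 0b10001, 0b10001]

def draw_glyph(fb, glyph, x, y, width=5):
    """Blit a bitmap glyph into framebuffer fb at pixel position (x, y)."""
    for row, bits in enumerate(glyph):
        for col in range(width):
            if bits & (1 << (width - 1 - col)):
                fb[(y + row) * WIDTH + (x + col)] = 1

fb = bytearray(WIDTH * HEIGHT)
draw_glyph(fb, GLYPH_A, x=2, y=1)

# Render the top-left corner as text to "see" the letter.
for row in range(9):
    print("".join("#" if fb[row * WIDTH + c] else "." for c in range(10)))
```

On a real system the only difference is where `fb` comes from: instead of a `bytearray`, you mmap the device the kernel exposes, and writes become visible pixels.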
Up until now, if you were subscribed to Office 365 - I think it's called Microsoft 365 now - and you wanted the various "AI" Copilot features, you needed to pay $20 extra. Well, that's changing, as Microsoft is now adding these features to Microsoft 365 by default, while raising the prices for every subscriber by $3 per month. It seems not enough people were interested in paying $20 per month extra for "AI" features in Office, so Microsoft has to force everyone to pay up. It's important to note, though, that your usage of the features is limited by how many "AI credits" you have, to really nail that slot machine user experience, and you're only getting a limited number of those per month. Luckily, existing Microsoft 365 subscribers can opt out of these new features and thus avoid the price increase, which is a genuinely welcome move by Microsoft. New subscribers, however, will not be able to opt out. Finally, we understand that our customers have a variety of needs and budgets, so we're committed to providing options. Existing subscribers with recurring billing enabled with Microsoft can switch to plans without Copilot or AI credits like our Basic plan, or, for a limited time, to new Personal Classic or Family Classic plans. These plans will continue to be maintained as they exist today, but for certain new innovations and features you'll need a Microsoft 365 Personal and Family subscription. Bryan Rognier at the Microsoft blog Microsoft wants to spread the immense cost of running datacentres for "AI" to everyone, whether you want to use these features or not. When not enough people want to opt into "AI" and pay extra, the only other option is to just make everyone pay, whether they want to or not. Still, the opt-out for existing subscribers is nice, and if you are one and don't want to pay $35 per year extra, don't forget to opt out.
Venture is a cross-platform viewer for Windows Event Logs (.evtx files). Built with Tauri, it is intended as a fast, standalone tool for quickly parsing and slicing Windows Event Log files during incident response, digital forensics, and CTF competitions. Venture GitHub page Neat tool. It makes sense that it would be possible to build third-party viewers for Windows event logs, but I never stopped to think about it and just defaulted to the one built into Windows.
Google has told the EU it will not add fact checks to search results and YouTube videos or use them in ranking or removing content, despite the requirements of a new EU law, according to a copy of a letter obtained by Axios. In a letter written to Renate Nikolay, the deputy director general under the content and technology arm at the European Commission, Google's global affairs president Kent Walker said the fact-checking integration required by the Commission's new Disinformation Code of Practice "simply isn't appropriate or effective for our services" and said Google won't commit to it. Sara Fischer at Axios Imagine if any one of us ordinary folk told the authorities we were just not going to follow the law. We're not going to pay our taxes because tax law "simply isn't appropriate or effective for our services". We're not going to follow traffic laws and regulations because doing so "simply isn't appropriate or effective for our services". We're not going to respect property laws because doing so "simply isn't appropriate or effective for our services". We'd be in trouble within a heartbeat. We'd be buried in fines, court cases, and eventually, crippling debt, bankruptcy, and most likely end up in prison. The arrogance with which these American tech giants willfully declare themselves to be above EU laws and regulations is appalling, and really should have far more consequences than it does right now. Executives should be charged and arrested, products and services banned and taken off the shelves, and eventually, the companies themselves should be banned from operating within the EU altogether. Especially with the incoming regime in the US, which will most likely grant the tech giants even more freedom to do as they please, the EU needs to start standing up against this sort of gross disrespect. The consequences for a corporation knowingly breaking the law should be just as grave as for an individual citizen knowingly breaking the law.
OSNews Sponsor OS-SCi is educating the next generation of FOSS engineers, and as part of their coursework, they're looking for worthy open source projects to which they can contribute their time and effort. In addition to the work they provide during their studies, these volunteers will be encouraged to continue to be involved after they finish their courses and proceed into the workforce. If you are involved in an open source project and would like some help, please register here. Also, please leave a comment below to share some details about your project with the OSNews community. Perhaps we can use this forum to bring some OSNews readers together as long-term collaborators. In other news, OS-SCi is organizing an international Open Source Hackathon on 21-22 February, online and on multiple university campuses. Register for the hackathon here. Read more details here.
It seems we're getting a glimpse at the next stick Microsoft will be using to push people to buy new PCs (we're all rich, according to Microsoft) or upgrade to Windows 11. In a blog post extolling the virtues of a free upgrade from Windows 10 to 11, the company announced that with the end of support for Windows 10, Microsoft will also stop supporting Office applications - otherwise known as Office 365 - on Windows 10. Lastly, Microsoft 365 Apps will no longer be supported after October 14, 2025, on Windows 10 devices. To use Microsoft 365 Applications on your device, you will need to upgrade to Windows 11. Microsoft's Margaret Farmer Of course, the applications won't stop working on Windows 10 right away after that date, but Microsoft won't be fixing any security issues, bugs, or other issues that might (will) come up. It reads like a threat to Windows users - upgrade by buying a new PC you probably can't afford, or not only use an insecure version of Windows, but also insecure Office applications. I doubt it'll have much of an impact on the staggering number of people still using Windows 10 - more than 60% of Windows users - so I'm sure Microsoft has more draconian plans up its sleeve to push people to upgrade.
If you don't want OpenAI's, Apple's, Google's, or other companies' crawlers sucking up the content on your website, there isn't much you can do. They generally don't care about the venerable robots.txt, and while people like Aaron Swartz were legally bullied into suicide for downloading scientific articles using a guest account, corporations are free to take whatever they want, permission or no. If corporations don't respect us, why should we respect them? There are ways to fight back against these scrapers, and the latest is especially nasty in all the right ways. This is a tarpit intended to catch web crawlers. Specifically, it's targeting crawlers that scrape data for LLMs - but really, like the plants it is named after, it'll eat just about anything that finds its way inside. It works by generating an endless sequence of pages, each with dozens of links that simply go back into the tarpit. Pages are randomly generated, but in a deterministic way, causing them to appear to be flat files that never change. Intentional delay is added to prevent crawlers from bogging down your server, in addition to wasting their time. Lastly, optional Markov-babble can be added to the pages, to give the crawlers something to scrape up and train their LLMs on, hopefully accelerating model collapse. ZADZMO.org You really have to know what you're doing when you set up this tool. It is intentionally designed to cause harm to LLM web crawlers, but it makes no distinction between LLM crawlers and, say, search engine crawlers, so it will definitely get you removed from search results. On top of that, because Nepenthes is designed to feed LLM crawlers what they're looking for, they're going to love your servers and thus spike your CPU load constantly. I can't reiterate enough that you should not be using this if you don't know what you're doing.
Setting it all up is fairly straightforward, but of note is that if you want to use the Markov generation feature, you'll need to provide your own corpus for it to feed from. None is included to make sure every installation of Nepenthes will be different and unique because users will choose their own corpus to set up. You can use whatever texts you want, like Wikipedia articles, royalty-free books, open research corpuses, and so on. Nepenthes will also provide you with statistics to see what cats you've dragged in. You can use Nepenthes defensively to prevent LLM crawlers from reaching your real content, while also collecting the IP ranges of the crawlers so you can start blocking them. If you've got enough bandwidth and horsepower, you can also opt to use Nepenthes offensively, and you can have some real fun with this. Let's say you've got horsepower and bandwidth to burn, and just want to see these AI models burn. Nepenthes has what you need: Don't make any attempt to block crawlers with the IP stats. Put the delay times as low as you are comfortable with. Train a big Markov corpus and leave the Markov module enabled, set the maximum babble size to something big. In short, let them suck down as much bullshit as they have disk space for and choke on it. ZADZMO.org In a world where we can't fight back against LLM crawlers in a sensible and respectful way, tools like these are exactly what we need. After all, the imbalance of power between us normal people and corporations is growing so insanely out of any and all proportions, that we don't have much choice but to attempt to burn it all down with more... Destructive methods. I doubt this will do much to stop LLM crawlers from taking whatever they want without consent - as I've repeatedly said, Silicon Valley does not understand consent - but at least it's joyfully cathartic.
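The "randomly generated, but in a deterministic way" part is the clever bit: seed a generator from the request path, and the same URL always yields the identical page and links, so the maze looks like a set of static files even though nothing is stored on disk. A minimal, hypothetical Python sketch of the idea (not Nepenthes's actual code):

```python
import hashlib
import random

def tarpit_page(path, num_links=12):
    """Deterministically generate a fake page: same path -> same page."""
    # Seed the RNG from the request path, so the page appears to be a
    # stable flat file that "never changes" between visits.
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    # Every link leads to another procedurally generated page.
    links = ["/trap/%08x" % rng.getrandbits(32) for _ in range(num_links)]
    body = "\n".join('<a href="%s">%s</a>' % (l, l) for l in links)
    return "<html><body>%s</body></html>" % body

# The same path always produces the identical page...
assert tarpit_page("/trap/abc") == tarpit_page("/trap/abc")
# ...while each new link leads deeper into the maze.
print(tarpit_page("/trap/abc") != tarpit_page("/trap/def"))
```

Add a per-request sleep and an optional Markov-text body and you have the whole tarpit in miniature: infinite, self-similar, and a dead end by design.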
Speaking of Microsoft shipping bad code, how about an absolutely humongous "Patch Tuesday"? Microsoft today unleashed updates to plug a whopping 161 security vulnerabilities in Windows and related software, including three "zero-day" weaknesses that are already under active attack. Redmond's inaugural Patch Tuesday of 2025 bundles more fixes than the company has shipped in one go since 2017. Brian Krebs Happy new year, Windows users.
A change to the Linux 6.13 kernel contributed by a Microsoft engineer ended up changing Linux x86_64 code without proper authorization, in turn causing troubles for users, and is now set to be disabled ahead of the Linux 6.13 stable release expected next Sunday. Michael Larabel What I like about this story is that it seems to underline that the processes, checks, and balances in place in Linux kernel development seem to be working - at least, this time. A breaking change was caught during the prerelease phase, and a fix has been merged to make sure this issue will be fixed before the stable version of Linux 6.13 is released to the wider public. This all sounds great, but there is an element of this story that raises some serious questions. The change itself was related to EXECMEM_ROX, and was intended to improve performance on 64-bit AMD and Intel processors, but in turn, this new code broke Control Flow Integrity on some setups, causing some devices not to wake from hibernation while also breaking other features. What makes this spicy is that the code was merged without acknowledgement from any of the x86 kernel maintainers, which made a lot of people very unhappy - and understandably so. So while the processes and checks and balances worked here, something still went horribly wrong, as such changes should not be able to be merged without acknowledgement from maintainers. This now makes me wonder how many more times this has happened without causing any instantly discoverable issues. For now, some code has been added to revert the offending changes, and Linux 6.13 will ship with Microsoft's bad code disabled.
They tried to keep it from prying eyes, but several people did notice it: Google made a pretty significant policy change regarding the use of fingerprinting by advertisers. While Google did not allow advertisers to use digital fingerprinting, the company has now changed its mind on this one. Google really tried to hide this change. The main support article talking about the reasoning behind the change is intentionally obtuse and nebulous, and doesn't even link to the actual policy changes being implemented - which are found in a separate document. Google doesn't highlight its changes there, so you have to compare the two versions of the policy yourself. Google claims this change has to be implemented because of "advances in privacy-enhancing technologies (PETs) such as on-device processing, trusted execution environments, and secure multi-party computation" and "the rise of new ad-supported devices and platforms". What I think this word salad means is that users are regaining a modicum of privacy with some specific privacy-preserving features in certain operating systems and on certain devices, and that the use of dedicated, siloed streaming services is increasing, which is harder for Google and advertisers to track. In other words, Google is relaxing its rules on fingerprinting because we're all getting more conscious about privacy. In any event, the advice remains the same: use ad-blockers, preferably at your network level. Install adblocking software and extensions, set up a Pi-Hole, or turn on any adblocking features in your router (my Ubiquiti router has it built-in, and it works like a charm). Remember: your device, your rules. If you don't want to see ads, you don't have to.
In recent months, we've talked twice about FM Towns, Fujitsu's PC platform aimed solely at the Japanese market. It was almost exclusively available in Japanese, so it's difficult to explore for those of us who don't speak Japanese. There's an effort underway to recreate it as open source, which will most likely take a while, but in the meantime, a part of the FM TOWNS Technical Databook, written by Noriaki Chiba, has been translated from Japanese into English by Joe Groff. From the book's preface: That is why the author wrote this book, to serve as an essential manual for enthusiasts, systems developers, and software developers. Typical computer manuals do not adequately cover technical specifications, so users tend to have a hard time understanding the hardware. We have an opportunity to thoroughly break through this barrier, and with this new hardware architecture being a milestone in the FM series, it feels like the perfect time to try. Hardware manuals up to this point have typically only explained the consequences of the hardware design without explaining its fundamentals. By contrast, this book describes the hardware design of the TOWNS from its foundations. Since even expert systems developers can feel like amateurs when working with devices outside of their repertoire, this book focuses on explaining those foundations. This is especially necessary for the FM TOWNS, since it features so many new devices, including an 80386 CPU and a CD-ROM drive. Noriaki Chiba This handbook goes into great detail about the inner workings of the hardware, and chapter II, which hasn't been translated yet, also dives deep into the BIOS of the hardware, from its first revisions to all the additional features added on top as time progressed. This book, as well as its translation, will be invaluable to people trying to use Towns OS today, to developers working on emulators for the platform, and anyone who fits somewhere in between.
It seems this translation was done entirely in Groff's free time as a side project, which is commendable. We're looking at about 65,000 words in the target language, of a highly technical nature, all translated for free because someone decided it was worth it. Sending this over to a translation agency would most likely cost well over 10,000. Of course, that would include additional editing and proofreading by parties other than the initial translator(s), but that's definitely not needed for a passion project like this. Excellent, valuable work.
When someone says you're biased against them because you object to their stated goal of removing you from society, they're not actually asking for fairness; they're demanding complicity. It's the political equivalent of asking why the gazelle seems to have such a negative view of lions. Think about the underlying logic here: I'm biased because I don't give equal weight to both sides of a debate about my fundamental rights. I'm biased because I notice patterns in political movements that explicitly state their intentions regarding people like me. I'm biased because I take them at their word when they tell me what they plan to do. Joan Westenberg OSNews and I will always stand for the right of transgender people to exist, and to enjoy the exact same rights and privileges as any other member of society. This is non-negotiable.
Enlightenment 0.27.0 has been released, and we've got some highly informative release notes. This is the latest release of Enlightenment. This has a lot of fixes mostly with some new features. Carsten Haitzler That's it. That's the release notes. Digging into the commit history between 0.26.0 and 0.27.0 gives some more information, and here we can see that a lot of work has been done on the CPU frequency applet (including hopefully making it work properly on FreeBSD), a lot of updated translations, some RandR work, and a ton of other small changes. Do any of you use Enlightenment on a daily basis? I'm actually intrigued by this release, as it's the first one in two years, and aside from historical usage decades ago - like many of us, I assume - I haven't really spent any time with the current incarnation.
Two years ago, Twitch streamer albrot discovered a bug in the code for crossing rivers. One of the options is to "wait to see if conditions improve"; waiting a day will consume food but not recalculate any health conditions, granting your party immortality. From this conceit the Oregon Trail Time Machine was born: a multi-day livestream of the game as the party waits for conditions to improve at the final Snake River crossing until the year 10000, to see if the withered travellers can make it to the ruins of ancient Oregon. The first attempt ended in tragedy; no matter what albrot tried, the party would succumb to disease and die almost immediately. A couple of days before New Year's Eve 2025, albrot reached out and asked if I knew anything about Apple II hacking. Scott Percival It may have required some reverse engineering and hackery, but yes, you can reach the ruins of Oregon in the year 16120.
Mastodon, the only remaining social network that isn't a fascist hellhole like Twitter or Facebook, is changing its legal and operational foundation to a proper European non-profit. Simply, we are going to transfer ownership of key Mastodon ecosystem and platform components (including name and copyrights, among other assets) to a new non-profit organization, affirming the intent that Mastodon should not be owned or controlled by a single individual. It also means a different role for Eugen, Mastodon's current CEO. Handing off the overall Mastodon management will free him up to focus on product strategy where his original passion lies and he gains the most satisfaction. Official Mastodon blog Eugen Rochko has always been clear and steadfast about Mastodon not being for sale and not accepting any outside investments despite countless offers, and after eight years of both creating and running Mastodon, it makes perfect sense to move the network and its assets to a proper European non-profit. Mastodon's control over the entire federated ActivityPub network - the Fediverse - is actually limited, so it's not like the network is dependent on Mastodon, but there's no denying it's the most well-known part of the Fediverse. The Fediverse is the only social network on which OSNews is actively present (and myself, too, for that matter). By "actively present" I only mean I'm keeping an eye on any possible replies; the feed itself consists exclusively of links to our stories as soon as they're published, and that's it. Everything else you might encounter on social media is either legacy cruft we haven't deleted yet, or something a third party set up that we don't control. RSS means it's easy for people to set up third-party, unaffiliated accounts on any social medium posting links to our stories, and that's entirely fine, of course. 
However, corporate social media controlled by the irrational whims of delusional billionaires with totalitarian tendencies is not something we want to be a part of, so aside from visiting OSNews.com and using our RSS feeds, the only other official way to follow OSNews is on Mastodon.
It's hard to see how to move forward from here. I think the best bet would be for people to rally around a new community-driven infrastructure. This would likely require a fork of WordPress, though, and that's going to be messy. The current open source version of WordPress relies on the sites and services Mullenweg controls. Joost de Valk, the original creator of an extremely popular SEO plugin, wrote a blog post with some thoughts on the matter. I'm hoping that more prominent people in the community step up like this, and that some way forward can be found. Update: Moments after posting this, I was pointed to a story on TechCrunch about Mullenweg deactivating the WordPress.org accounts of users "planning a fork". This after he previously promoted (though in a slightly mocking way) the idea of forking open source software. In both cases, the people he mentioned weren't actually planning forks, but musing about future ways forward for WordPress. Mullenweg framed the account deactivations as giving people the push they need to get started. Remember that WordPress.org accounts are required to submit themes, plugins, or core code to the WordPress project. These recent events really make it seem like you're no longer welcome to contribute to WordPress if you question Matt Mullenweg. Gavin Anderegg I haven't wasted a single word on the ongoing WordPress drama yet, but the longer Matt Mullenweg, Automattic's CEO and thus owner of WordPress, keeps losing his mind, the less I can ignore the matter. OSNews runs, after all, on WordPress - self-hosted, at least, so not on Mullenweg's WordPress.com - and if things keep going the way they are, I simply don't know if WordPress remains a viable, safe, and future-proof CMS for OSNews. 
I haven't discussed this particular situation with OSNews owner David Adams yet, mostly since he's quite hands-off in the day-to-day operations and has more than enough other matters to take care of, but I think the time has come to start planning for a potential worst-case scenario in which Mullenweg takes even more of whatever he's taking and WordPress implodes entirely. Remember - even if you self-host WordPress outside of Automattic, several core infrastructure parts of WordPress still run through Automattic, so we're still dependent on what Mullenweg does or doesn't do. I have no answers, announcements, or even plans at this point, but if you or your company depend on WordPress, you might want to start thinking about where to go from here.
One of the innovations that the V7 Bourne shell introduced was built-in shell wildcard globbing, which is to say expanding things like *, ?, and so on. Of course Unix had shell wildcards well before V7, but in V6 and earlier, the shell didn't implement globbing itself; instead this was delegated to an external program, /etc/glob (this affects things like looking into the history of Unix shell wildcards, because you have to know to look at the glob source, not the shell). Chris Siebenmann I never knew expanding wildcards in UNIX shells was once done by a separate program, but if you stop and think about the original UNIX philosophy, it kind of makes sense. On a slightly related note, I'm currently very deep into setting up, playing with, and actively using HP-UX 11i v1 on the HP c8000 I was able to buy thanks to countless donations from you all, OSNews readers, and one of the things I want to get working is email in dtmail, the CDE email program. However, dtmail is old, and wants you to do email the UNIX way: instead of dtmail retrieving and sending email itself, it expects other programs to do those tasks for you. In other words, to set up and use dtmail (instead of relying on a 2010 port of Thunderbird), I'll have to learn how to set up things like sendmail, fetchmail, or alternatives to those tools. Those programs will in turn dump the emails in the maildir format for dtmail to work with. Configuring these tools could very well be above my paygrade, but I'll do my best to try and get it working - I think it's more authentic to use something like dtmail than a random Thunderbird port. In any event, this, too, feels very UNIX-y, much like delegating wildcard expansion to a separate program. What this also shows is that the "UNIX philosophy" was subject to erosion from the very beginning, and really isn't a modern phenomenon like many people seem to imply. 
I doubt many of the people complaining about the demise of the UNIX philosophy today even knew wildcard expansion used to be done by a separate program.
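The division of labour Siebenmann describes is easy to sketch. Below is a purely illustrative Python toy (not the historical C code): a pretend shell that, like the V6 shell, does no expansion itself and hands any command containing a glob character to a separate helper playing the role of /etc/glob. The FILES list stands in for a real directory listing, and a real /etc/glob would exec() the expanded command rather than return it.

```python
import fnmatch

FILES = ["main.c", "util.c", "util.h", "README"]  # stand-in for a directory listing

def glob_helper(argv):
    """Plays the role of /etc/glob: expand patterns, then run the command."""
    expanded = []
    for arg in argv:
        matches = sorted(f for f in FILES if fnmatch.fnmatch(f, arg))
        # The historical /etc/glob complained "No match" if nothing matched at all;
        # here we simply pass the pattern through untouched.
        expanded.extend(matches if matches else [arg])
    return expanded  # a real /etc/glob would now exec(expanded[0], expanded)

def tiny_shell(argv):
    """Plays the role of the V6 shell: delegate only if a glob character appears."""
    if any(ch in arg for arg in argv for ch in "*?["):
        return glob_helper(argv)
    return argv

print(tiny_shell(["cc", "*.c"]))  # → ['cc', 'main.c', 'util.c']
```

The shell itself stays tiny - pattern matching, sorting, and error reporting all live in the helper, which is exactly the kind of separation the "one tool, one job" reading of the UNIX philosophy implies.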
Many moons ago, around the time when Andreas formally resigned from being Serenity's BDFL, I decided that I want to get involved in the project more seriously. Looking at it from a perspective of "what do I not like about this (codebase)", the first thing that came to mind was that it runs HERE points at QEMU and not THERE points at real hardware. Obvious oversight, let's fix it. sdomi There's no way for me to summarise this cursed saga, so just follow the lovely link and read it. It's a meandering story of complexity, but eventually, a corrupted graphical session appeared. Now the real work starts.
Don't you just love it when companies get together under the thin guise of open source to promote their own interests? Today Google is pleased to announce our partnership with The Linux Foundation and the launch of the Supporters of Chromium-based Browsers. The goal of this initiative is to foster a sustainable environment of open-source contributions towards the health of the Chromium ecosystem and financially support a community of developers who want to contribute to the project, encouraging widespread support and continued technological progress for Chromium embedders. The Supporters of Chromium-based Browsers fund will be managed by the Linux Foundation, following their long established practices for open governance, prioritizing transparency, inclusivity, and community-driven development. We're thrilled to have Meta, Microsoft, and Opera on-board as the initial members to pledge their support. Shruthi Sreekanta on the Chromium blog First, there's absolutely no way around the fact that this entire effort is designed to counter some of the antitrust actions against Google, including a possible forced divestment of Chrome. By setting up an additional fund atop the Chromium organisation, placed under the management of the Linux Foundation, Google creates the veneer of more independence for Chromium than there really is. In reality, however, Chromium is very much a Google-led project, with 94% of code contributions coming from Google, and with the Linux Foundation being very much a corporate affair, of which Google itself is a member, one has to wonder just how much it means that the Linux Foundation is managing this new fund. Second, the initial members of this fund don't exactly instill confidence in the fund's morals and values. We've got Google, the largest online advertising company in the world. 
Then there's Facebook, another major online advertising company, followed by Microsoft, which, among other business ventures, is also a major online advertising company. Lastly we have Opera, an NFT and crypto scammer making money through predatory loans in poor countries. It's a veritable who's who of some of the companies you least want near anything related to your browsing experience. I highly doubt a transparent effort like this is going to convince any judge or antitrust regulator to back down. It's clear this fund is entirely self-serving and designed almost exclusively for optics, with an obvious bias towards online advertising companies who want to make the internet worse than towards companies and people trying to make the internet better.
VLC media player, the popular open-source software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle system. The new feature automatically generates real-time subtitles - which can then also be translated in many languages - for any video using open-source AI models that run locally on users' devices, eliminating the need for internet connectivity or cloud services, VideoLAN demoed at CES. Manish Singh at TechCrunch VLC is choosing to throw users who rely on subtitles for accessibility or translation reasons under the bus. Using speech-to-text and even "AI" as a starting point for a proper accessibility expert or translator is fine, and can greatly reduce the workload. However, as anyone who works with STT and "AI" translation software knows, their output is highly variable and wildly unreliable, especially once English isn't involved. Dumping the raw output of these tools onto people who rely on closed captions and subtitles to even be able to view videos is not only lazy, it's deeply irresponsible and demonstrates a complete lack of respect and understanding. I was a translator for almost 15 years, with two university degrees on the subject to show for it. This is obviously a subject close to my heart, and the complete and utter lack of respect and understanding from Silicon Valley and the wider technology world for proper localisation and translation has been a thorn in my side for decades. We all know about bad translations, but it goes much deeper than that - with Silicon Valley's utter disregard for multilingual people drawing most of my ire. Despite about 60 million people in the US alone using both English and Spanish daily, software still almost universally assumes you speak only one language at all times, often forcing fresh installs for something as simple as changing a single application's language, or not even allowing autocorrect on a touch keyboard to work with multiple languages simultaneously. 
I can't even imagine how bad things are for people who, for instance, require closed captions for accessibility reasons. Imagine just how bad the "AI"-translated Croatian closed captions on an Italian video are going to be - that's two levels of "AI" brainrot between the source and the ears of the Croatian user. It seems subtitles and closed captions are going to be the next area where technology companies are going to slash costs, without realising - or, more likely, without giving a shit - that this will hurt users who require accessibility or translations more than anything. Seeing even an open source project like VLC jump onto this bandwagon is disheartening, but not entirely unexpected - the hype bubble is inescapable, and a lot more respected projects are going to throw their users under the bus before this bubble pops. ...wait a second. Why is VLC at CES in the first place?
On Monday at CES 2025, Nvidia unveiled a desktop computer called Project DIGITS. The machine uses Nvidia's latest "Blackwell" AI chip and will cost $3,000. It contains a new central processor, or CPU, which Nvidia and MediaTek worked to create. Responding to an analyst's question during an investor presentation, Huang said Nvidia tapped MediaTek to co-design an energy-efficient CPU that could be sold more widely. "Now they could provide that to us, and they could keep that for themselves and serve the market. And so it was a great win-win," Huang said. Previously, Reuters reported that Nvidia was working on a CPU for personal computers to challenge the consumer and business computer market dominance of Intel, Advanced Micro Devices and Qualcomm. Stephen Nellis at Reuters I've long wondered why NVIDIA wasn't entering the general purpose processor market in a more substantial way than it did a few years ago with the Tegra, especially now that ARM has cemented itself as an architecture choice for more than just mobile devices. Much like Intel, AMD, and now Qualcomm, NVIDIA could easily deliver the whole package to laptop, tablet, and desktop makers: processor, chipset, GPU, of course glued together with special NVIDIA magic that the other companies opting to use NVIDIA GPUs won't get. There's a lot of money to be made there, and it's the move that could help NVIDIA survive the inevitable crash of the "AI" wave it's currently riding, which has pushed the company to become one of the most valuable companies in the world. I'm also sure OEMs would love nothing more than to have more than just Qualcomm to choose from for ARM laptops and desktops, if only to aid in bringing costs down through competition, and to potentially offer ARM devices with the same kind of powerful GPUs currently mostly reserved for x86 machines. I'm personally always for more competition, but this time with the asterisk that NVIDIA really doesn't need to get any bigger than it already is. 
The company has a long history of screwing over consumers, and I doubt that would change if they also conquered a chunky slice of the general purpose processor market.
So we all know about twisted-pair ethernet, huh? I get a little frustrated with a lot of histories of the topic, like the recent neil breen^w^wserial port video, because they often fail to address some obvious questions about the origin of twisted-pair network cabling. Well, I will fail to answer these as well, because the reality is that these answers have proven very difficult to track down. J. B. Crawford The problem with nailing down an accurate history of the development of the various standards, ideas, concepts, and implementations of Ethernet and other, by now dead, network standards is their age, as well as the fact that their history is entangled with the even longer history of telephone wiring. The reasoning behind some of the choices made by engineers over more than 100 years of telephone technology isn't always clear, and is very difficult to retrace. Crawford dives into some seriously old and fun history here, trying to piece together the origins of twisted pair the best he can. It's a great read, as all of his writings are.
Hey there! In this book, we're going to build a small operating system from scratch, step by step. You might get intimidated when you hear "OS" or "kernel development", but the basic functions of an OS (especially the kernel) are surprisingly simple. Even Linux, which is often cited as a huge piece of open-source software, was only 8,413 lines in version 0.01. Today's Linux kernel is overwhelmingly large, but it started with a tiny codebase, just like your hobby project. We'll implement basic context switching, paging, user mode, a command-line shell, a disk device driver, and file read/write operations in C. Sounds like a lot, however, it's only 1,000 lines of code! Seiya Nuta It's exactly what it says on the tin.
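The "surprisingly simple" claim is easiest to see with context switching: save one task's state, pick the next runnable task, resume it. As a purely conceptual toy - Python generators standing in for the book's C tasks and saved register state, cooperative rather than preemptive, and in no way the book's actual code:

```python
from collections import deque

def task(name, steps):
    # A "process": each yield is the moment it gives up the CPU,
    # with its state (the generator frame) saved automatically.
    for i in range(steps):
        yield f"{name}:{i}"

def scheduler(tasks):
    """Round-robin: run each task until it yields, then requeue it."""
    ready = deque(tasks)
    trace = []
    while ready:
        current = ready.popleft()        # pick the next runnable task
        try:
            trace.append(next(current))  # "restore state and run"
            ready.append(current)        # back of the run queue
        except StopIteration:
            pass                         # the task exited
    return trace

print(scheduler([task("A", 2), task("B", 3)]))
# → ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

A real kernel does the same dance with a run queue and saved stack pointers instead of generator frames - the loop itself really is this small.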
We've all had a good seven years to figure out why our interconnected devices refused to work properly with the HDMI 2.1 specification. The HDMI Forum announced at CES today that it's time to start considering new headaches. HDMI 2.2 will require new cables for full compatibility, but it has the same physical connectors. Tiny QR codes are suggested to help with that, however. The new specification is named HDMI 2.2, but compatible cables will carry an "Ultra96" marker to indicate that they can carry 96Gbps, double the 48Gbps of HDMI 2.1b. The Forum anticipates this will result in higher resolutions and refresh rates and a "next-gen HDMI Fixed Rate Link." The Forum cited "AR/VR/MR, spatial reality, and light field displays" as benefiting from increased bandwidth, along with medical imaging and machine vision. Kevin Purdy at Ars Technica I'm sure this will not pose any problems whatsoever, and that no shady no-name manufacturers will abuse this situation at all. DisplayPort is the better standard and connector anyway. No, I will not be taking questions.
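Some back-of-the-envelope arithmetic gives a feel for what doubling the link rate to 96Gbps buys. The sketch below counts raw, uncompressed pixel data only and ignores blanking intervals, FRL encoding overhead, and DSC compression, so real figures differ; the resolutions are my own examples, not the Forum's.

```python
def raw_video_gbps(width, height, hz, bits_per_channel=10, channels=3):
    """Uncompressed pixel-data rate in Gbit/s (blanking and link overhead ignored)."""
    return width * height * hz * bits_per_channel * channels / 1e9

for label, (w, h, hz) in {
    "4K @ 240 Hz": (3840, 2160, 240),
    "8K @ 60 Hz": (7680, 4320, 60),
    "8K @ 120 Hz": (7680, 4320, 120),
}.items():
    gbps = raw_video_gbps(w, h, hz)
    print(f"{label}: ~{gbps:.1f} Gbit/s - fits in 48G: {gbps <= 48}, in 96G: {gbps <= 96}")
```

By this rough measure, 10-bit 4K at 240Hz (~60 Gbit/s) overshoots HDMI 2.1b but fits comfortably in the new link, while 8K at 120Hz (~119 Gbit/s) still doesn't - which is where compression and chroma subsampling come in.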
NESFab is a new programming language for creating NES games. Designed with 8-bit limitations in mind, the language is more ergonomic to use than C, while also producing faster assembly code. It's easy to get started with, and has a useful set of libraries for making your first - or hundredth - NES game. NESFab website NESFab has some smart features developers of NES games will certainly appreciate, most notably automatic bank switching. Instead of making you do this manually, NESFab will automatically carve your code and data up into banks to be switched in and out of memory when needed. There's also an optional map editor, which makes it very easy to create additional levels for your game. All in all, a very cool project I hadn't heard of, which also claims to perform better than other compilers. If you've ever wanted to make an NES game, NESFab might be a tool to consider.
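For readers unfamiliar with why bank switching is such a chore, the toy Python model below shows the mechanism NESFab automates: the CPU only sees a small, fixed address window, and a write to a mapper register chooses which slice of the much larger ROM appears there. The class and numbers are illustrative (a 16KB window at $8000, 128KB of PRG ROM), not NESFab syntax or any specific real mapper.

```python
BANK_SIZE = 16 * 1024  # 16KB switchable window

class MapperSim:
    """Toy mapper: 128KB of PRG ROM, one switchable 16KB window at $8000."""
    def __init__(self, rom):
        self.rom = rom
        self.bank = 0

    def select_bank(self, n):
        # What a write to the mapper's bank-select register does.
        self.bank = n % (len(self.rom) // BANK_SIZE)

    def read(self, addr):
        # A CPU read in the $8000-$BFFF window goes to the selected bank.
        return self.rom[self.bank * BANK_SIZE + (addr - 0x8000)]

# Bank n is filled with the byte value n, so reads are easy to check.
rom = b"".join(bytes([n]) * BANK_SIZE for n in range(8))
mapper = MapperSim(rom)
mapper.select_bank(3)
print(mapper.read(0x8000))  # → 3: same address, different bank, different code/data
```

The pain NESFab removes is deciding which routines and data live in which bank, and emitting the bank-select writes whenever control flow crosses a bank boundary - exactly the bookkeeping you'd otherwise do by hand.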
An OPO (compiled OPL) interpreter written in Lua and Swift, based on the Psion Series 5 era format (ie ER5, prior to the Quartz 6.x changes). It lets you run Psion 5 programs written in OPL on any iOS device, subject to the limitations described below. OpoLua GitHub page If you're pining for that Psion Series 5, but don't want to deal with the hassle of owning and maintaining a real one - here's a solution if you're an iOS user. Incredibly neat, but with one limitation: only pure OPL programs work. Any program that also has native ARM code will not work.
Dell has announced it's rebranding literally its entire product line, so mainstays like XPS, Latitude, and Inspiron are going away. They're replacing all of these old brands with Dell, Dell Pro, and Dell Pro Max, and within each of these, there will be three tiers: Base, Plus, and Premium. Of course, the reason is "AI". The AI PC market is quickly evolving. Silicon innovation is at its strongest and everyone from IT decision makers to professionals and everyday users are looking at on-device AI to help drive productivity and creativity. To make finding the right AI PC easy for customers, we've introduced three simple product categories to focus on core customer needs - Dell (designed for play, school and work), Dell Pro (designed for professional-grade productivity) and Dell Pro Max (designed for maximum performance). We've also made it easy to distinguish products within each of the new product categories. We have a consistent approach to tiering that lets customers pinpoint the exact device for their specific needs. Above and beyond the starting point (Base), there's a Plus tier that offers the most scalable performance and a Premium tier that delivers the ultimate in mobility and design. Kevin Terwilliger on Dell's blog Setting aside the nonsensical reasoning behind the rebrand, I do actually kind of dig the simplicity here. This is a simple, straightforward set of brand names and tiers that pretty much anyone can understand. That being said, the issue with Dell in particular is that once you go to their website to actually buy one of their machines, the clarity abruptly ends and it gets confusing fast. I hope these new brand names and tiers will untangle some of that mess to make it easier to find what you need, but I'm skeptical. My XPS 13 from 2017 is really starting to show its age, and considering how happy I've been with it over the years, its current Dell equivalent would be a top contender (assuming I had the finances to do so). 
I wonder if the Linux support on current Dell laptops has improved since my XPS 13 was new?
Over 60% of Windows users are still using Windows 10, with only about 35% or so - and falling! - of them opting to use Windows 11. As we've talked about many times before, this is a major issue going into 2025, since Windows 10's support will end in October of this year, meaning hundreds of millions of people all over the world will suddenly be running an operating system that will no longer receive security updates. Most of those people don't want to, or cannot, upgrade to Windows 11, meaning Microsoft is leaving 60% of its Windows customer base out to dry. I'm sure this will go down just fine with regulators and governments the world over. Microsoft has tried everything, and it's clear desperation is setting in, because the company just declared 2025 "the year of the Windows 11 PC refresh", stating that Windows 11 is the best way to get all the "AI" stuff people are clearly clamoring for. All of the innovation arriving on new Windows 11 PCs is coming at an important time. We recently confirmed that after providing 10 years of updates and support, Windows 10 will reach the end of its lifecycle on Oct. 14, 2025. After this date, Windows 10 PCs will no longer receive security or feature updates, and our focus is on helping customers stay protected by moving to modern new PCs running Windows 11. Whether the current PC needs a refresh, or it has security vulnerabilities that require the latest hardware-backed protection, now is the time to move forward with a new Windows 11 PC. Some overpaid executive at Microsoft What makes this so incredibly aggravating and deeply tone-deaf is that for most of the people affected by this, "upgrading" to Windows 11 simply isn't a realistic option. Their current PC is most likely performing and working just fine, but the steep and strict hardware requirements prohibit them from installing Windows 11. Buying an entirely new PC is often not only not needed from a performance perspective, but for many, many people also simply unaffordable. 
In case you haven't noticed, it's not exactly going great, financially, for a lot of people out there, and even in the US alone, 70-80% of people live paycheck-to-paycheck, and they're certainly not going to be able to just "move forward with a new Windows 11 PC" for nebulous and often regressive "benefits" like "AI". The fact that Microsoft seems to think all of those hundreds of millions of people not only want to buy a new PC to get "AI" features, but that they also can afford it like it's no big deal, shows some real lack of connective tissue between the halls of Microsoft's headquarters and the wider world. Microsoft's utter lack of a grasp on the financial realities of so many individuals and families today is shocking, at best, and downright offensive, at worst. I guess if you live in a world where you can casually bribe a president-elect for one million dollars, buying a new computer feels like buying a bag of potatoes.
The more than two decades since Half-Life 2's release have been filled with plenty of rumors and hints about Half-Life 3, ranging from the official-ish to the thin to the downright misleading. As we head into 2025, though, we're approaching something close to a critical mass of rumors and leaks suggesting that Half-Life 3 is really in the works this time, and could be officially announced in the coming months. Kyle Orland at Ars Technica We should all be skeptical of anything related to Half-Life 3, but there's no denying something's buzzing. The one reason why I personally think a Half-Life 3 might be happening is the imminent launch of SteamOS for generic PCs, possibly accompanied by prebuilt SteamOS PCs and consoles and third-party Steam Decks. It makes perfect sense for Valve to have such a launch accompanied by the release of Half-Life 3, similar to how Half-Life 2 was accompanied by the launch of Steam. We'll have to wait and see. It will be hard to fulfill all the crazy expectations, though.
I'd like to write a full-fledged blog post about these adventures at some point, but for now I'm going to focus on one particular side quest: getting acceptable video output out of the 1000H when it's running Windows 3.11 for Workgroups. By default, Windows 3.x renders using the standard "lowest common denominator" of video: VGA 640*480 at 16 colours. Unfortunately this looks awful on the Eee PC's beautiful 1024*600 screen, and it's not even the same aspect ratio. But how can we do better? Ash Wolf If you ever wanted to know how display drivers work in Windows 3.x, here's your chance. This definitely falls into the category of light reading for the weekend.
James Thomson, developer of, originally, DragThing and now PCalc, also happens to be the developer of the very first publicly shown version of the Mac OS dock. Now that it was shown to the world by Steve Jobs exactly 25 years ago, he reminisces about what it was like to create such an iconic piece of software history. The new Finder (codename "Millennium") was at this point being written on Mac OS 9, because Mac OS X wasn't exactly firing on all cylinders quite yet. The filesystem wasn't working well, which is not super helpful when you are trying to write a user interface on top of it. The Dock was part of the Finder then, and could lean on all the high level C++ interfaces for dealing with disks and files that the rest of the team was working on. So, I started on Mac OS 9, working away in Metrowerks Codewarrior. The Finder was a Carbon app, so we could actually make quite a bit of early progress on 9, before the OS was ready for us. I vividly remember the first time we got the code running on Mac OS X. James Thomson I especially like the story about how Steve Jobs really demanded Thomson live in Cupertino in order to work on the dock, instead of remaining remote in Ireland. Thomson and his wife decided not to move to the United States, so he figured he'd lose his assignment, or maybe even his job altogether. Instead, his managers told him something along the lines of "don't worry, we'll just tell Steve you moved". What followed were a lot of back-and-forth flights between Ireland and California, and Thomson's colleagues telling Steve all sorts of lies and cover stories for whenever he was in Ireland and Steve noticed. Absolutely wild. The dock is one of those things from my years using Mac OS X - between roughly 2003 and 2009 or so - that has stuck around with me ever since. To this day, I have a dock at the bottom of my screen that looks and works eerily similar to the Mac OS X dock, and I doubt that's going to change any time soon. 
It suits my way of using my computer incredibly well, and it's the first thing I set up on any new installation I perform (I use Fedora KDE).
Apple's first-generation Vision Pro headset may have now ceased production, following reports of reduced demand and production cuts earlier in the year. Hartley Charlton at MacRumors I think we'll live.
The RTX 5090 and RTX 5080 are receiving their final updates. According to two highly reliable leakers, the RTX 5090 is officially a 575W TDP model, confirming that the new SKU requires significantly more power than its predecessor, the RTX 4090 with TDP of 450W. According to Kopite, there has also been an update to the RTX 5080 specifications. While the card was long rumored to have a 400W TDP, the final figure is now set at 360W. This change is likely because NVIDIA has confirmed the TDP, as opposed to earlier TGP figures that are higher and represent the maximum power limit required by NVIDIA's specifications for board partners. WhyCry at VideoCardz.com These kinds of batshit insane GPU power requirements are eventually going to run into the limits of the kind of airflow an ATX case can provide. We're still putting the airflow stream of GPUs (bottom to top) perpendicular to the airflow through the case (front to back) like it's 1987, and you'd think at least someone would be thinking about addressing this - especially when a GPU is casually dumping this much heat into the constrained space within a computer case. I don't want more glass and gamer lights. I want case makers to hire at least one proper fluid dynamics engineer.
It is common knowledge that Final Fantasy could have been the last game in the series. It is far less known that Windows 2, released around the same time, could too have been the last. If anything, things were more certain: even Microsoft believed that Windows 2 would be the last. The miracle of overwhelming commercial success brought incredible attention to Windows. The retro community and computer historians generally seem to be interested in the legendary origins of the system (how it all began) or in its turnabout Windows 3.0 release (what did they do right?). This story instead will be about the underdog of Windows, version 2. To understand where it all went wrong, we must start looking at events that happened even before Microsoft was founded. By necessity, I will talk a lot about the origins of Windows, too. Instead of following interpersonal/corporate drama, I will try to focus on the technical aspects of Windows and its competitors, as well as the technological limitations of the computers around the time. Some details are so convoluted and obscure that even multiple Microsoft sources, including Raymond Chen, are wrong about essential technical details. It is going to be quite a journey, and it might seem a bit random, but I promise that eventually, it all will start making sense. Nina Kalinina I'm not going to waste your precious time with my stupid babbling when you could instead spend it reading this amazingly detailed, lovingly crafted, beautifully illustrated, and deeply in-depth article by Nina Kalinina about the history, development, and importance of Windows 2. She's delivered something special here, and it's a joy to read and stare at the screenshots from beginning to end. Don't forget to click on the little expander triangles for a ton of in-depth technical stuff and even more background information.
We've just entered the new year, and that means we're going to see some overviews about what the past year has brought. Today we're looking at AROS, as AROS News - great name, very classy, you've got good taste, don't change it - summarised AROS' 2024, and it's been a good year for the project. We don't hear a lot about AROS-proper, as the various AROS distributions are a more optimal way of getting to know the operating system and the project's communication hasn't always been great, but that doesn't mean they've been sitting still. Perhaps the most surprising amount of progress in 2024 was made in the move from 32bit to 64bit AROS. Deadwood also released a 64-bit version of the system (ABIv11) in a Linux hosted version (ABIv11 20241102-1) and AxRuntime version 41.12, which promises a complete switch to 64-bit in the near future. He has also developed a prototype emulator that will enable 64-bit AROS to run programs written for the 32-bit version of the system. Andrzej "retrofaza" Subocz at AROS News This is great news for AROS, as being stuck in 32bit isn't particularly future-proof. It might not pose many problems today, as older hardware remains available and 64bit x86 processors can handle running 32bit operating systems just fine, but you never know when that will change. In the same vein, Deadwood also released a 64bit version of Odyssey, the WebKit-based browser, which was updated this year from August 2015's WebKit to February 2019's WebKit. Sure, 2019 might still be a little outdated, but it does mean a ton of complex sites now work again on AROS, and that's a hugely positive development. Things like Python and GCC were also updated this year, and there was, as is fitting for an Amiga-inspired operating system, a lot of activity in the gaming world, including big updates to Doom 3 and ScummVM. This is just a selection of course, so be sure to read Subocz's entire summary at AROS News.
Do you think streaming platforms and other entities that employ DRM schemes use the TPM in your computer to decrypt stuff? Well, the Free Software Foundation seems to think so, and adds Microsoft's insistence on requiring a TPM for Windows 11 into the mix, but it turns out that's simply not true. I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff. What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU. Matthew Garrett A TPM is simply not designed to handle decryption of media streams, and even if it were, it's far, far too slow and underpowered to decode even a 1080p stream, let alone anything more demanding than that. In reality, DRM schemes like Google's Widevine, Apple's FairPlay, and Microsoft's PlayReady offer different levels of functionality, both in software and in hardware. The hardware DRM stuff is all done by the GPU, and not by the TPM. By focusing so much on the TPM, Garrett argues, the FSF is failing to see how GPU makers have enabled a ton of hardware DRM without anyone noticing. Personally, I totally understand why organisations like the Free Software Foundation are focusing on TPMs right now. 
They're one of the main reasons why people can't upgrade to Windows 11, it's the thing people have heard about, and it's the thing that'll soon prevent them from getting security updates for their otherwise perfectly fine machines. I'm not sure the FSF has enough clout these days to make any meaningful media impact, especially in more general, non-tech media, but by choosing the TPM as their focus they're definitely choosing a viable vector. Of course, over here in the tech corner, we don't like it when people are factually inaccurate or twisting and bending the truth, and I'm glad someone as knowledgeable as Garrett stepped up to set the record straight for us tech-focused people, while everyone else can continue to ignore this matter.
Launched in 1998, the 380Z was one very fine ThinkPad. It was the last ThinkPad to come in the classic bulky and rectangular form factor. It was also one of the first to feature a huge 13.3'' TFT display, a powerful 233MHz Pentium II, and a whopping 160 megs of RAM. I recently stumbled upon one in perfect condition on eBay, and immediately thought it'd be a cool vintage gadget to put on the desk. I only wondered if I could still use it for some slow-paced, distraction-free coding, using reasonably modern software. Luke's web space You know where this is going, right? I evaluated a bunch of contemporary operating systems, including different variants of BSD and Linux. Usually, the experience was underwhelming in terms of performance, hardware support and stability. Well... except for NetBSD, which gave me such a perfectly smooth ride that I thought it was worth sharing. Luke's web space Yeah, of course it was going to be NetBSD (again). This old laptop, too, can run X11 just fine, with the EMWM that we discussed quite recently - in fact, bringing up X required no configuration, and a simple startx was all it needed out of the box. For web browsing, Dillo works just great, and building it took only about 20 minutes. It can even play some lo-fi music streams from the internet, only stuttering when doing other processor-intensive tasks. In other words, this little machine with NetBSD turns out to be a great machine for some distraction-free programming. Look, nobody is arguing that a machine like this is all you need. However, it can perform certain specific, basic tasks - anything being better than sending it to a toxic landfill, with all the transportation waste and child labour that entails. If you have old laptops lying around, you should think twice about just sending them to "recycling" (which is Western-world speak for "send to a toxic landfill manned by children in poor countries"), since it might be quite easy to turn them into something useful still.
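For anyone wanting to replicate this kind of setup, the "no configuration" claim boils down to the standard X11 startup path. A minimal sketch, assuming EMWM is installed (for example from pkgsrc) - the file name and contents below are conventional X11 usage, my assumption rather than something taken from Luke's post:

```shell
# ~/.xinitrc - read by startx; hand the X session over to the EMWM window manager
exec emwm
```

With that file in place, typing startx on the console is all it takes to get a graphical session.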
Rare and hard to come by, but now available on the Internet Archive: the complete book set for the Windows CE Developer's Kit from 1999. It contains all the separate books in their full glory, so if you ever wanted to write either a Windows CE application or a driver for Windows CE 2.0, here's all the information you'll ever need. The Microsoft Windows CE Developer's Kit provides all the information you need to write applications for devices based on the Microsoft Windows CE operating system. Windows CE Developer's Kit The Microsoft Windows CE Programmer's Guide details the architecture of the operating system, how to write applications, how to implement synchronisation with a PC, and much more that pertains to developing applications. The Microsoft Windows CE User Interface Services Guide can be seen as an important addition to the Programmer's Guide, as it details everything related to creating a GUI and how to handle various input methods. Going a few steps deeper, we arrive at the Microsoft Windows CE Communications Guide, which, as the name implies, tells you all you need to know about infrared connections, telephony, networking and internet connections, and related matters. Finally, we arrive at the Microsoft Windows CE Device Driver Kit, which, as the name implies, is for those of us interested in writing device drivers for Windows CE, something that will surely be of great importance in the future, since Windows CE is sure to dominate our mobile life. To get started, you do need to have Microsoft Visual C++ version 6.0 and the Microsoft Windows CE Toolkit for Visual C++ version 6.0 up and running, since all code samples in the Programmer's Guide are developed with it, but I'm sure you already have this taken care of - why would you be developing for any other platform, am I right?
LineageOS, the Debian of the custom Android ROM world, released version 22 - or, 22.1 to be more exact - today. On the eve of the new year, they managed to complete the rebase to Android 15, released in September, making this one of their fastest rebases ever. We've been hard at work since Android 15's release in September, adapting our unique features to this new version of Android. Android 15 introduced several complex changes under the hood, but due to our previous efforts adapting to Google's UI-centric adjustments in Android 12 through 14, we were able to rebase onto Android 15's code-base faster than anticipated. Additionally, this is far-and-away the easiest bringup cycle from a device perspective we have seen in years. This means that many more devices are ready on day one than we'd typically expect to have up this early in the cycle! Nolen Johnson LineageOS is also changing its versioning scheme to better match that of Google's new quarterly Android releases, and that's why this new release is 22.1: it's based on Android 15 QPR1. In other words, the 22 aligns with the major Android version number, and the .1 with the QPR it's based on. LineageOS 22.1 brings all the same new features as Android 15 and QPR1, as well as two brand new applications: Twelve, a replacement for LineageOS' aging music player application, and Camelot, a new PDF reader. The list of supported devices is pretty good for a new LineageOS release, and adds the Pixel 9 series of devices right off the bat. LineageOS 22.1 ships with the November Android security patches, and also comes with a few low-level changes, like completely new extract utilities written in Python that massively improve extraction performance, VirtIO support, and much more.
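Since the new scheme is spelled out - the major version tracks the Android release, the suffix tracks the QPR - the arithmetic can be captured in a toy helper. This is purely illustrative: the function and the fixed offset are my own, derived from 22 mapping to Android 15, and not anything LineageOS ships.

```python
def lineage_to_android(lineage_version: str) -> tuple[int, int]:
    """Map a LineageOS version like '22.1' to (Android major, QPR number)."""
    major, _, qpr = lineage_version.partition(".")
    # LineageOS 22 is based on Android 15, so the offset between the two is 7.
    return int(major) - 7, int(qpr) if qpr else 0

print(lineage_to_android("22.1"))  # (15, 1): Android 15, QPR1
```

Under this scheme a hypothetical 22.2 would simply signal a rebase onto Android 15 QPR2.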
We've talked about Chimera Linux before - it's a unique Linux distribution that combines a BSD userland with the LLVM/Clang toolchain and musl. Its init system is dinit, and it uses apk-tools from Alpine as its package manager. None of this has anything to do with being anti-anything; the choice of BSD's tools and userland is mostly technical in nature. Chimera Linux is available for x86-64, AArch64, RISC-V, and POWER (both little and big endian). I am unreasonably excited for Chimera Linux, for a variety of reasons - first, I love the above set of choices they made, and second, Chimera Linux's founder and lead developer, q66, is a well-known and respected name in this space. She not only founded Chimera Linux, but also used to maintain the POWER/PowerPC ports of Void Linux, which is the port of Void Linux I used on my POWER9 hardware. She apparently also contributed quite a bit to Enlightenment, and is currently employed by Igalia, through which she can work on Chimera. With the description out of the way, here's the news: Chimera Linux has officially entered beta. Today we have updated apk-tools to an rc tag. With this, the project is now entering its beta phase, after around a year and a half. In general, this does not actually mean much, as the project is rolling release and updates will simply keep coming. It is more of an acknowledgement of current status, though new images will be released in the coming days. Chimera Linux's website Despite my excitement, I haven't yet tried Chimera Linux myself, as I figured its pre-beta stage wasn't meant for an idiot like me who can't contribute anything meaningful, and I'd rather not clutter the airwaves. Now that it's entered beta, I feel like the time is getting riper and riper for me to dive in, and perhaps write about it here. Since the goal of Chimera Linux is to be a general-purpose distribution, I think I'm squarely in its target demographic of users. 
It helps that I'm about to set up my dual-processor POWER9 machine again, and I think I'll be going with Chimera Linux. As a final note, you may have noticed I consistently refer to it as "Chimera Linux". This is very much on purpose, as there's also something called ChimeraOS, a more standard Linux distribution aimed at gaming. To avoid confusion, I figured I'd keep the naming clear and consistent.
Here are my notes on running NetBSD 10.1 on my first personal laptop that I still keep, a 1998 i586 Toshiba Satellite Pro with 81MB of RAM and a 1GB IBM 2.5'' IDE HD. In summary, the latest NetBSD runs well on this old hardware using an IDE-to-CF adapter and several changes to the i386 GENERIC kernel. Joel P. I don't think the BSD world - and NetBSD in particular - gets enough recognition for supporting both weird architectures and old hardware as well as it does. This here is a 26-year-old laptop running the latest version of NetBSD, working X11 server and all, while other operating systems drop support for devices only a few years old. So many devices could be saved from toxic landfills if only more people looked beyond Windows and macOS.
IncludeOS is an includable, minimal unikernel operating system for C++ services running in the cloud and on real HW. Starting a program with #include <os> will literally include a tiny operating system into your service during link-time. IncludeOS GitHub page IncludeOS isn't exactly the only one of its kind, but I've always been slightly mystified by what, exactly, unikernels are for. The gist is, as far as I understand it, that if you build an application using a unikernel, it will find out at compile time exactly what it needs from the operating system to run, and then everything it needs from the operating system to run will be linked inside the resulting application. This can then be booted directly by a hypervisor. The advantages are clear: you don't have to deal with an entire operating system just to run that one application or service you need to provide, and footprint is kept to a minimum because only the exact dependencies the application needs from the operating system are linked to it during compilation. The downsides are obvious too - you're not running an operating system so it's far less flexible, and if issues are found in the unikernel you're going to have to recompile the application and the operating system bits inside of it just to fix it (at least, I think that's the case - don't quote me on it). IncludeOS is under heavy development, so take that under advisement if you intend to use it for something serious. The last full release dates back to 2019, but it's still under development as indicated by the GitHub activity. I hope it'll push out a new release soon.
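To make the #include <os> idea concrete, here is a minimal service sketch modelled on IncludeOS' published hello-world examples - treat the exact entry-point signature as an assumption on my part, and note that this only builds with the IncludeOS toolchain, not a stock compiler:

```cpp
// A minimal IncludeOS-style service. Including <os> pulls the unikernel in at
// link time; only the OS functionality the service actually uses ends up in
// the final bootable image.
#include <os>
#include <cstdio>

void Service::start()
{
  printf("Hello from a unikernel!\n");
}
```

The linker output is a bootable image that a hypervisor such as QEMU can start directly, with no general-purpose operating system underneath.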