Mary E. Brunkow, Fred Ramsdell and Shimon Sakaguchi received the Nobel Prize in Physiology or Medicine on Monday for their discoveries about how the immune system regulates itself. The three researchers split 11 million Swedish kronor ($1.17 million). Their work identified regulatory T cells and the FOXP3 gene that controls them. Dr. Sakaguchi spent more than a decade solving a puzzle about the thymus. He discovered that the immune system has a backup mechanism to stop harmful cells from attacking the body's own tissues. Dr. Brunkow and Dr. Ramsdell found the specific gene responsible for this process while studying mice that developed severe autoimmune disease. More than 200 clinical trials are now underway based on their research. Cancers attract regulatory T cells to block immune attacks. Researchers are developing drugs to turn the immune system against these cancer cells. In autoimmune diseases, regulatory T cells are missing or defective. The FOXP3 gene provides a starting point for drugs that teach the immune system to stop attacking itself.
OpenAI and AMD announced a multibillion-dollar partnership on Monday for AI data centers running on AMD processors. OpenAI committed to purchasing 6 gigawatts worth of AMD's MI450 chips starting next year through direct purchases or through its cloud computing partners. AMD chief Lisa Su said the deal will result in tens of billions of dollars in new revenue over the next half-decade. OpenAI will receive warrants for up to 160 million AMD shares at 1 cent per share, representing roughly 10% of the chip company. The warrants will be awarded in phases if OpenAI hits certain deployment milestones. The partnership marks AMD's biggest win in its quest to disrupt Nvidia's dominance among AI semiconductor companies. Mizuho Securities estimates that Nvidia controls more than 70% of the market for AI chips.
Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more."There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like." "Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found "substantial declines in employment for early-career workers'' - ages 22-25 - in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier." And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers."Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."Read more of this story at Slashdot.
Steve Jobs died 14 years ago. But the blog Cult of Mac remembers that "Jobs himself was not sentimental." When he left Apple in the mid-1980s, he didn't even clear out his office. That meant personal mementos like his first Apple stock certificate, which had hung on his office wall, got tossed in the trash. Shortly after returning to Apple in the late 1990s, he gave the company's historical archive to Stanford University Libraries. The stash included records that Apple management kept since the mid-1980s. The reason Apple handed over this historical treasure trove? Jobs didn't want the company to fixate on the past... All of which goes some way to saying why it was so heartening that Steve Jobs' death received so much attention. He wasn't the richest technology CEO to die. But the reaction showed that his life - faults and all - meant a lot to a great number of people. Jobs helped create products people cared about, and in turn they cared about him. The site Mac Rumors remembered Sunday that Jobs "died just one day after Apple unveiled the iPhone 4S and Siri." Six years later, Apple CEO Tim Cook reflected on Jobs while opening Apple's first-ever event at Steve Jobs Theater in 2017. "There is not a day that goes by that we don't think about him." And Sunday Cook posted this remembrance of Steve Jobs: "Steve saw the future as a bright and boundless place, lit the path forward, and inspired us to follow. We miss you, my friend."
The director of a tour operation remembers two tourists arriving in a rural town in Peru determined to hike alone in the mountains to a sacred canyon recommended by their AI chatbot. But the canyon didn't exist - and a high-altitude hike could be dangerous (especially where cellphone coverage is also spotty). They're part of a BBC report on travellers arriving at their destination "only to find they've been fed incorrect information or steered to a place that only exists in the hard-wired imagination of a robot..." "According to a 2024 survey, 37% of those surveyed who used AI to help plan their travels reported that it could not provide enough information, while around 33% said their AI-generated recommendations included false information." Some examples?

- Dana Yao and her husband recently experienced this first-hand. The couple used ChatGPT to plan a romantic hike to the top of Mount Misen on the Japanese island of Itsukushima earlier this year. After exploring the town of Miyajima with no issues, they set off at 15:00 to hike to the mountain's summit in time for sunset, exactly as ChatGPT had instructed them. "That's when the problem showed up," said Yao, a creator who runs a blog about traveling in Japan, "[when] we were ready to descend [the mountain via] the ropeway station. ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed. So, we were stuck at the mountain top..."
- A 2024 BBC article reported that [dedicated travel AI site] Layla briefly told users that there was an Eiffel Tower in Beijing and suggested a marathon route across northern Italy to a British traveller that was entirely unfeasible...
- A recent Fast Company article recounted an incident where a couple made the trek to a scenic cable car in Malaysia that they had seen on TikTok, only to find that no such structure existed. The video they'd watched had been entirely AI generated, either to drum up engagement or for some other strange purpose.

Rayid Ghani, a distinguished professor in machine learning at Carnegie Mellon University, tells them that an AI chatbot "doesn't know the difference between travel advice, directions or recipes. It just knows words. So, it keeps spitting out words that make whatever it's telling you sound realistic..."
If we could remove the 50 most concerning pieces of space debris in low-Earth orbit, there'd be a 50% reduction in the overall debris-generating potential, reports Ars Technica. That's according to Darren McKnight, lead author of a paper presented Friday at the International Astronautical Congress in Sydney, which calculated the objects most likely to collide with other fragments and create more debris. (Russia and the Soviet Union lead with 34 objects, followed by China with 10, the U.S. with three, Europe with two, and Japan with one.) Even if just the top 10 were removed, the debris-generating potential would drop by 30%. "The things left before 2000 are still the majority of the problem," he points out, and "76% of the objects in the top 50 were deposited last century." 88% of the objects are post-mission rocket bodies left behind to hurtle through space. "The bad news is, since January 1, 2024, we've had 26 rocket bodies abandoned in low-Earth orbit that will stay in orbit for more than 25 years," McKnight told Ars... China launched 21 of the 26 hazardous new rocket bodies over the last 21 months, each averaging more than 4 metric tons (8,800 pounds). Two more came from US launchers, one from Russia, one from India, and one from Iran. This trend is likely to continue as China steps up deployment of two megaconstellations - Guowang and Thousand Sails - with thousands of communications satellites in low-Earth orbit. Launches of these constellations began last year. The Guowang and Thousand Sails satellites are relatively small and likely capable of maneuvering out of the way of space debris, although China has not disclosed their exact capabilities. However, most of the rockets used for Guowang and Thousand Sails launches have left their upper stages in orbit. McKnight said nine upper stages China has abandoned after launching Guowang and Thousand Sails satellites will stay in orbit for more than 25 years, violating the international guidelines. It will take hundreds of rockets to fully populate China's two major megaconstellations. The prospect of so much new space debris is worrisome, McKnight said. "In the next few years, if they continue the same trend, they're going to leave well over 100 rocket bodies over the 25-year rule if they continue to deploy these constellations," he said. "So, the trend is not good...." Since 2000, China has accumulated more dead rocket mass in long-lived orbits than the rest of the world combined, according to McKnight. "But now we're at a point where it's actually kind of accelerating in the last two years as these constellations are getting deployed." A deputy head of China's national space agency recently said China is "currently researching" how to remove space debris from orbit, according to the article. ("One of the missions China claims is testing space debris mitigation techniques has docked with multiple spacecraft in orbit, but U.S. officials see it as a military threat. The same basic technologies needed for space debris cleanup - rendezvous and docking systems, robotic arms, and onboard automation - could be used to latch on to an adversary's satellite.")
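McKnight's full methodology isn't spelled out in the article, but the ranking idea is easy to illustrate. The sketch below is a simplified assumption of my own - scoring each derelict object as mass times an estimated collision probability, then asking what share of the total modeled risk the top-N objects carry - not the paper's actual model; every name and number in it is invented.

```python
# A simplified illustration (an assumption, not McKnight's actual methodology) of a
# "debris-generating potential" ranking: score each derelict object by mass times an
# estimated collision probability, then compute the share of total risk carried by
# the top-N objects.
from dataclasses import dataclass


@dataclass
class Derelict:
    name: str
    mass_kg: float
    annual_collision_prob: float  # assumed input, e.g. derived from conjunction statistics

    @property
    def score(self) -> float:
        return self.mass_kg * self.annual_collision_prob


def top_n_share(objects: list, n: int) -> float:
    """Fraction of the summed risk score contributed by the n highest-scoring objects."""
    ranked = sorted(objects, key=lambda o: o.score, reverse=True)
    total = sum(o.score for o in ranked)
    return sum(o.score for o in ranked[:n]) / total if total else 0.0


if __name__ == "__main__":
    # Toy population with made-up masses and probabilities.
    cloud = [Derelict(f"rocket-body-{i}", 4000 if i < 50 else 800, 1e-4 / (1 + i))
             for i in range(500)]
    print(f"Top 10 objects carry {top_n_share(cloud, 10):.0%} of the modeled risk")
    print(f"Top 50 objects carry {top_n_share(cloud, 50):.0%} of the modeled risk")
```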
"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening."After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore." Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once - it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out... Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design. And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean." The article condlues that "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens - it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."BR> So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves: Verify artifacts with signatures or provenance tools. Pin dependencies to specific, trusted versions. Generate and track SBOMs so you know exactly what's in your stack. Scan continuously, not just at the point of install.Read more of this story at Slashdot.
A computer-generated actress appearing in Instagram shorts now has a talent agent, reports the Los Angeles Times. The massive screen actors union SAG-AFTRA "weighed in with a withering response." SAG-AFTRA believes creativity is, and should remain, human-centered. The union is opposed to the replacement of human performers by synthetics. To be clear, "Tilly Norwood" is not an actor, it's a character generated by a computer program that was trained on the work of countless professional performers - without permission or compensation. It has no life experience to draw from, no emotion and, from what we've seen, audiences aren't interested in watching computer-generated content untethered from the human experience. It doesn't solve any "problem" - it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry. Additionally, signatory producers should be aware that they may not use synthetic performers without complying with our contractual obligations, which require notice and bargaining whenever a synthetic performer is going to be used. "They are taking our professional members' work that has been created, sometimes over generations, without permission, without compensation and without acknowledgment, building something new," SAG-AFTRA President Sean Astin told the Los Angeles Times in an interview: "But the truth is, it's not new. It manipulates something that already exists, so the conceit that it isn't harming actors - because it is its own new thing - ignores the fundamental truth that it is taking something that doesn't belong to them," Astin said. "We want to allow our members to benefit from new technologies," Astin said. "They just need to know that it's happening. They need to give permission for it, and they need to be bargained with...." Some actors called for a boycott of any agents who decide to represent Norwood. "Read the room, how gross," In the Heights actor Melissa Barrera wrote on Instagram. "Our members reserve the right to not be in business with representatives who are operating in an unfair conflict of interest, who are operating in bad faith," Astin said. But this week the head of a new studio from startup Luma AI "said all the big companies and studios were working on AI-assisted projects," writes Deadline - and then claimed "being under NDA, she was not in a position to announce any of the details."
"A group of researchers from the University of California, Irvine, have developed a way to use the sensors in high-quality optical mice to capture subtle vibrations and convert them into audible data," reports Tom's Hardware:[T]he high polling rate and sensitivity of high-performance optical mice pick up acoustic vibrations from the surface where they sit. By running the raw data through signal processing and machine learning techniques, the team could hear what the user was saying through their desk. Mouse sensors with a 20,000 DPI or higher are vulnerable to this attack. And with the best gaming mice becoming more affordable annually, even relatively affordable peripherals are at risk.... [T]his compromise does not necessarily mean a complicated virus installed through a backdoor - it can be as simple as an infected FOSS that requires high-frequency mouse data, like creative apps or video games. This means it's not unusual for the software to gather this data. From there, the collected raw data can be extracted from the target computer and processed off-site. "With only a vulnerable mouse, and a victim's computer running compromised or even benign software (in the case of a web-based attack surface), we show that it is possible to collect mouse packet data and extract audio waveforms," the researchers state. The researchers created a video with raw audio samples from various stages in their pipeline on an accompanying web site where they calculate that "the majority of human speech" falls in a frequency range detectable by their pipeline. While the collected signal "is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization," the researchers augment it with "successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech." They've titled their paper Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors. The paper's conclusion? "The increasing precision of optical mouse sensors has enhanced user interface performance but also made them vulnerable to side-channel attacks exploiting their sensitivity." Thanks to Slashdot reader jjslash for sharing the article.Read more of this story at Slashdot.
"More than 800,000 drivers for ride-hailing companies in California will soon be able to join a union," reports the Associated Press, "and bargain collectively for better wages and benefits under a measure signed Friday by Gov. Gavin Newsom."Supporters said the new law will open a path for the largest expansion of private sector collective bargaining rights in the state's history. The legislation is a significant compromise in the yearslong battle between labor unions and tech companies. California is the second state where Uber and Lyft drivers can unionize as independent contractors. Massachusetts voters passed a ballot referendum in November allowing unionization, while drivers in Illinois and Minnesota are pushing for similar rights... The collective bargaining measure now allows rideshare workers in California to join a union while still being classified as independent contractors and requires gig companies to bargain in good faith. "The new law doesn't apply to drivers for delivery apps like DoorDash."Read more of this story at Slashdot.
ScienceAlert writes that some of the tiny nanoplastic fragments present in soil "can make their way into the edible parts of vegetables, research has found." A team of scientists from the University of Plymouth in the UK placed radishes into a hydroponic (water-based) system containing polystyrene nanoparticles. After five days, almost 5% of the nanoplastics had made their way into the radish roots. A quarter of those were in the edible, fleshy roots, while a tenth had traveled up to the higher leafy shoots, despite anatomical features within the plants that typically screen harmful material from the soil. "Plants have a layer within their roots called the Casparian strip, which should act as a form of filter against particles, many of which can be harmful," says physiologist Nathaniel Clark. "This is the first time a study has demonstrated nanoplastic particles could get beyond that barrier, with the potential for them to accumulate within plants and be passed on to anything that consumes them...." There are some limitations to the study, as it didn't use a real-world farming setup. The concentration of plastics in the liquid solution is higher than estimated for soil, and only one type of plastic and one kind of vegetable were tested. Nevertheless, the basic principle stands: the smallest plastic nanoparticles can apparently sneak past protective barriers in plants, and from there into the food we eat... "There is no reason to believe this is unique to this vegetable, with the clear possibility that nanoplastics are being absorbed into various types of produce being grown all over the world," says Clark. The research has been published in Environmental Research.
"It's not just you. The internet is getting worse, fast," writes Cory Doctorow. Sunday he shared an excerpt from his upcoming book Enshittification: Why Everything Suddenly Got Worse and What to Do About It. He succinctly explains "this moment we're living through, this Great Enshittening" using Amazon as an example. Platforms amass users, but then abuse them to make things better for their business customers. And then they abuse those business customers too, abusing everybody while claiming all the value for themselves. "And become a giant pile of shit." So first Amazon subsidized prices and shipping, then locked in customers with Prime shipping subscriptions (while adding the chains of DRM to its ebooks and audiobooks)...These tactics - Prime, DRM and predatory pricing - make it very hard not to shop at Amazon. With users locked in, to proceed with the enshittification playbook, Amazon needed to get its business customers locked in, too... [M]erchants' dependence on those customers allows Amazon to extract higher discounts from those merchants, and that brings in more users, which makes the platform even more indispensable for merchants, allowing the company to require even deeper discounts... [Amazon] uses its overview of merchants' sales, as well as its ability to observe the return addresses on direct shipments from merchants' contracting factories, to cream off its merchants' bestselling items and clone them, relegating the original seller to page umpty-million of its search results. Amazon also crushes its merchants under a mountain of junk fees pitched as optional but effectively mandatory. Take Prime: a merchant has to give up a huge share of each sale to be included in Prime, and merchants that don't use Prime are pushed so far down in the search results, they might as well cease to exist. Same with Fulfilment by Amazon, a "service" in which a merchant sends its items to an Amazon warehouse to be packed and delivered with Amazon's own inventory. This is far more expensive than comparable (or superior) shipping services from rival logistics companies, and a merchant that ships through one of those rivals is, again, relegated even farther down the search rankings. All told, Amazon makes so much money charging merchants to deliver the wares they sell through the platform that its own shipping is fully subsidised. In other words, Amazon gouges its merchants so much that it pays nothing to ship its own goods, which compete directly with those merchants' goods.... Add all the junk fees together and an Amazon seller is being screwed out of 45-51 cents on every dollar it earns there. Even if it wanted to absorb the "Amazon tax" on your behalf, it couldn't. Merchants just don't make 51% margins. So merchants must jack up prices, which they do. A lot... [W]hen merchants raise their prices on Amazon, they are required to raise their prices everywhere else, even on their own direct-sales stores. This arrangement is called most-favoured-nation status, and it's key to the U.S. Federal Trade Commission's antitrust lawsuit against Amazon... If Amazon is taxing merchants 45-51 cents on every dollar they make, and if merchants are hiking their prices everywhere their goods are sold, then it follows you're paying the Amazon tax no matter where you shop - even the corner mom-and-pop hardware store. It gets worse. On average, the first result in an Amazon search is 29% more expensive than the best match for your search. 
Click any of the top four links on the top of your screen and you'll pay an average of 25% more than you would for your best match - which, on average, is located 17 places down in an Amazon search result. Doctorow knows what we need to do:

- Ban predatory pricing - "selling goods below cost to keep competitors out of the market (and then jacking them up again)."
- Impose structural separation, "so it can either be a platform, or compete with the sellers that rely on it as a platform."
- Curb junk fees, "which suck 45-51 cents on every dollar merchants take in."
- End its most favoured nation deal, which forces merchants "to raise their prices everywhere else, too."
- Unionise drivers and warehouse workers.
- Treat rigged search results as the fraud they are.

These are policy solutions. (Because "You can't shop your way out of a monopoly," Doctorow warns.) And otherwise, as Doctorow says earlier, "Once a company is too big to fail, it becomes too big to jail, and then too big to care." In the meantime, Doctorow also makes up a new word - "the enshitternet" - calling it "a source of pain, precarity and immiseration for the people we love. The indignities of harassment, scams, disinformation, surveillance, wage theft, extraction and rent-seeking have always been with us, but they were a minor sideshow on the old, good internet and they are the everything and all of the enshitternet." Thanks to long-time Slashdot readers mspohr and fjo3 for sharing the article.
Friday OpenAI CEO Sam Altman announced two changes coming "soon" to Sora: First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls... Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable. "We are hearing from a lot of rightsholders who are very excited for this new kind of 'interactive fan fiction'," Altman wrote, "and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)."
There's an 85-second ad (starring a humanoid robot) that argues "Technology promised to save us time. Instead it stole our focus. Opera Neon gives you both back." Or, as BleepingComputer describes it, Opera Neon "is a new browser that puts AI in control of your tabs and browsing activities, but it'll cost $19.90 per month." It'll do tasks for you, open websites for you, manage tabs for you, and listen to you. The idea behind these agentic browsers is to put AI in control. "Neon acts at your command, opening tabs, conducting research, finding the best prices, assessing security, whatever you need. It delivers outcomes you can use, share, and build on," Opera noted... As spotted on X, Opera Neon, the premium AI browser for Windows & macOS, costs $59.90 for nine months. This is an early bird offer, but when the offer expires, Opera Neon will cost $19.90 per month. The browser's web page says Opera Neon "can handle everyday tasks for you, like filling in forms, placing orders, replying to emails, or tidying up files. Reusable cards turn repeated chores into single-step tasks, letting you focus on the work that matters most to you." Opera describes itself as "the company that gave you tabs..."
The Washington Post notes AI's "increasingly outsize role" in propping up America's economic fortunes. "Last week, the United States reported that the economy expanded at a rate of 1.6 percent in the first half of the year, with most of that growth driven by AI spending. Without AI investment, growth would have been at about a third of that rate, according to data from the Bureau of Economic Analysis." The huge economic influence of AI spending illustrates how Silicon Valley is placing a bet of unprecedented scale that the technology will revolutionize every aspect of life and work. Its sway suggests there will be economic damage far beyond Silicon Valley if that bet doesn't work out or companies pull back. Google, Meta, Microsoft and Amazon are on track to spend nearly $400 billion this year on data centers... Concern about a potential bubble in AI investment has recently grown in technology and financial circles. ChatGPT and other AI tools are hugely popular with companies and consumers, and hundreds of billions of dollars have been sunk into AI ventures over the past three years. But few of the new initiatives are profitable, and huge profits will be needed for the immense investments to pay off... "I'm getting more and more skeptical and more and more concerned with what's happening" with artificial intelligence, said Andrew Odlyzko, an economic historian and University of Minnesota emeritus professor who has studied financial bubbles closely, including the telecom bubble that collapsed in 2001 as part of the dot-com crash. Some industry insiders have expressed concern that the latest AI releases have fallen short of expectations, suggesting the technology may not advance enough to pay back the huge investments being made, he said. "AI is a craze," Odlyzko said... [The Federal Reserve's August "beige book" summarizes interviews with business owners across the country, according to the article - and it found surging investments in AI data centers, which could tie their fortunes to other sectors.] That's boosting demand for electricity and trucking in the Atlanta region, a hot spot for the facilities, and creating new projects for commercial real estate developers in the Philadelphia region. Because tech companies now dominate public markets, any change in their fortunes and share prices can also have a powerful influence on stock indexes, 401(k)s and the wider economy... Stock market slumps can have knock-on effects by undercutting the confidence of American businesses and consumers, leading them to spend less, said Gregory Daco [chief economist at strategy consulting firm EY-Parthenon]... "That directly affects economic activity," he said, potentially widening the economic fallout... Goldman Sachs analysts wrote in a Sept. 4 note to clients that even if AI investment works out for companies like Google, there will be an "inevitable slowdown" in data center construction. That will cut revenue to companies providing the projects with chips and electricity, the note said. In a more extreme scenario where Big Tech pulls back spending to 2022 levels, the entire S&P 500 would lose 30 percent of the revenue growth Wall Street currently expects next year, the analysts wrote. The AI bubble is 17 times the size of the dot-com frenzy - and four times the subprime bubble, according to estimates in a recent note from independent research firm the MacroStrategy Partnership (as reported by MarketWatch).
And "never before has so much money been spent so rapidly on a technology that, for all its potential, remains somewhat unproven as a profit-making business model," writes Bloomberg, adding that OpenAI and other large tech companies are "relying increasingly on debt to support their unprecedented spending." (Although Bloomberg also notes that ChatGPT alone has roughly 700 million weekly users, and that last month Anthropic reported roughly three quarters of companies are using Claude to automate work.)Read more of this story at Slashdot.
The book Life 3.0 recounts a 2017 conversation where Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution'," remembers an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..." "As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics... "I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people - and can then be more powerful - why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species - most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..." I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans - and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...." You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad - right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes...
While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...
Currently DNA synthesis companies "deploy biosecurity software designed to guard against nefarious activity," reports the Washington Post, "by flagging proteins of concern - for example, known toxins or components of pathogens." But Microsoft researchers discovered "up to 100 percent" of AI-generated ricin-like proteins evaded detection - and worked with a group of leading industry scientists and biosecurity experts to design a patch. Microsoft's chief science officer called it "a Windows update model for the planet." "We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can." But is that enough? Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap - we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind." The Washington Post notes not every company deploys biosecurity software. But "A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation." "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.
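Baker's "log everything, cross-reference later" suggestion is an architecture more than a product, but a toy sketch shows the shape of it: record every synthesis order in an append-only index keyed by its sequence and its k-mers (short subsequences), so a worrying sequence found later can be traced back to orders that share enough of them. Everything here - class names, the k-mer length, the match threshold - is an illustrative assumption of mine, not an existing screening system.

```python
# Toy sketch (an illustration, not an existing system) of the "log all DNA synthesis"
# idea quoted above: orders are recorded in a local index, and a later query can be
# cross-referenced either exactly (via a hash receipt) or approximately via shared k-mers.
import hashlib
from collections import defaultdict

K = 31  # k-mer length; a common choice in sequence indexing, assumed here


def canonical(seq: str) -> str:
    """Normalize to uppercase A/C/G/T only."""
    return "".join(c for c in seq.upper() if c in "ACGT")


def kmers(seq: str, k: int = K):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


class SynthesisLog:
    def __init__(self):
        self.orders = {}               # order_id -> canonical sequence
        self.index = defaultdict(set)  # k-mer -> {order_id, ...}

    def record(self, order_id: str, seq: str) -> str:
        """Append an order to the log; returns a hash receipt for the entry."""
        seq = canonical(seq)
        self.orders[order_id] = seq
        for km in kmers(seq):
            self.index[km].add(order_id)
        return hashlib.sha256(seq.encode()).hexdigest()

    def cross_reference(self, query: str, min_shared: int = 5):
        """Return order IDs sharing at least `min_shared` k-mers with the query,
        most similar first - tolerant of small edits to the sequence."""
        hits = defaultdict(int)
        for km in kmers(canonical(query)):
            for oid in self.index.get(km, ()):
                hits[oid] += 1
        return sorted((oid for oid, n in hits.items() if n >= min_shared),
                      key=lambda oid: -hits[oid])
```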
"When someone searches for 'James Bond' on Prime Video now, all of the classic films will show up..." notes Parade. But recently Amazon's streaming service had tried new thumbnails with "matching minimalist backgrounds," so every Bond actor - from Sean Connery to Daniel Craig - "had a stylish image with '007' emblazoned over a color background." But in most of those "stylized" images, James Bond's guns were edited out. It looks like Amazon backed off. On my TV and on my tablet, selecting Dr. No now brings up a page where Bond is holding his gun. (Just like in the original publicity photo.) And there's also guns in the key art for The Spy Who Loved Me, A View to a Kill, and License to Kill. "Perhaps feeling shame for the terrible botch job on the artwork, not to mention the idea in the first place, Amazon Prime has now reinstated the previous key art across its streaming service," notes the unofficial James Bond fan site MI6. (In most cases guns still aren't shown, but they seem to achieve this by showing a photo from the movie.) That blog post includes a gallery preserving copies of Amazon's original "stylized" images. They'd written Thursday that Amazon didn't just use cropping. "In some cases the images have been digitally manipulated to varying levels of success."Read more of this story at Slashdot.
If you upload an image to serve as the inspiration for an AI-generated video from OpenAI's Sora, "the app will reject your image if it detects a face - any face," writes Mashable. (Unless that person has agreed to participate.) All Sora videos also include a watermark, notes PC Magazine, and Sora banned the creation of AI-generated videos showing public figures. "But it turns out the policy doesn't apply to dead celebrities..." Unlike lower-quality deepfakes, many of the Sora videos appear disturbingly realistic and accurately mimic the voices and facial expressions of deceased celebrities. Some of the clips even contain licensed music... [A]ccording to OpenAI, the videos are fair game. "We don't have a comment to add, but we do allow the generation of historical figures," the company tells PCMag. CNBC reported Saturday that Sora users have also "flooded the platform with artificial intelligence-generated clips of popular brands and animated characters." They noted Sora generated videos with clearly-copyrighted characters like Ronald McDonald, Simpsons characters, Pikachu, and Patrick Star from "SpongeBob SquarePants." (As Cracked.com puts it, "Ever wish 'South Park' was two minutes long and not funny?") OpenAI's "opt-out" policy for copyright holders was unusual, CNBC writes, since "Typically, third parties have to get explicit permission to use someone's work under copyright law" (as explained by Jason Bloom, partner/chair of the intellectual property litigation practice group at law firm Haynes Boone). "You can't just post a notice to the public saying we're going to use everybody's works, unless you tell us not to," he said. "That's not how copyright works." "A lot of the videos that people are going to generate of these cartoon characters are going to infringe copyright," Mark Lemley, a professor at Stanford Law School, said in an interview. "OpenAI is opening itself up to quite a lot of copyright lawsuits by doing this..."
Toyota sold just 61 BZ models in September, reports Electrek. "Including the Lexus RZ, which managed 86 sales, Toyota sold just 147 all-electric vehicles in the US last month, over 90% less than the 1,847 it sold in September 2024." Toyota's total sales were up 14% with over 185,700 vehicles sold, meaning EVs accounted for less than 0.1%... So, why is Toyota struggling to sell EVs when the market is booming? For one, Toyota recalled over 95,000 electric vehicles last month, including the bZ4X, Lexus RZ, and Subaru Solterra, all of which are built on the same platform. The recall was due to a faulty defroster, and Toyota instructed its dealers to halt sales of the bZ4X, Lexus RZ, and Subaru Solterra. Toyota hopes to turn things around with a new and improved lineup. The 2026 Toyota BZ (formerly the bZ4X) is arriving at US dealerships, promising to fix some of the biggest complaints with the outgoing electric SUV. Powered by a larger 74.7 kWh battery, the 2026 Toyota BZ offers up to 314 miles of driving range, a 25% improvement from the 2025 bZ4X... Toyota's new electric SUV also features a built-in NACS charge port, allowing for recharging at Tesla Superchargers. It also features a new thermal management system and battery preconditioning, which allow it to charge from 10% to 80% in about 30 minutes... It's not just in the US that Toyota's EV sales crashed last month, either. In its home market of Japan, Toyota (including Lexus) sold just 18 EVs in September. The Japanese auto giant is betting on new models to drive growth.
"Microsoft buys a lot of GPUs from both Nvidia and AMD," writes the Register. "But moving forward, Redmond's leaders want to shift the majority of its AI workloads from GPUs to its own homegrown accelerators..."Driving the transition is a focus on performance per dollar, which for a hyperscale cloud provider is arguably the only metric that really matters. Speaking during a fireside chat moderated by CNBC on Wednesday, Microsoft CTO Kevin Scott said that up to this point, Nvidia has offered the best price-performance, but he's willing to entertain anything in order to meet demand. Going forward, Scott suggested Microsoft hopes to use its homegrown chips for the majority of its datacenter workloads. When asked, "Is the longer term idea to have mainly Microsoft silicon in the data center?" Scott responded, "Yeah, absolutely... Microsoft is reportedly in the process of bringing a second-generation Maia accelerator to market next year that will no doubt offer more competitive compute, memory, and interconnect performance... It should be noted that AI accelerators aren't the only custom chips Microsoft has been working on. Redmond also has its own CPU called Cobalt and a whole host of platform security silicon designed to accelerate cryptography and safeguard key exchanges across its vast datacenter domains.Read more of this story at Slashdot.
The Greater Manchester Police force has 12,677 employees. But they've now suspended work-from-home privileges, reports the BBC, "following an investigation into so-called 'key-jamming', which can allow people to falsely appear to be working." Twenty-six police officers, staff and contractors are facing misconduct proceedings following the probe, the force said. One constable told a hearing that a police detective working from home had made it look like his computer was in use on 38 different occasions over 12 days, according to an earlier BBC article. The evidence "showed lengthy periods where the only activity is single keystrokes, pressing the 'H' key about 30 times, between 10:28 and 11:56 GMT on 3 December, and then the 'I' key more than 16,000 times." The detective "used key jamming for 45 hours out of a total of 85 he was logged in for and was frequently away from the keyboard for half of his working day." The constable said the detective's motivation was "laziness" - and the detective has already resigned. Thanks to long-time Slashdot reader Bruce66423 for sharing the article.
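The evidence described above - long stretches where the only input is a single key pressed thousands of times - lends itself to a simple heuristic. The sketch below is a hypothetical illustration, not the force's actual tooling: it groups keystroke events into fixed time windows and flags any window dominated by one repeated key; the function name and thresholds are invented.

```python
# Hypothetical detector for the pattern described above: activity windows in which
# nearly all keystrokes are one repeated key. Not the force's actual tooling.
from collections import Counter
from datetime import datetime, timedelta


def flag_key_jamming(events, window_minutes=10, min_events=200, dominance=0.95):
    """events: iterable of (timestamp, key) pairs. Groups keystrokes into fixed
    windows and flags any window where a single key dominates the activity."""
    buckets = {}
    for ts, key in events:
        bucket = ts.replace(minute=(ts.minute // window_minutes) * window_minutes,
                            second=0, microsecond=0)
        buckets.setdefault(bucket, []).append(key)
    flagged = []
    for bucket, keys in sorted(buckets.items()):
        if len(keys) < min_events:
            continue  # too little activity to judge
        key, count = Counter(keys).most_common(1)[0]
        if count / len(keys) >= dominance:
            flagged.append((bucket, key, len(keys)))
    return flagged


if __name__ == "__main__":
    # Synthetic example: a log that is nothing but the 'I' key, pressed twice a second.
    t0 = datetime(2024, 12, 3, 10, 28)
    fake_log = [(t0 + timedelta(seconds=0.5 * i), "I") for i in range(16000)]
    for start, key, n in flag_key_jamming(fake_log):
        print(f"{start:%H:%M} window: '{key}' pressed {n} times")
```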
From 10 a.m. to 7 p.m. today (EDT), the Free Software Foundation celebrates its 40th anniversary with an online and in-person event. "We will broadcast the talks and workshops via a fully free software livestream on fsf.org/live," according to the FSF's official "FSF40 Celebration" page. "Everyone will be able to join the discussion via the #fsf40 IRC channel on Libera.Chat." "4 decades, 4 freedoms, 4 all users" is the event's slogan. And during the ceremony, a 40th-anniversary cake was sliced by newly-elected FSF president Ian Kelling (who was unanimously confirmed by FSF board members): Kelling, age 43, has held the role of a board member and a voting member since March 2021. The board said of Kelling's confirmation: "His hands-on technical experience resulting from his position as the organization's senior systems administrator proved invaluable for his work on the board of directors... He has the technical knowledge to speak with authority on most free software issues, and he has a strong connection with the community as an active speaker and blogger." Kelling earned a bachelor's degree in computer science and is a continuous user, developer, and advocate for free software. His personal commitment to complete software freedom has been shaped by his past experiences working as a software developer for proprietary software companies while using, learning, and contributing to GNU/Linux on his own time. "Ian has shown good judgment on the board, and a firm commitment to the free software movement," FSF founder and Chief GNUisance Richard Stallman said. Outgoing FSF President and long-time board member Geoff Knauth added: "Since joining the board in 2021, Ian has shown a clear understanding of the free software philosophy in today's technology, and a strong vision. He recognizes threats in upcoming technologies, promotes transparency, has played a significant role in designing and implementing the new board recruitment processes, and has always adhered to ethical principles. He has also given me valuable advice at critical moments, for which I am very grateful..." Kelling will continue to fill the role of senior systems administrator for the FSF, which he has held since 2017, where he leads the FSF's tech team under the direction of Zoe Kooyman, executive director of the FSF. True to the FSF's tradition for this role, he takes on the governance role as a volunteer. Upcoming on the livestream:

- Free Software Foundation trivia
- LibreLocal group lightning talks
- A panel with the FSF, Electronic Frontier Foundation (EFF), F-Droid, and Sugar Labs
An anonymous reader shared this report from the Washington Post: Women tend to live longer than men. There are traditional explanations: Men smoke more. They drink more. They tend to engage in riskier behavior. But the fact that this lifespan gap holds true regardless of country or century indicates something deeper is also at play. A growing body of evidence suggests that women's relative longevity may derive, in part, from having double X chromosomes, a redundancy that protects them against harmful mutations. That theory was further bolstered Wednesday with the publication of the most sweeping analysis to date of the lifespan differences between males and females in more than 1,000 mammal and bird species... If a baby has a pair of X chromosomes, she's a girl. If the baby inherits an X chromosome and a Y chromosome, he's a boy. In birds, however, the situation is reversed. Female birds have a pair of unlike sex chromosomes while males have the like pair... For their study, Colchero, Staerk and their colleagues collected data on the lifespans of 528 mammal species and 648 bird species kept in zoos. The team found that most other mammals are like humans, with the females of nearly three-fourths of mammal species outliving their male counterparts. But in birds, 68 percent of species studied showed a bias toward male longevity, as expected from their chromosomal makeup.
Long-time Slashdot reader theodp writes: CBS News has a TL;DR video report, but Jeremy Stern's earlier epic Class Dismissed [at Collosus.com] offers a deep dive into Alpha School, "the teacherless, homeworkless, K-12 private school in Austin, Texas, where students have been testing in the top 0.1% nationally by self-directing coursework with AI tutoring apps for two hours a day. Alpha students are incentivized to complete coursework to 'mastery-level' (i.e., scoring over 90%) in only two hours via a mix of various material and immaterial rewards, including the right to spend the other four hours of the school day in 'workshops,' learning things like how to run an Airbnb or food truck, manage a brokerage account or Broadway production, or build a business or drone." Founder MacKenzie Larson's dream that "kids must love school so much they don't want to go on vacation" drew the attention of - and investments of money and time from - mysterious tech billionaire Joe Liemandt, who sent his own kids to Larson's school and now aims to bring the experience to the rest of the world. "When GenAI hit in 2022," Liemandt said, "I took a billion dollars out of my software company. I said, 'Okay, we're going to be able to take MacKenzie's 2x in 2 hours groundwork and get it out to a billion kids.' It's going to cost more than that, but I could start to figure it out. It's going to happen. There's going to be a tablet that costs less than $1,000 that is going to teach every kid on this planet everything they need to know in two hours a day and they're going to love it. I really do think we can transform education for everybody in the world. So that's my next 20 years. I literally wake up now and I'm like, I'm the luckiest guy in the world. I will work 7 by 24 for the next 20 years to fricking do this. The greatest 20 years of my life are right ahead of me. I don't think I'm going to lose. We're going to win." Of course, Stern writes at Collosus.com, there will be questions about this model of schooling, but he asks: "Suppose that from kindergarten through 12th grade, your child's teachers were, in essence, stacks of machines. Suppose those machines unlocked more of your child's academic potential than you knew was possible, and made them love school. Suppose the schooling they loved involved vision monitoring and personal data capture. Suppose that surveillance architecture enabled them to outperform your wildest expectations on standardized tests, and in turn gave them self-confidence and self-esteem, and made their own innate potential seem limitless.... Suppose poor kids had a reason to believe and a way to show they're just as academically capable as rich kids, and that every student on Earth could test in what we now consider the top 10%. Suppose it allowed them to spend two-thirds of their school day on their own interests and passions. Suppose your child's deep love of school minted a new class of education billionaires. If you shrink from such a future, by which principle would you justify stifling it?"
The food delivery robots that arrived in Atlanta in June "are not our friends," argues a headline at CNN. The four-wheeled Serve Robotics machines "get confused at crosswalks. They move with the speed and caution of a first-time driver, stilted and shy, until they suddenly speed up without warning. Their four wheels look like they were made for off-roading, but they still get stuck in the cracks of craggy sidewalks. Most times I see the bots, they aren't moving at all... "Cyclists swerve to avoid them like any other obstacle in the road. Patrons of Shake Shack (a national partner of Serve) weave around the mess of robots parked in front of the restaurant to make their way inside and place orders on iPads... The dawn of everyday, "friendly" robots may be here, but they haven't proven themselves useful - or trustworthy - yet. "People think they are your friends, but they're actually cameras and microphones of corporations," said Joanna Bryson, a longtime AI scholar and professor of ethics and technology at the Hertie School in Berlin. "You're right to be nervous..." When robots show up in a city, it's often not because the residents of said city actively wanted them there or had a say in their arrival, said Edward Ongweso Jr. [a researcher at the Security in Context initiative, a tech journalist and self-proclaimed "decelerationist" urging a slower rollout for Silicon Valley tech pioneers and civic leaders embracing untested and unregulated technology]... "They're being rolled out without any sort of input from people, and as a result, in ways that are annoying and inconvenient," Ongweso Jr. said. "I suspect that people would feel a lot differently if they had a choice ... 'what kind of robots are we interested in rolling out in our homes, in our workplaces, on our college campuses or in our communities?'" Delivery robots aren't unique to Atlanta. AI-driven companies including Avride and Coco Robotics have sent fleets of delivery robots to big cities like Chicago, Dallas and Jersey City, as well as sleepy college towns... "They're popping up everywhere," Ongweso Jr. continued, "because there's sort of a realization that you have to convince people to view them as inevitable. The way to do that is to just push it into as many places as possible, and have these spectacle demonstrations, get some friendly coverage, try to figure out the ways in which you're selling this as the only alternative.... If you humanize it, you're more willing to entertain it and rationalize it being in your area - 'That's just Jeffrey,' or whatever they name it - instead of seeing it for what it is, which is a bunch of investors privately encroaching on a community or workplace," Ongweso Jr. said. "It's not the future. It's a business model." Serve Robotics CEO Ali Kashani told CNN their goal in Atlanta was reducing traffic - and that the robots' average delivery distance there was under a mile, taking about 18 minutes per delivery. Serve Robotics has also launched their robots in Chicago, Los Angeles, Miami, Dallas-Fort Worth and Atlanta, according to the site Robotics 247, as part of an ongoing collaboration with Uber Eats. (Although after the robots launched in Los Angeles, a man in a mobility scooter complained the slow-moving robot swerved in front of him.) And "residents of other cities have had to rescue them when they've been felled by weather," reports CNN.
CNN also spoke to Dylan Losey, an assistant professor of mechanical engineering at Virginia Tech who studies human-robot interaction, who notes that the robots' AI algorithms are "completely unregulated... We don't know if a third party has checked the hardware and software and deemed the system 'safe' - in part because what it means for these systems to be 'safe' is not fully understood or standardized." (CNN's reporter adds that "the last time I got close to a bot, to peer down at a flier someone left on top of it, it revved at me loudly. Perhaps they can sense a hater.") But Serve's CEO says there's one crucial way robot delivery will be cheaper than human delivery: "You don't have to tip the robots."
"A small number of researchers are making real progress trying to create computers out of living cells," reports the BBC:Among those leading the way are a group of scientists in Switzerland, who I went to meet. One day, they hope we could see data centres full of "living" servers which replicate aspects of how artificial intelligence (AI) learns - and could use a fraction of the energy of current methods. That is the vision of Dr Fred Jordan, co-founder of the FinalSpark lab I visited. We are all used to the ideas of hardware and software in the computers we currently use. The somewhat eyebrow-raising term Dr Jordan and others in the field use to refer to what they are creating is "wetware". In simple terms, it involves creating neurons which are developed into clusters called organoids, which in turn can be attached to electrodes - at which point the process of trying to use them like mini-computers can begin... For FinalSpark, the process begins with stem cells derived from human skin cells, which they buy from a clinic in Japan. The actual donors are anonymous... In the lab, FinalSpark's cellular biologist Dr Flora Brozzi handed me a dish containing several small white orbs. Each little sphere is essentially a tiny, lab-grown mini-brain, made out of living stem cells which have been cultured to become clusters of neurons and supporting cells - these are the "organoids"... After undergoing a process which can last several months, the organoids are ready to be attached to an electrode and then prompted to respond to simple keyboard commands... Electrical stimulations are important first steps towards the team's bigger goal of triggering learning in the biocomputer's neurons so they can eventually adapt to perform tasks... FinalSpark are not the only scientists working in the biocomputing space. Australian firm Cortical Labs announced in 2022 that it had managed to get artificial neurons to play the early computer game Pong. In the US, researchers at Johns Hopkins University are also building "mini-brains" to study how they process information - but in the context of drug development for neurological conditions like Alzheimer's and autism. Thanks to long-time Slashdot reader fjo3 for sharing the news.Read more of this story at Slashdot.
Amazon will be adding facial recognition to its camera-equipped Ring doorbells for the first time in December, according to the Washington Post. "While the feature will be optional for Ring device owners, privacy advocates say it's unfair that wherever the technology is in use, anyone within sight will have their faces scanned to determine who's a friend or stranger." The Ring feature is "invasive for anyone who walks within range of your Ring doorbell," said Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center. "They are not consenting to this." Ring spokeswoman Emma Daniels said that Ring's features empower device owners to be responsible users of facial recognition and to comply with relevant laws that "may require obtaining consent prior to identifying people..." Other companies, including Google, already offer facial recognition for connected doorbells and cameras. You might use similar technology to unlock your iPhone or tag relatives in digital photo albums. But privacy watchdogs said that Ring's use of facial recognition poses added risks, because the company's products are embedded in our neighborhoods and have a history of raising social, privacy and legal questions... It's typically legal to film in public places, including your doorway. And in most of the United States, your permission is not legally required to collect or use your faceprint. Privacy experts said that Ring's use of the technology risks crossing ethical boundaries because of its potential for widespread use in residential areas without people's knowledge or consent. You choose to unlock your iPhone by scanning your face. A food delivery courier, a child selling candy or someone walking by on the sidewalk is not consenting to have their face captured, stored and compared against Ring's database, said Adam Schwartz, privacy litigation director for the consumer advocacy group Electronic Frontier Foundation. "It's troubling that companies are making a product that by design is taking biometric information from people who are doing the innocent act of walking onto a porch," he said. Ring's spokesperson said facial recognition won't be available in some locations, according to the article, including Texas and Illinois, which passed laws fining companies for collecting face information without permission. But the Washington Post heard another possible worst-case scenario from Calli Schroeder of the Electronic Privacy Information Center: databases of identified faces being stolen by cyberthieves, misused by Ring employees, or shared with outsiders such as law enforcement. Amazon says they're "reuniting lost dogs through the power of AI" in their announcement this week, thanks to "an AI-powered community feature that enables your outdoor Ring cameras to help reunite lost dogs with their families... When a neighbor reports a lost dog in the Ring app, nearby outdoor Ring cameras automatically begin scanning for potential matches." Amazon calls it an example of their vision for "tools that make it easier for neighbors to look out for each other, and create safer, more connected communities." The announcement also touts 10x zoom, enhanced low-light performance, 2K and 4K resolutions, and "advanced AI tuning" for video...Read more of this story at Slashdot.
BrianFagioli shares a report from NERDS.xyz: Signal has introduced the Sparse Post Quantum Ratchet (SPQR), a new upgrade to its encryption protocol that mixes quantum safe cryptography into its existing Double Ratchet. The result, which Signal calls the Triple Ratchet, makes it much harder for even future quantum computers to break private chats. The change happens silently in the background, meaning users do not need to do anything, but once fully rolled out it will make harvested messages useless even to adversaries with quantum power. The company worked with researchers and used formal verification tools to prove the new protocol's security. Signal says the upgrade preserves its guarantees of forward secrecy and post compromise security while adding protection against harvest now, decrypt later attacks. The move raises a bigger question: will this be enough when large scale quantum computers arrive, or will secure messaging need to evolve yet again?Read more of this story at Slashdot.
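To make the hybrid idea concrete: the protection against "harvest now, decrypt later" comes from feeding both a classical Diffie-Hellman secret and a post-quantum KEM secret into each key-derivation step, so an attacker would have to break both to recover the chain keys. Below is a minimal Python sketch of that principle only; it is not Signal's SPQR code, the function name is invented, it uses the third-party cryptography package for X25519 and HKDF, and the ML-KEM shared secret is a placeholder standing in for the output of a real KEM library.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def hybrid_root_step(root_key: bytes, dh_secret: bytes, pq_secret: bytes) -> bytes:
    # Derive the next root key from the previous one plus BOTH shared secrets.
    # Breaking X25519 alone (say, with a future quantum computer) is not enough,
    # because the post-quantum secret is mixed in as well.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=root_key,
        info=b"hybrid-ratchet-sketch",
    ).derive(dh_secret + pq_secret)

# Classical half: an ordinary X25519 Diffie-Hellman exchange.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
dh_secret = alice.exchange(bob.public_key())

# Post-quantum half: placeholder bytes standing in for an ML-KEM shared secret.
pq_secret = b"\x00" * 32

print(hybrid_root_step(b"\x01" * 32, dh_secret, pq_secret).hex())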
An anonymous reader quotes a report from Reuters: Indonesia has suspended TikTok's registration as an electronic systems provider after the company failed to hand over all data relating to the use of its live stream feature, a government official said on Friday. The suspension could in theory prevent access to TikTok, which has more than 100 million accounts based in Indonesia. Alexander Sabar, an official at Indonesia's communications and digital ministry, said in a statement that some accounts with ties to online gambling activities used TikTok's live stream feature during national protests. [...] Sabar said the government had asked the company for its traffic, streaming and monetization data. The company, owned by China's ByteDance, did not provide complete data, citing its internal procedures, Sabar said without giving further detail.Read more of this story at Slashdot.
Google will discontinue Gmailify and POP email support in January 2026, forcing users who rely on these features to switch to IMAP. PCWorld reports: These changes only affect future emails. Emails that have already been synchronized in the Gmail account will remain the same. External accounts can still be used in the Gmail app, but only via IMAP. Google also recommends that users with work or education accounts contact their administrators if a Google Workspace migration is needed. For many Gmail users, these changes will likely mean getting used to the new system. Anyone who previously upgraded their external email accounts with Gmailify or integrated them via POP will have to switch to IMAP by January 2026 at the latest and do without some convenient functions, like spam filters and automatic sorting.Read more of this story at Slashdot.
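For anyone reconnecting an external mailbox the IMAP way, here is a minimal Python sketch using only the standard library's imaplib; the server name, address and app password are placeholders, and the real values come from your email provider's IMAP settings (IMAP access must also be enabled on that account).

import imaplib

IMAP_HOST = "imap.example.com"   # placeholder: your provider's IMAP server
USER = "you@example.com"         # placeholder account
PASSWORD = "app-password-here"   # many providers require an app password

# Connect over TLS, log in, and count unread messages in the inbox.
with imaplib.IMAP4_SSL(IMAP_HOST, 993) as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)
    status, data = conn.search(None, "UNSEEN")
    unread = data[0].split() if status == "OK" else []
    print(f"{len(unread)} unread messages")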
The University of San Francisco issued a campuswide alert after reports of a man using Meta Ray-Ban AI glasses to film students while making "unwanted comments and inappropriate dating questions." Although no violence has been reported, officials said he may be uploading footage to TikTok and Instagram. SFGate reports: University officials said "no threats or acts of violence" have been reported, but they have been unable to identify all students who appear in the videos. They urged any school members affected to alert the app platform and the USF Department of Public Safety. "As a community, we share the responsibility of caring for ourselves, each other, and this place," school officials said in the alert. "By looking out for one another and promptly reporting concerns, we help ensure a safe and supportive environment for all." The glasses feature a small camera that can be used for recording by pressing a button or using voice controls. Meta advises users to act "responsibly" when using the glasses. "Not everyone loves being photographed. Stop recording if anyone expresses that they would rather opt out, and be particularly mindful of others before going live," the company said.Read more of this story at Slashdot.
The SEC has approved the Texas Stock Exchange (TXSE), the first new fully integrated U.S. stock exchange in decades and the only one based in Texas. TXSE is set to launch trading services, as well as exchange-traded products, known as ETPs, and corporate listings, in 2026. CBS News reports: Exchange-traded products are financial instruments that follow the performance of underlying assets such as stocks, indexes or other financial benchmarks. Like stocks, ETPs are traded on public exchanges, allowing investors to buy and sell them throughout the trading day at market prices that fluctuate in real time. TXSE was backed by wealth management giant BlackRock and market maker Citadel Securities, among other firms. The Texas company said in June 2024 that it raised a total of $120 million from more than two dozen investors. TXSE's headquarters in Dallas opened this spring, the group said.Read more of this story at Slashdot.
An anonymous reader quotes a report from TechCrunch: Google is bringing its AI coding agent Jules deeper into developer workflows with a new command-line interface and public API, allowing it to plug into terminals, CI/CD systems, and tools like Slack -- as competition intensifies among tech companies to own the future of software development and make coding more of an AI-assisted task. Until now, Jules -- Google's asynchronous coding agent -- was only accessible via its website and GitHub. On Thursday, the company introduced Jules Tools, a command-line interface that brings Jules directly into the developer's terminal. The CLI lets developers interact with the agent using commands, streamlining workflows by eliminating the need to switch between the web interface and GitHub. It allows them to stay within their environment while delegating coding tasks and validating results. "We want to reduce context switching for developers as much as possible," Kathy Korevec, director of product at Google Labs, told TechCrunch. Jules differs from Gemini CLI in that it focuses on "scoped," independent tasks rather than requiring iterative collaboration. Once a user approves a plan, Jules executes it autonomously, while Gemini CLI needs more step-by-step guidance. Jules also has a public API for workflow and IDE integration, plus features like memory, a stacked diff viewer, PR comment handling, and image uploads -- capabilities not present in the CLI. Gemini CLI is limited to terminals and CI/CD pipelines and is better suited for exploratory, highly interactive use.Read more of this story at Slashdot.
Last month, federal investigators said they dismantled a China-linked plot that aimed to cripple New York City's telecommunications system by overloading cell towers, jamming 911 calls, and disrupting communications. According to law enforcement sources, the plot was even bigger than first thought. "Agents from Homeland Security Investigations found an additional 200,000 SIM cards at a location in New Jersey," according to ABC News. "That's double the 100,000 SIM cards, along with hundreds of servers, that were recently seized at five other vacant offices and apartments in and around the city." From the report: Investigators secured each of those locations, seized the electronics, and are now trying to track down who rented the spaces and filled them with shelves full of gear capable of sending 30 million anonymous text messages every minute, overloading communications and blacking out cellular service in a city that relies on it for emergency response and counterterrorism. According to sources, the investigation began after several high-level people, including at least one with direct access to President Donald Trump, were targeted not only by swatters but also with actual threats received on their private phones. "The potential threat these data centers pose to the public could include shutting down critical resources that the public needs, like the 911 system, or potentially impacting the public's ability to communicate everything, including business transactions," said Don Mihalek, an ABC News contributor who was formerly with the Secret Service.Read more of this story at Slashdot.
OpenAI's valuation has surged to $500 billion after a $6.6 billion secondary stock sale, briefly making it the world's most valuable startup ahead of SpaceX and ByteDance. The Associated Press reports: Current and former OpenAI employees sold $6.6 billion in shares to a group of investors, pushing the privately held artificial intelligence company's valuation to $500 billion, according to a source with knowledge of the deal who was not authorized to discuss it publicly. The investors buying the shares included Thrive Capital, Dragoneer Investment Group and T. Rowe Price, along with Japanese tech giant SoftBank and the United Arab Emirates' MGX, the source said Thursday. The valuation reflects high expectations for the future of AI technology and continues OpenAI's remarkable trajectory from its start as a nonprofit research lab in 2015. But with the San Francisco-based company not yet turning a profit, it could also amplify concerns about an AI bubble if the generative AI products made by OpenAI and its competitors don't meet the expectations of investors pouring billions of dollars into research and development.Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: As we careen toward a future in which Google has final say over what apps you can run, the company has sought to assuage the community's fears with a blog post and a casual "backstage" video. Google has said again and again since announcing the change that sideloading isn't going anywhere, but it's definitely not going to be as easy. The new information confirms app installs will be more reliant on the cloud, and devs can expect new fees, but there will be an escape hatch for hobbyists. Confirming app verification status will be the job of a new system component called the Android Developer Verifier, which will be rolled out to devices in the next major release of Android 16. Google explains that phones must ensure each app has a package name and signing keys that have been registered with Google at the time of installation. This process may break the popular FOSS storefront F-Droid. It would be impossible for your phone to carry a database of all verified apps, so this process may require Internet access. Google plans to have a local cache of the most common sideloaded apps on devices, but for anything else, an Internet connection is required. Google suggests alternative app stores will be able to use a pre-auth token to bypass network calls, but it's still deciding how that will work. The financial arrangement has been murky since the initial announcement, but it's getting clearer. Even though Google's largely automated verification process has been described as simple, it's still going to cost developers money. The verification process will mirror the current Google Play registration fee of $25, which Google claims will go to cover administrative costs. So anyone wishing to distribute an app on Android outside of Google's ecosystem has to pay Google to do so. What if you don't need to distribute apps widely? This is the one piece of good news as developer verification takes shape. Google will let hobbyists and students sign up with only an email for a lesser tier of verification. This won't cost anything, but there will be an unclear limit on how many times these apps can be installed. The team in the video strongly encourages everyone to go through the full verification process (and pay Google for the privilege). We've asked Google for more specifics here.Read more of this story at Slashdot.
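Since the component has not shipped and Google is still deciding details like the pre-auth token, the following Python sketch is purely illustrative of the install-time decision flow described above (check a local cache of common apps first, fall back to a network lookup of the registered package name and signing key, and accept a token from an alternative store); none of these names correspond to real Android APIs.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InstallRequest:
    package_name: str
    signing_key_fingerprint: str
    pre_auth_token: Optional[str] = None  # hypothetical token from an alternative store

# Hypothetical on-device cache of (package, signing key) pairs for common sideloaded apps.
LOCAL_CACHE = {("org.example.app", "ab:cd:ef:12")}

def token_is_valid(token: str) -> bool:
    # Stand-in: Google has not said how pre-auth tokens will be validated.
    return token.startswith("preauth-")

def registered_with_google(package: str, fingerprint: str) -> bool:
    # Stand-in for the network call that checks Google's developer registry.
    return (package, fingerprint) in LOCAL_CACHE

def verifier_allows(req: InstallRequest, online: bool) -> bool:
    if req.pre_auth_token is not None:
        return token_is_valid(req.pre_auth_token)  # alt stores may bypass the network call
    if (req.package_name, req.signing_key_fingerprint) in LOCAL_CACHE:
        return True  # common apps verified from the local cache
    if not online:
        return False  # uncached app with no connectivity cannot be verified
    return registered_with_google(req.package_name, req.signing_key_fingerprint)

print(verifier_allows(InstallRequest("org.example.app", "ab:cd:ef:12"), online=False))  # True, from cache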
Dozens of countries have yet to secure accommodation at next month's COP30 climate summit in Brazil, and some delegates are considering staying away as a shortage of hotels has driven prices to hundreds of dollars per night. Reuters: Small island states on the frontline of rising sea levels are having to consider shrinking the delegations they send to Belem, while two European nations said they were considering not attending at all. COP30 organisers are racing to convert love motels, cruise ships and churches into lodgings for an anticipated 45,000 delegates. Brazil chose to hold the climate talks in Belem, which typically has 18,000 hotel beds available, in the hope its location on the edge of the Amazon rainforest would focus attention on the threat climate change poses to this ecosystem, and its role in absorbing climate-warming emissions.Read more of this story at Slashdot.
An anonymous reader shares a report from The Verge: Microsoft is getting ready to announce an ad-supported version of Xbox Cloud Gaming. Sources familiar with Microsoft's plans tell The Verge that the software maker has started testing ad-supported games streaming internally, allowing employees to play select titles free without a Game Pass subscription. I understand that the free ad-supported version of Xbox Cloud Gaming will include the ability to stream some games you own, as well as eligible Free Play Days titles, which let Xbox players try games over a weekend. You'll also be able to stream Xbox Retro Classics games. Sources tell me the internal testing includes around two minutes of preroll ads before a game is available to stream for free through Xbox Cloud Gaming. [...] The ad-supported Xbox Cloud Gaming version will be available on PC, Xbox consoles, handheld devices, and via the web.Read more of this story at Slashdot.
The blackout that left Spain without power last April was the most severe incident to hit European networks in two decades and the first of its kind, according to the European Network of Transmission System Operators for Electricity. Damian Cortinas, the organization's chairman, said the April 28 outage was Europe's first blackout linked to cascading voltages. More than 50 million people lost electricity for several hours. A preliminary report published in July attributed the outage to a chain of power generation disconnections and abnormal voltage surges. The final assessment will be released in the first quarter of next year and presented to the European Commission and member states. A government probe in June found that grid operator Red Electrica failed to replace one of 10 planned thermal plants, reducing reserve capacity. Spain spent only 30 cents on its grid for every dollar invested in renewables between 2020 and 2024, the lowest ratio among European countries and well below the 70-cent average.Read more of this story at Slashdot.
theodp writes: From Thursday's Code.org press release announcing the replacement of the annual Hour of Code for K-12 schoolkids with the new Hour of AI: "A decade ago, the Hour of Code ignited a global movement that introduced millions of students to computer science, inspiring a generation of creators. Today, Code.org announced the next chapter: the Hour of AI, a global initiative developed in collaboration with CSforALL and supported by dozens of leading organizations. [...] As artificial intelligence rapidly transforms how we live, work, and learn, the Hour of AI reflects an evolution in Code.org's mission: expanding from computer science education into AI literacy. This shift signals how the education and technology fields are adapting to the times, ensuring that students are prepared for the future unfolding now." "Just as the Hour of Code showed students they could be creators of technology, the Hour of AI will help them imagine their place in an AI-powered world," said Hadi Partovi, CEO and co-founder of Code.org. "Every student deserves to feel confident in their understanding of the technology shaping their future. And every parent deserves the confidence that their child is prepared for it." "Backed by top organizations such as Microsoft, Amazon, Anthropic, Zoom, LEGO Education, Minecraft, Pearson, ISTE, Common Sense Media, American Federation of Teachers (AFT), National Education Association (NEA), and Scratch Foundation, the Hour of AI is designed to bring AI education into the mainstream. New this year, the National Parents Union joins Code.org and CSforALL as a partner to emphasize that AI literacy is not only a student priority but a parent imperative." The announcement of the tech-backed K-12 CS education nonprofit's mission shift into AI literacy comes just days after Code.org's co-founders took umbrage at a NY Times podcast that discussed "how some of the same tech companies that pushed for computer science are now pivoting from coding to pushing for AI education and AI tools in schools" and advancing the narrative that "the country needs more skilled AI workers to stay competitive, and kids who learn to use AI will get better job opportunities."Read more of this story at Slashdot.
Ha Dang, a self-taught accountant from Scunthorpe who trained via YouTube, won the inaugural Microsoft Excel UK Championships on September 30. The victory earned him a spot at the Microsoft Excel World Championships in Las Vegas, a three-day tournament inside a 30,000-square-foot esports arena where players compete for $5,000 and are broadcast on ESPN. Thirty competitors sat shoulder to shoulder through three gruelling rounds of spreadsheet challenges. Each round featured a custom case with seven levels of increasing difficulty. The second round case, Right Royal Battle Part II, took 80 drafts to perfect. Players calculated troop sizes from emoji battalions and army movements across fourteenth-century France. Hadyn Wiseman, who once held the Guinness World Record for most backflips in a minute, placed fourth. Lara Holding-Jones finished thirteenth. Jaq Kennedy founded the UK chapter last year. National chapters have since formed in Germany, Brazil, and Chile.Read more of this story at Slashdot.
Social media usage peaked in 2022 and has been on a steady decline since. An analysis of 250,000 adults across more than 50 countries by the digital audience insights company GWI found that adults aged 16 and older spent an average of two hours and 20 minutes per day on social platforms at the end of 2024. That figure is down almost 10% from 2022. The decline is most pronounced among teenagers and people in their twenties. Usage has traced a smooth curve upward and then downward over the past decade, so this is not simply the unwinding of increased screen time during pandemic lockdowns. The data also captured a shift in how people use these platforms. The share of people who report using social media to stay in touch with friends, express themselves or meet new people has fallen by more than a quarter since 2014, while the share who open the apps reflexively to fill spare time has risen. North America is an exception to the global trend: social media consumption there continues to climb, and by 2024 it was 15% higher than in Europe. Meta and OpenAI recently announced new social platforms that will be filled with AI-generated short-form videos.Read more of this story at Slashdot.
Jeff Bezos told an audience on Friday that gigawatt-scale data centers will be built in space within the next ten to twenty years. The Amazon founder said these orbital facilities would eventually outperform their terrestrial counterparts because space offers uninterrupted solar power around the clock. Bezos was speaking in a fireside chat with Ferrari and Stellantis Chairman John Elkann. He said the giant training clusters needed for AI would be better built in space because there are no clouds, rain or weather to interrupt power generation. Bezos predicted that space-based data centers would beat the cost of Earth-based ones within a couple of decades. He described the shift as part of a broader pattern that has already occurred with weather satellites and communication satellites. The next steps would be data centers and then other kinds of manufacturing.Read more of this story at Slashdot.
Air pollution increases the likelihood of people becoming frail in middle and old age, according to an international review of studies. The Guardian: The review team found 10 studies that looked at outdoor air pollution and frailty. The people studied came from 11 countries including China, the UK, Sweden, South Africa and Mexico. Two of the studies showed that men were more vulnerable than women, with a stronger association between particle pollution and frailty. The risk of frailty increased with outdoor particle pollution. For the UK, this could mean about 10-20% of frailty cases are attributable to air pollution. Exposure to secondhand smoke was the environmental factor that presented the greatest risk of frailty: the risk was increased by about 60% for people who breathed other people's smoke at home. Using solid fuels for cooking or home heating also carried an extra risk of frailty, about half that of living with a smoker, based on studies from six countries.Read more of this story at Slashdot.
Pew Research: Public awareness of legal sports betting has grown in recent years -- and so has the perception that it is a bad thing for society and sports, according to a new Pew Research Center survey. Today, 43% of U.S. adults say the fact that sports betting is now legal in much of the country is a bad thing for society. That's up from 34% in 2022. And 40% of adults now say it's a bad thing for sports, up from 33%. Despite these increasingly critical views of legal sports betting, many Americans continue to say it has neither a bad nor good impact on society and on sports. Fewer than one in five see positive impacts. Meanwhile, the share of Americans who have bet money on sports in the past year has not changed much since 2022. Today, 22% of adults say they've personally bet money on sports in the past year. That's a slight uptick from 19% three years ago. This figure includes betting in any of three ways: (1) with friends or family, such as in a private betting pool, fantasy league or casual bet; (2) online with a betting app, sportsbook or casino; or (3) in person at a casino, racetrack or betting kiosk. Further reading: Filipinos Are Addicted to Online Gambling. So Is Their Government.Read more of this story at Slashdot.
No automaker has matched Tesla's ability to deliver over-the-air software updates despite years of effort and billions in spending. Tesla introduced the technology in 2012 and issued 42 updates within six months, Jean-Marie Lapeyre, Capgemini's chief technology officer for automotive, told WIRED. Other automakers ship updates "maybe once a year," Lapeyre said. General Motors actually introduced OTA functionality first in 2010, two years before Tesla, but limited it to the OnStar telematics system. Traditional automakers treat software as one bolt-on component among many. Tesla and other digital-native brands like Rivian, Lucid and Chinese companies including BYD and Xpeng treat it as central. There are now 69 million OTA-capable vehicles in the United States, S&P Global estimates. More than 13 million vehicles were recalled in 2024 due to software-related issues, a 35 percent increase over the prior year. OTA updates cost automakers $66.50 per vehicle for each gigabyte of data, Harman Automotive estimates.Read more of this story at Slashdot.
The Cybersecurity Information Sharing Act expired on Wednesday when the federal government shut down. The law had provided legal protections since 2015 for organizations to share cyber threat intelligence with federal agencies. Without these protections, private sector companies that control most U.S. critical infrastructure face potential legal risks when sharing information about threats. Sen. Gary Peters called the lapse "an open invitation to cybercriminals and hostile actors to attack our economy and our critical infrastructure." The intelligence sharing enabled by CISA 2015 helped expose Chinese campaigns including Volt Typhoon in 2023 and Salt Typhoon last year. Several cybersecurity firms pledged to continue sharing threat data despite the law's expiration. Halcyon and CrowdStrike confirmed they would maintain information sharing. Palo Alto Networks said it remained committed to public-private partnerships but did not specify whether it would continue sharing threat data. Multiple bipartisan reauthorization efforts failed before the shutdown. The House Homeland Security Committee had approved a 10-year extension last month.Read more of this story at Slashdot.
James Marriott, writing in a column: The world of print is orderly, logical and rational. In books, knowledge is classified, comprehended, connected and put in its place. Books make arguments, propose theses, develop ideas. "To engage with the written word," the media theorist Neil Postman wrote, "means to follow a line of thought, which requires considerable powers of classifying, inference-making and reasoning." As Postman pointed out, it is no accident that the growth of print culture in the eighteenth century was associated with the growing prestige of reason, hostility to superstition, the birth of capitalism, and the rapid development of science. Other historians have linked the eighteenth-century explosion of literacy to the Enlightenment, the birth of human rights, the arrival of democracy and even the beginnings of the industrial revolution. The world as we know it was forged in the reading revolution. Now, we are living through the counter-revolution. More than three hundred years after the reading revolution ushered in a new era of human knowledge, books are dying. Numerous studies show that reading is in free-fall. Even the most pessimistic twentieth-century critics of the screen-age would have struggled to predict the scale of the present crisis. In America, reading for pleasure has fallen by forty per cent in the last twenty years. In the UK, more than a third of adults say they have given up reading. The National Literacy Trust reports "shocking and dispiriting" falls in children's reading, which is now at its lowest level on record. The publishing industry is in crisis: as the author Alexander Larman writes, "books that once would have sold in the tens, even hundreds, of thousands are now lucky to sell in the mid-four figures." [...] What happened was the smartphone, which was widely adopted in developed countries in the mid-2010s. Those years will be remembered as a watershed in human history. Never before has there been a technology like the smartphone. Where previous entertainment technologies like cinema or television were intended to capture their audience's attention for a period, the smartphone demands your entire life. Phones are designed to be hyper-addictive, hooking users on a diet of pointless notifications, inane short-form videos and social media rage bait.Read more of this story at Slashdot.
Longtime Slashdot reader theodp writes: Big Tech Told Kids to Code. The Jobs Didn't Follow, a New York Times podcast episode discussing how the promise of a six-figure salary for those who study computer science is turning out to be an empty one for recent grads in the age of AI, drew the ire of the co-founders of the nonprofit Code.org, which -- ironically -- is pivoting to AI itself with the encouragement of, and millions from, its tech-giant backers. In a LinkedIn post, Code.org CEO and co-founder Hadi Partovi said the paper and its Monday episode of "The Daily" podcast were cherry-picking anecdotes "to stoke populist fears about tech corporations and AI." He also took to X, tweeting: "Today the NYTimes (falsely) claimed CS majors can't find work. The data tells the opposite story: CS grads have the highest median wage and the fifth-lowest underemployment across all majors. [...] Journalism is broken. Do better NYTimes." To which Code.org co-founder Ali Partovi (Hadi's twin) replied: "I agree 100%. That NYTimes Daily piece was deplorable -- an embarrassment for journalism."Read more of this story at Slashdot.