Feed: Slashdot
Link: https://slashdot.org/
Feed: https://rss.slashdot.org/Slashdot/slashdotMain
Copyright: Copyright Slashdot Media. All Rights Reserved.
Updated: 2024-11-24 09:30
'Luddite' Tech-Skeptics See Bad Outcomes for Labor - and Humanity
"I feel things fraying," says Nick Hilton, host of a neo-luddite podcast called The Ned Ludd Radio Hour. But he's one of the more optimistic tech skeptics interviewed by the Guardian: Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience - taking things slowly for a novice like me - that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines.... Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California... "If you put me to a wall," he continues, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10." By "remaining timeline", Yudkowsky means: until we face the machine-wrought end of all things... Yudkowsky was once a founding figure in the development of human-made artificial intelligences - AIs. He has come to believe that these same AIs will soon evolve from their current state of "Ooh, look at that!" smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don't imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture "an alien civilisation that thinks a thousand times faster than us", in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to... [Molly Crabapple, a New York-based artist, believes] "a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart.
Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses." Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes), neo-luddites tend to focus on ground-level concerns. Employment, especially, because this is where technology enriched by AIs seems to be causing the most pain.... Watch out, says [writer/podcaster Riley] Quinn at one point, for anyone who presents tech as "synonymous with being forward-thinking and agile and efficient. It's typically code for 'We're gonna find a way around labour regulations'...." One of his TrashFuture colleagues, Nate Bethea, agrees. "Opposition to tech will always be painted as irrational by people who have a direct financial interest in continuing things as they are," he says. Thanks to Slashdot reader fjo3 for sharing the article. Read more of this story at Slashdot.
What Happens After Throughput to DNA Storage Drives Surpasses 2 Gbps?
High-capacity DNA data storage "is closer than you think," Slashdot wrote in 2019. Now IEEE Spectrum brings an update on where we're at - and where we're headed - by a participant in the DNA storage collaboration between Microsoft and the Molecular Information Systems Lab of the Paul G. Allen School of Computer Science and Engineering at the University of Washington. "Organizations around the world are already taking the first steps toward building a DNA drive that can both write and read DNA data," while "funding agencies in the United States, Europe, and Asia are investing in the technology stack required to field commercially relevant devices." The challenging part is learning how to get the information into, and back out of, the molecule in an economically viable way... For a DNA drive to compete with today's archival tape drives, it must be able to write about 2 gigabits per second, which at demonstrated DNA data storage densities is about 2 billion bases per second. To put that in context, I estimate that the total global market for synthetic DNA today is no more than about 10 terabases per year, which is the equivalent of about 300,000 bases per second over a year. The entire DNA synthesis industry would need to grow by approximately 4 orders of magnitude just to compete with a single tape drive. Keeping up with the total global demand for storage would require another 8 orders of magnitude of improvement by 2030. But humans have done this kind of scaling up before. Exponential growth in silicon-based technology is how we wound up producing so much data. Similar exponential growth will be fundamental in the transition to DNA storage... Companies like DNA Script and Molecular Assemblies are commercializing automated systems that use enzymes to synthesize DNA. These techniques are replacing traditional chemical DNA synthesis for some applications in the biotechnology industry...
[I]t won't be long before we can combine the two technologies into one functional device: a semiconductor chip that converts digital signals into chemical states (for example, changes in pH), and an enzymatic system that responds to those chemical states by adding specific, individual bases to build a strand of synthetic DNA. The University of Washington and Microsoft team, collaborating with the enzymatic synthesis company Ansa Biotechnologies, recently took the first step toward this device... The path is relatively clear; building a commercially relevant DNA drive is simply a matter of time and money... At the same time, advances in DNA synthesis for DNA storage will increase access to DNA for other uses, notably in the biotechnology industry, and will thereby expand capabilities to reprogram life. Somewhere down the road, when a DNA drive achieves a throughput of 2 gigabases per second (or 120 gigabases per minute), this box could synthesize the equivalent of about 20 complete human genomes per minute. And when humans combine our improving knowledge of how to construct a genome with access to effectively free synthetic DNA, we will enter a very different world... We'll be able to design microbes to produce chemicals and drugs, as well as plants that can fend off pests or sequester minerals from the environment, such as arsenic, carbon, or gold. At 2 gigabases per second, constructing biological countermeasures against novel pathogens will take a matter of minutes. But so too will constructing the genomes of novel pathogens. Indeed, this flow of information back and forth between the digital and the biological will mean that every security concern from the world of IT will also be introduced into the world of biology... The future will be built not from DNA as we find it, but from DNA as we will write it. 
The article makes an interesting point - that biology labs around the world already order chemically-synthesized ssDNA, "delivered in lengths of up to several hundred bases," and sequence DNA molecules up to thousands of bases in length. "In other words, we already convert digital information to and from DNA, but generally using only sequences that make sense in terms of biology." Read more of this story at Slashdot.
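The throughput arithmetic in the article is easy to sanity-check with a short script. A back-of-the-envelope sketch in Python - note that the ~6.2-gigabase figure for a complete diploid human genome is our assumption; the article itself only says "about 20 complete human genomes per minute":

```python
import math

# Figures from the article: a competitive DNA drive must write ~2 gigabases/s,
# while today's entire synthesis market is ~10 terabases/year.
TARGET_BASES_PER_SEC = 2e9
GLOBAL_SYNTHESIS_PER_YEAR = 10e12

seconds_per_year = 365 * 24 * 3600
current_rate = GLOBAL_SYNTHESIS_PER_YEAR / seconds_per_year
print(f"Industry-wide output today: ~{current_rate:,.0f} bases/s")  # ~317,000

gap = math.log10(TARGET_BASES_PER_SEC / current_rate)
print(f"Shortfall vs. one tape drive: ~{gap:.1f} orders of magnitude")  # ~3.8

# Assumption (ours): ~6.2e9 bases for a complete diploid human genome.
DIPLOID_GENOME = 6.2e9
genomes_per_minute = TARGET_BASES_PER_SEC * 60 / DIPLOID_GENOME
print(f"Genome equivalents per minute at target rate: ~{genomes_per_minute:.0f}")  # ~19
```

Rounding the ~3.8 up gives the article's "approximately 4 orders of magnitude," and ~19 genome equivalents per minute matches its "about 20."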
Ocean Temperatures Are Skyrocketing
"For nearly a year now, a bizarre heating event has been unfolding across the world's oceans," reports Wired. "In March 2023, global sea surface temperatures started shattering record daily highs and have stayed that way since..." "It's really getting to be strange that we're just seeing the records break by this much, and for this long," says Brian McNoldy, a hurricane researcher at the University of Miami. Unlike land, which rapidly heats and cools as day turns to night and back again, it takes a lot to warm up an ocean that may be thousands of feet deep. So even an anomaly of mere fractions of a degree is significant. "To get into the two or three or four degrees, like it is in a few places, it's pretty exceptional," says McNoldy. So what's going on here? For one, the oceans have been steadily warming over the decades, absorbing something like 90 percent of the extra heat that humans have added to the atmosphere... A major concern with such warm surface temperatures is the health of the ecosystems floating there: phytoplankton that bloom by soaking up the sun's energy and the tiny zooplankton that feed on them. If temperatures get too high, certain species might suffer, shaking the foundations of the ocean food web. But more subtly, when the surface warms, it creates a cap of hot water, blocking the nutrients in colder waters below from mixing upwards. Phytoplankton need those nutrients to properly grow and sequester carbon, thus mitigating climate change... Making matters worse, the warmer water gets, the less oxygen it can hold. "We have seen the growth of these oxygen minimum zones," says Dennis Hansell, an oceanographer and biogeochemist at the University of Miami. "Organisms that need a lot of oxygen, they're not too happy when the concentrations go down in any way - think of a tuna that is expending a lot of energy to race through the water." But why is this happening?
The article suggests less dust blowing from the Sahara desert to shade the oceans, but also 2020 regulations that reduced sulfur aerosols in shipping fuels. (This reduced toxic air pollution - but also some cloud cover.) There was also an El Niño in the Pacific ocean last summer - now waning - which complicates things, according to biological oceanographer Francisco Chavez of the Monterey Bay Aquarium Research Institute in California. "One of our challenges is trying to tease out what these natural variations are doing in relation to the steady warming due to increasing CO2 in the atmosphere." But the article points out that even the Atlantic ocean is heating up - and "sea surface temperatures started soaring last year well before El Niño formed." And last week the U.S. Climate Prediction Center predicted there's now a 55% chance of a La Niña developing between June and August, according to the article - which could increase the likelihood of Atlantic hurricanes. Thanks to long-time Slashdot reader mrflash818 for sharing the article. Read more of this story at Slashdot.
AI Expert Wrongly Fined by Automated AI System, Showing Both the System and Its Human Reviewers Failed
"Dutch motorist Tim Hansenn was fined 380 euros for using his phone while driving," reports the Jerusalem Post. "But there was one problem: He wasn't using his phone at all..." Hansenn, who works with AI as part of his job with the firm Nippur, found the photo taken by the smart cameras. In it, he was clearly scratching his head with his free hand. Writing in a blog post for Nippur, Hansenn took the time to explain what he thinks went wrong with the Dutch police AI and the smart camera they used, the Monocam, and how it could be improved. In one experiment he discussed with [Belgian news outlet] HLN, Hansenn said the AI confused a pen with a toothbrush - identifying it as a pen when it was just held in his hand and as a toothbrush when it was close to a mouth. As such, Hansenn told HLN that it seems the AI may just automatically conclude that if someone holds a hand near their head, it means they're using a phone. "We are widely assured that AIs are subject to human checking," notes Slashdot reader Bruce66423 - but did a human police officer just defer to what the AI was reporting? Clearly the human-in-the-loop also made a mistake. Hansenn will have to wait up to six months to see if his appeal of the fine has gone through. And the article notes that the Netherlands has been using this technology for several years, with plans for even more automated monitoring in the years to come... Read more of this story at Slashdot.
Linux Becomes a CVE Numbering Authority (Like Curl and Python). Is This a Turning Point?
From a blog post by Greg Kroah-Hartman: As was recently announced, the Linux kernel project has been accepted as a CVE Numbering Authority (CNA) for vulnerabilities found in Linux. This is a trend, of more open source projects taking over the haphazard assignments of CVEs against their project by becoming a CNA so that no other group can assign CVEs without their involvement. Here's the curl project doing much the same thing for the same reasons. I'd like to point out the great work that the Python project has done in supporting this effort, and the OpenSSF project also encouraging it and providing documentation and help for open source projects to accomplish this. I'd also like to thank the cve.org group and board as they all made the application process very smooth for us and provided loads of help in making this all possible. As many of you all know, I have talked a lot about CVEs in the past, and yes, I think the system overall is broken in many ways, but this change is a way for us to take more responsibility for this, and hopefully make the process better over time. It's also work that it looks like all open source projects might be mandated to do with the recent rules and laws being enacted in different parts of the world, so having this in place with the kernel will allow us to notify all sorts of different CNA-like organizations if needed in the future. Kroah-Hartman links to his post on the kernel mailing list for "more details about how this is all going to work for the kernel." [D]ue to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team are overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team...
No CVEs will be assigned for unfixed security issues in the Linux kernel; assignment will only happen after a fix is available, as it can be properly tracked that way by the git commit id of the original fix. No CVEs will be assigned for any issue found in a version of the kernel that is not currently being actively supported by the Stable/LTS kernel team. alanw (Slashdot reader #1,822) worries this could overwhelm the CVE infrastructure, pointing to an ongoing discussion at LWN.net. But reached for a comment, Greg Kroah-Hartman thinks there's been a misunderstanding. He told Slashdot that the CVE group "explicitly asked for this as part of our application... so if they are comfortable with it, why is no one else?" Read more of this story at Slashdot.
Can Robots.txt Files Really Stop AI Crawlers?
In the high-stakes world of AI, "The fundamental agreement behind robots.txt [files], and the web as a whole - which for so long amounted to 'everybody just be cool' - may not be able to keep up..." argues the Verge: For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing. "What we found pretty quickly with the AI companies," says Medium CEO Tony Stubblebine, "is not only was it not an exchange of value, we're getting nothing in return. Literally zero." When Stubblebine announced last fall that Medium would be blocking AI crawlers, he wrote that "AI companies have leached value from writers in order to spam Internet readers." Over the last year, a large chunk of the media industry has echoed Stubblebine's sentiment. "We do not believe the current 'scraping' of BBC data without our permission in order to train Gen AI models is in the public interest," BBC director of nations Rhodri Talfan Davies wrote last fall, announcing that the BBC would also be blocking OpenAI's crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file. It's not just publishers, either. Amazon, Facebook, Pinterest, WikiHow, WebMD, and many other platforms explicitly block GPTBot from accessing some or all of their websites. On most of these robots.txt pages, OpenAI's GPTBot is the only crawler explicitly and completely disallowed. But there are plenty of other AI-specific bots beginning to crawl the web, like Anthropic's anthropic-ai and Google's new Google-Extended.
According to a study from last fall by Originality.AI, 306 of the top 1,000 sites on the web blocked GPTBot, but only 85 blocked Google-Extended and 28 blocked anthropic-ai. There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft's Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves - many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic. For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff. In addition, the article points out, a robots.txt file "is not a legal document - and 30 years after its creation, it still relies on the good will of all parties involved. Disallowing a bot on your robots.txt page is like putting up a 'No Girls Allowed' sign on your treehouse - it sends a message, but it's not going to stand up in court." Read more of this story at Slashdot.
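The blocking mechanism itself is simple enough to inspect with Python's standard library. A sketch using a hypothetical robots.txt written in the style those publishers use - the crawler names match the bots discussed above, while the site and path are purely illustrative:

```python
import urllib.robotparser

# A hypothetical robots.txt that blocks the AI crawlers named above
# while leaving ordinary search crawlers alone.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "Google-Extended", "anthropic-ai", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Of course, as the article stresses, nothing forces a crawler to fetch or honor this file at all - compliance is entirely voluntary.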
How Rust Improves the Security of Its Ecosystem
This week the non-profit Rust Foundation announced the release of a report on what their Security Initiative accomplished in the last six months of 2023. "There is already so much to show for this initiative," says the foundation's executive director, "from several new open source security projects to several completed and publicly available security threat models." From the executive summary: When the user base of any programming language grows, it becomes more attractive to malicious actors. As any programming language ecosystem expands with more libraries, packages, and frameworks, the surface area for attacks increases. Rust is no different. As the steward of the Rust programming language, the Rust Foundation has a responsibility to provide a range of resources to the growing Rust community. This responsibility means we must work with the Rust Project to help empower contributors to participate in a secure and scalable manner, eliminate security burdens for Rust maintainers, and educate the public about security within the Rust ecosystem...

Recent Achievements of the Security Initiative Include:
- Completing and releasing the Rust Infrastructure and Crates Ecosystem threat models
- Further developing the Rust Foundation's open source security project Painter [for building a graph database of dependencies/invocations between crates] and releasing a new security project, Typomania [a toolbox to check for typosquatting in package registries]
- Utilizing new tools and best practices to identify and address malicious crates
- Helping reduce technical debt within the Rust Project, producing/contributing to security-focused documentation, and elevating security priorities for discussion within the Rust Project
... and more!
Over the Coming Months, Security Initiative Engineers Will Primarily Focus On:
- Completing all four Rust security threat models and taking action to address encompassed threats
- Standing up additional infrastructure to support redundancy, backups, and mirroring of critical Rust assets
- Collaborating with the Rust Project on the design and potential implementation of signing and PKI solutions for crates.io to achieve security parity with other popular ecosystems
- Continuing to create and further develop tools to support the Rust ecosystem, including the crates.io admin functionality, Painter, Typomania, and Sandpit

Read more of this story at Slashdot.
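To make the typosquatting problem concrete: the core check a tool in this space performs is flagging newly published package names that sit within a small edit distance of popular ones. This is a minimal illustrative sketch, not Typomania's actual algorithm - the crate names and the distance threshold are our assumptions:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# A tiny stand-in for a registry's list of popular packages.
POPULAR_CRATES = {"serde", "tokio", "rand", "syn", "clap"}

def typosquat_candidates(new_name, popular=POPULAR_CRATES, max_dist=1):
    """Return popular names the new package name is suspiciously close to."""
    return sorted(p for p in popular
                  if p != new_name and edit_distance(new_name, p) <= max_dist)

print(typosquat_candidates("serde1"))  # ['serde']
print(typosquat_candidates("tokio"))   # [] -- exact match of a real crate
```

A production scanner would also weigh things like keyboard-adjacency swaps, hyphen/underscore confusion, and download counts before raising an alert.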
US Cities Try Changing Their Zoning Rules to Allow More Housing
Tech workers are accused of driving up rents in America's major cities - but in fact, the problem may be everywhere. Half of America's renters "are paying more than a third of their salary in housing costs," reports NPR's Weekend Edition, "and for those looking to buy, scant few homes on the market are affordable for a typical household. To ramp up supply, cities are taking a fresh look at their zoning rules and the regulations that spell out what can be built where and what can't." And many are finding that their old rules are too rigid, making it too hard and too expensive to build many new homes. So these cities, as well as some states, are undertaking a process called zoning reform. They're crafting new rules that do things like allow multifamily homes in more neighborhoods, encourage more density near transit and streamline permitting processes for those trying to build... Minneapolis was ahead of the pack as it made a series of changes to its zoning rules in recent years: allowing more density downtown and along transit corridors, getting rid of parking requirements, permitting construction of accessory dwelling units, which are secondary dwellings on the same lot. And one change in particular made national news: The city ended single-family zoning, allowing two- and three-unit homes to be built in every neighborhood. Researchers at The Pew Charitable Trusts examined the effects of the changes between 2017 and 2022, as many of the city's most significant zoning reforms came into effect. They found what they call a "blueprint for housing affordability." "We saw Minneapolis add 12% to its housing stock in just that five-year period, far more than other cities," Alex Horowitz, director of housing policy initiatives at Pew, told NPR... "The zoning reforms made apartments feasible. They made them less expensive to build. And they were saying yes when builders submitted applications to build apartment buildings.
So they got a lot of new housing in a short period of time," says Horowitz. That supply increase appears to have helped keep rents down too. Rents in Minneapolis rose just 1% during this time, while they increased 14% in the rest of Minnesota. Horowitz says cities such as Minneapolis, Houston and Tysons, Va., have built a lot of housing in the last few years and, accordingly, have seen rents stabilize while wages continue to rise, in contrast with much of the country... Now, these sorts of changes are happening in cities and towns around the country. Researchers at the University of California, Berkeley built a zoning reform tracker and identified zoning reform efforts in more than 100 municipal jurisdictions in the U.S. in recent years. Other cities reforming their codes include Milwaukee, Columbus, New York City, Walla Walla, and South Bend, Indiana, according to the article - which also includes this quote from Nolan Gray, the urban planner who wrote the book Arbitrary Lines: How Zoning Broke the American City and How to Fix It. "Most American cities and most American states have rules on the books that make it really, really hard to build more infill housing. So if you want a California-style housing crisis, don't do anything. But if you want to avoid the fate of states like California, learn some of the lessons of what we've been doing over the last few years and allow for more of that infill, mixed-income housing." Although interestingly, the article points out that California in recent years has been pushing zoning reform at the state level, "passing lots of legislation to address the state's housing crisis, including a law that requires cities and counties to permit accessory dwelling units. Now, construction of ADUs is booming, with more than 28,000 of the units permitted in California in 2022." Read more of this story at Slashdot.
Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2'
"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries." TechCrunch describes it as the work of Brain, "a 'very serious' LA-based art studio that has ribbed the industry before." "We decided to build it after seeing the emphasis that AI companies are putting on 'responsibility,' and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore) in an email to TechCrunch. "With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else. For the first time, people can experience an AI model that is 100% responsible." For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals..." Wired supplies context - that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly - even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations.
"It's the full experience of a large language model with absolutely zero risk," he says. "We wanted to make sure that we dialed condescension to a thousand percent." Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. "Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible - but who decides what responsibility is and how does that work?" Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, has already become a subject of some debate... "At the risk of ruining a good joke, it also shows how hard it is to get this right," added Ethan Mollick, a professor at Wharton Business School who studies AI. "Some guardrails are necessary ... but they get intrusive fast." Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. "It's an exciting field," Moore says. "Blurring would be a step that we might see internally, but we would want either full darkness or potentially no image at all at the end of it." Read more of this story at Slashdot.
To Combat Space Pollution, Japan Plans Launch of World's First Wooden Satellite
Japanese scientists plan to launch a satellite made of magnolia wood this summer on a U.S. rocket, reports the Observer. Experiments carried out on the International Space Station showed magnolia wood was unusually stable and resistant to cracking - and "when it burns up as it re-enters the atmosphere after completing its mission, will produce only a fine spray of biodegradable ash." The LignoSat probe has been built by researchers at Kyoto University and the logging company Sumitomo Forestry in order to test the idea of using biodegradable materials such as wood to see if they can act as environmentally friendly alternatives to the metals from which all satellites are currently constructed. "All the satellites which re-enter the Earth's atmosphere burn and create tiny alumina particles, which will float in the upper atmosphere for many years," Takao Doi, a Japanese astronaut and aerospace engineer with Kyoto University, warned recently. "Eventually, it will affect the environment of the Earth." To tackle the problem, Kyoto researchers set up a project to evaluate types of wood to determine how well they could withstand the rigours of space launch and lengthy flights in orbit round the Earth. The first tests were carried out in laboratories that recreated conditions in space, and wood samples were found to have suffered no measurable changes in mass or signs of decomposition or damage. "Wood's ability to withstand these conditions astounded us," said Koji Murata, head of the project. After these tests, samples were sent to the ISS, where they were subjected to exposure trials for almost a year before being brought back to Earth. Again they showed little sign of damage, a phenomenon that Murata attributed to the fact that there is no oxygen in space which could cause wood to burn, and no living creatures to cause it to rot.
The article adds that if it performs well in space, "then the door could be opened for the use of wood as a construction material for more satellites." Read more of this story at Slashdot.
Reddit Has Reportedly Signed Over Its Content to Train AI Models
An anonymous reader shared this report from Reuters: Reddit has signed a contract allowing an AI company to train its models on the social media platform's content, Bloomberg News reported, citing people familiar with the matter... The agreement, signed with an "unnamed large AI company", could be a model for future contracts of a similar nature, Bloomberg reported. Mashable writes that the move "means that Reddit posts, from the most popular subreddits to the comments of lurkers and small accounts, could build up already-existing LLMs or provide a framework for the next generative AI play." It's a dicey decision from Reddit, as users are already at odds with the business decisions of the nearly 20-year-old platform. Last year, following Reddit's announcement that it would begin charging for access to its APIs, thousands of Reddit forums shut down in protest... This new AI deal could generate even more user ire, as debate rages on about the ethics of using public data, art, and other human-created content to train AI. Some context from the Verge: The deal, "worth about $60 million on an annualized basis," Bloomberg writes, could still change as the company's plans to go public are still in the works. Until recently, most AI companies trained their models on the open web without seeking permission. But that's proven to be legally questionable, leading companies to try to get data on firmer footing. It's not known what company Reddit made the deal with, but it's quite a bit more than the $5 million annual deal OpenAI has reportedly been offering news publishers for their data. Apple has also been seeking multi-year deals with major news companies that could be worth "at least $50 million," according to The New York Times. The news also follows an October story that Reddit had threatened to cut off Google and Bing's search crawlers if it couldn't make a training data deal with AI companies. Read more of this story at Slashdot.
Is the Go Programming Language Surging in Popularity?
The Tiobe index tries to gauge the popularity of programming languages based on search results for courses, programmers, and third-party vendors, according to InfoWorld. And by those criteria, "Google's Go language, or golang, has reached its highest position ever..." The language, now in the eighth-ranked position for language popularity, has been on the rise for several years. "In 2015, Go hit position #122 in the TIOBE index and all seemed lost," said Paul Jansen, CEO of Tiobe. "One year later, Go adopted a very strict 'half-a-year' release cycle - backed up by Google. Every new release, Go improved... Nowadays, Go is used in many software fields such as back-end programming, web services and APIs," added Jansen... Elsewhere in the February release of Tiobe's index, Google's Carbon language, positioned as a successor to C++, reached the top 100 for the first time. Python is #1 on both TIOBE's index and the alternative Pypl Popularity of Programming Language index, which InfoWorld says "assesses language popularity based on how often language tutorials are searched on in Google." But the two lists differ on whether Java and JavaScript are more popular than C-derived languages - and which languages should then come after them. (Go ranks #12 on the Pypl index...) TIOBE's calculation of the 10 most-popular programming languages: Python, C, C++, Java, C#, JavaScript, SQL, Go, Visual Basic, PHP. Pypl's calculation of the 10 most-popular programming languages: Python, Java, JavaScript, C/C++, C#, R, PHP, TypeScript, Swift, Objective-C. Read more of this story at Slashdot.
Despite Initial Claims, AMD Confirms Ryzen 8000G APUs Don't Support ECC RAM
Slashdot reader ffkom shared this report from Tom's Hardware: When AMD formally introduced its Ryzen 8000G-series accelerated processing units for desktops in early January, the company mentioned that they supported ECC memory capability. Since then, the company has quietly removed mention of the technology from its website, as noted by Reddit users. We asked AMD to clarify the situation and were told that the company has indeed removed mentions of ECC technology from the specifications of its Ryzen 3 8300G, Ryzen 5 8500G, Ryzen 5 8600G, and Ryzen 7 8700G. The technology also cannot be enabled on motherboards, so it looks like these processors indeed do not support ECC technology at all. While it would be nice to have ECC support on AMD's latest consumer Ryzen 8000G APUs, this is a technology typically reserved for AMD's Ryzen Pro processors.Read more of this story at Slashdot.
Microsoft President: 'You Can't Believe Every Video You See or Audio You Hear'
"We're currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors," writes Microsoft VP Brad Smith, "including through deepfakes based on AI-generated video, audio, and images. "This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyberbullying." Microsoft found its own tools being used in a recently publicized episode, and the VP writes that "We need to act with urgency to combat all these problems." Microsoft's blog post says they're "committed as a company to a robust and comprehensive approach," citing six different areas of focus: A strong safety architecture. This includes "ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system... based on strong and broad-based data analysis." Durable media provenance and watermarking. ("Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.") Safeguarding our services from abusive content and conduct. ("We are committed to identifying and removing deceptive and abusive content" hosted on services including LinkedIn and Microsoft's Gaming network.) Robust collaboration across industry and with governments and civil society. This includes "others in the tech sector" and "proactive efforts" with both civil society groups and "appropriate collaboration with governments." Modernized legislation to protect people from the abuse of technology. "We look forward to contributing ideas and supporting new initiatives by governments around the world." Public awareness and education. "We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. 
This will require new public education tools and programs, including in close collaboration with civil society and leaders across society." Thanks to long-time Slashdot reader theodp for sharing the article.Read more of this story at Slashdot.
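Smith's description of media provenance — cryptographically marking and signing AI-generated content with metadata about its source and history — can be sketched in miniature. The code below is purely illustrative and is not Microsoft's implementation: a production system (such as the C2PA-style scheme announced at Build 2023) would use X.509 certificates and asymmetric signatures rather than the shared HMAC key assumed here, and the key name and manifest fields are hypothetical.

```python
# Illustrative sketch of signed provenance metadata: hash the media bytes,
# attach source/history claims, and sign the manifest so that tampering
# with either the content or the metadata becomes detectable.
# HMAC with a shared demo key stands in for real certificate-based signing.
import hashlib
import hmac
import json

SIGNING_KEY = b"provenance-demo-key"  # hypothetical key, for the sketch only

def make_manifest(media: bytes, source: str, history: list) -> dict:
    """Build and sign a provenance manifest for a piece of media."""
    claims = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "source": source,
        "history": history,
    }
    blob = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(media: bytes, manifest: dict) -> bool:
    """Check both the signature over the claims and the content hash."""
    blob = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(expected, manifest["sig"])
    ok_hash = (manifest["claims"]["content_sha256"]
               == hashlib.sha256(media).hexdigest())
    return ok_sig and ok_hash

img = b"\x89PNG...fake image bytes"
m = make_manifest(img, source="generator-x", history=["created"])
print(verify(img, m))         # True: content and metadata intact
print(verify(img + b"!", m))  # False: content was altered after signing
```

The design point is that the signature covers the metadata while the embedded hash binds that metadata to one specific piece of content, so neither can be swapped independently.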
Will 'Precision Agriculture' Be Harmful to Farmers?
Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger. There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse engineered a John Deere display to run the Doom video game, for insights into the company's internal struggles with cybersecurity.) Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. 
But in doing so, it also wrests that data from the farmer's control and shares it with equipment manufacturers and service providers - often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on "information gathering" in the latest farm bill. Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets, their data, and turning them into little more than passive caretakers of automated equipment managed and controlled by, and accountable to, distant corporate masters. Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers. "While we can all agree that it's coming...the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which, for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' data security and privacy are being compromised. This directly affects national food security because if thousands, or tens of thousands, of tractors are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... 
I think our universities have also been delinquent in grasping and warning farmers about the data-theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment. Thanks to long-time Slashdot reader chicksdaddy for sharing the article.Read more of this story at Slashdot.
SoftBank's Son Seeks To Build a $100 Billion AI Chip Venture
An anonymous reader quotes a report from Reuters: SoftBank Group Chief Executive Officer Masayoshi Son is looking to raise up to $100 billion for a chip venture that will rival Nvidia, Bloomberg News reported on Friday, citing people with knowledge of the matter. The project, code named Izanagi, will supply semiconductors essential for artificial intelligence (AI), the report added. The company would inject $30 billion in the project, with an additional $70 billion potentially coming from Middle Eastern institutions, according to the report. The Japanese group already holds about a 90% stake in British chip designer Arm, per LSEG. SoftBank is known for its tech investments, with high-conviction bets on startups at an unheard-of scale. But it had adopted a defensive strategy after being hit by plummeting valuations in the aftermath of the pandemic, when higher interest rates eroded investor appetite for risk. It returned to profit for the first time in five quarters earlier this month, as the Japanese tech investment firm was buoyed by an upturn in portfolio companies.Read more of this story at Slashdot.
Zeus, IcedID Malware Kingpin Faces 40 Years In Prison
Connor Jones reports via The Register: A Ukrainian cybercrime kingpin who ran some of the most pervasive malware operations faces 40 years in prison after spending nearly a decade on the FBI's Cyber Most Wanted List. Vyacheslav Igorevich Penchukov, 37, pleaded guilty this week in the US to two charges related to his leadership role in both the Zeus and IcedID malware operations, which netted millions of dollars. Penchukov's plea will be seen as the latest big win for US law enforcement in its continued fight against cybercrime and those who enable it. However, authorities took their time getting him in 'cuffs. [...] "Malware like IcedID bleeds billions from the American economy and puts our critical infrastructure and national security at risk," said US attorney Michael Easley for the eastern district of North Carolina. "The Justice Department and FBI Cyber Squad won't stand by and watch it happen, and won't quit coming for the world's most wanted cybercriminals, no matter where they are in the world. This operation removed a key player from one of the world's most notorious cybercriminal rings. Extradition is real. Anyone who infects American computers had better be prepared to answer to an American judge." This week, he admitted one count of conspiracy to commit a Racketeer Influenced and Corrupt Organizations (RICO) Act offense relating to Zeus, and one count of conspiracy to commit wire fraud in relation to IcedID. Each count carries a maximum sentence of 20 years. His sentencing date is set for May 9, 2024. Zeus malware, a banking trojan that formed a botnet for financial theft, caused over $100 million in losses before its 2014 dismantlement. Its successor, SpyEye, incorporated enhanced features for financial fraud. Despite the 2014 takedown of Zeus, Penchukov moved on to lead IcedID, a similar malware first found in 2017. 
IcedID evolved from banking fraud to ransomware, severely affecting the University of Vermont Medical Center in 2020 with over $30 million in damages.Read more of this story at Slashdot.
Scientists Discover Water On Surface of an Asteroid
For the first time, scientists say they've detected water molecules on the surface of an asteroid. Space.com reports: Scientists studied four silicate-rich asteroids using data gathered by the now-retired Stratospheric Observatory for Infrared Astronomy (SOFIA), a telescope-outfitted plane operated by NASA and the German Aerospace Center. Observations by SOFIA's Faint Object InfraRed Camera (FORCAST) instrument showed that two of the asteroids -- named Iris and Massalia -- exhibit a specific wavelength of light that indicated the presence of water molecules at their surface, a new study reports. While water molecules have previously been detected in asteroid samples returned to Earth, this is the first time that water molecules have been found on the surface of an asteroid in space. In a previous study, SOFIA found similar traces of water on the surface of the moon, in one of the largest craters in its southern hemisphere. [...] Therefore, the findings at Iris and Massalia suggest that some silicate asteroids can conserve some of their water over the eons, and that surface water may be more common in the inner solar system than previously thought. In fact, asteroids are believed to be the primary source of Earth's water, providing the necessary elements for life as we know it. Understanding the distribution of water through space will help researchers better assess where to search for other forms of potential life, both in our solar system and beyond. The findings have been published in The Planetary Science Journal.Read more of this story at Slashdot.
California Bill Wants To Scrap Environmental Reviews To Save Downtown San Francisco
An anonymous reader quotes a report from the San Francisco Chronicle: San Francisco's leaders have spent the past few years desperately trying to figure out how to deal with a glut of empty offices, shuttered retail and public safety concerns plaguing the city's once vibrant downtown. Now, a California lawmaker wants to try a sweeping plan to revive the city's core by exempting most new real estate projects from environmental review, potentially quickening development by months or even years. State Sen. Scott Wiener, D-San Francisco, introduced SB1227 on Friday as a proposal to exempt downtown projects from the California Environmental Quality Act, or CEQA, for a decade. The 1970 landmark law requires studies of a project's expected impact on air, water, noise and other areas, but Wiener said it has been abused to slow down or kill infill development near public transit. "Downtown San Francisco matters to our city's future, and it's struggling -- to bring people back, we need to make big changes and have open minds," Wiener said in a statement. "That starts with remodeling, converting, or even replacing buildings that may have become outdated and that simply aren't going to succeed going forward." Eligible projects would include academic institutions, sports facilities, mixed-use projects including housing, biotech labs, offices, public works and even smaller changes such as modifying an existing building's exterior. The city's existing zoning and permit requirements would remain intact. "We're not taking away any local control," Wiener said in an interview with the Chronicle on Friday. However, it's not clear how much of an impact the bill would have if it's eventually passed since other factors are at play. 
New construction has been nearly frozen in San Francisco since the pandemic, amid consistently high labor costs, elevated interest rates and weakening demand for both apartments and commercial space. Major developers have reiterated that they have no plans to start work on significant new projects any time soon. Last week, Kilroy Realty, which has approval for a massive 2.3 million-square-foot redevelopment of South of Market's Flower Mart, said no groundbreakings are planned this year -- anywhere.Read more of this story at Slashdot.
Scientists Propose AI Apocalypse Kill Switches
A paper (PDF) from researchers at the University of Cambridge, supported by voices from numerous academic institutions including OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn that executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked. Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. 
To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components. At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit. Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. 
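The licensing mechanism the paper describes — a regulator-signed, time-limited certificate that the chip checks before running at full capability — can be sketched roughly as follows. This is a toy model of the idea, not the paper's design: real hardware would verify an asymmetric signature inside a secure co-processor, whereas this sketch uses HMAC with a stand-in key, and all names (`REGULATOR_KEY`, the "degraded" policy) are assumptions for illustration.

```python
# Toy sketch of an on-chip license check: a regulator issues a signed,
# expiring certificate; the chip refuses to run on an illegitimate
# certificate and degrades performance on an expired one, mirroring the
# "expired or illegitimate license" behavior quoted above.
import hashlib
import hmac
import json
import time

REGULATOR_KEY = b"regulator-secret"  # stand-in for the regulator's signing key

def issue_license(chip_id: str, valid_seconds: int, now: float) -> dict:
    """Regulator side: sign a time-limited license for one chip."""
    payload = {"chip_id": chip_id, "expires": now + valid_seconds}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_license(license: dict, chip_id: str, now: float) -> str:
    """Chip side: decide the operating mode from the license state."""
    blob = json.dumps(license["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, license["sig"]):
        return "disabled"   # illegitimate certificate: do not run
    if license["payload"]["chip_id"] != chip_id:
        return "disabled"   # certificate was issued for a different chip
    if now > license["payload"]["expires"]:
        return "degraded"   # expired: reduce performance until renewal
    return "full"

now = time.time()
lic = issue_license("chip-001", valid_seconds=3600, now=now)
print(check_license(lic, "chip-001", now))         # full
print(check_license(lic, "chip-001", now + 7200))  # degraded
```

The renewal step the researchers mention would simply be the regulator re-running `issue_license` and delivering the fresh certificate via a firmware update; the warning in the article applies directly, since whoever holds the signing key, or steals it, controls every chip.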
The researchers observe that, though a potent tool, this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea is that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."Read more of this story at Slashdot.
New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court
Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet. The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research. Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.Read more of this story at Slashdot.
Apple Readies AI Tool To Rival Microsoft's GitHub Copilot
According to Bloomberg (paywalled), Apple plans to release a generative AI tool for iOS app developers as early as this year. Insider reports: The tech giant is working on a tool that will use artificial intelligence to write code as part of its plans to expand the capabilities of Xcode, the company's main programming software. The revamped system will compete with Microsoft's GitHub Copilot, which sources say operates similarly. Apple is also working on an AI tool that will generate code to test apps, which could provide potential time savings for a process that's known to be tedious. Currently, Apple is urging some engineers to test these new AI features to ensure they work before releasing them externally to developers. [...] The tech giant, Bloomberg has learned, has plans to integrate AI features into its next software updates for its iPhone and iPad, known internally as Crystal. Glow, another internal AI project, is slated to be added to MacOS. The company is also building features that will generate Apple Music playlists and slideshows, according to the outlet. An AI-powered search feature titled Spotlight, currently limited to answering questions around launching apps, is in the works as well, Bloomberg reported.Read more of this story at Slashdot.
New 'Gold Pickaxe' Android, iOS Malware Steals Your Face For Fraud
An anonymous reader quotes a report from BleepingComputer: A new iOS and Android trojan named 'GoldPickaxe' employs a social engineering scheme to trick victims into scanning their faces and ID documents, which are believed to be used to generate deepfakes for unauthorized banking access. The new malware, spotted by Group-IB, is part of a malware suite developed by the Chinese threat group known as 'GoldFactory,' which is responsible for other malware strains such as 'GoldDigger', 'GoldDiggerPlus,' and 'GoldKefu.' Group-IB says its analysts observed attacks primarily targeting the Asia-Pacific region, mainly Thailand and Vietnam. However, the techniques employed could be effective globally, and there's a danger of them getting adopted by other malware strains. [...] For iOS (iPhone) users, the threat actors initially directed targets to a TestFlight URL to install the malicious app, allowing them to bypass the normal security review process. When Apple removed the TestFlight app, the attackers switched to luring targets into downloading a malicious Mobile Device Management (MDM) profile that allows the threat actors to take control over devices. Once the trojan has been installed onto a mobile device in the form of a fake government app, it operates semi-autonomously, manipulating functions in the background, capturing the victim's face, intercepting incoming SMS, requesting ID documents, and proxying network traffic through the infected device using 'MicroSocks.' Group-IB says the Android version of the trojan performs more malicious activities than its iOS counterpart, due to Apple's higher security restrictions. Also, on Android, the trojan uses over 20 different bogus apps as cover. For example, GoldPickaxe can also run commands on Android to access SMS, navigate the filesystem, perform clicks on the screen, upload the 100 most recent photos from the victim's album, download and install additional packages, and serve fake notifications. 
The use of the victims' faces for bank fraud is an assumption by Group-IB, also corroborated by the Thai police, based on the fact that many financial institutes added biometric checks last year for transactions above a certain amount.Read more of this story at Slashdot.
Algebra To Return To San Francisco Middle Schools This Fall
After a 6-1 vote by the district board, San Francisco middle schools will teach Algebra I again this fall. Axios reports: This fall, SFUSD will begin offering the course to eighth graders at about a third of its 13 middle schools, as well as six of its K-8 schools, the San Francisco Chronicle reports. Students at other campuses will have access to the course via online classes or summer school while their schools take three years to make the transition. Those eighth graders will otherwise have to wait until high school to take the course. District officials plan to evaluate the best way to enroll students throughout the district in a pilot at the first schools this fall. The first approach would be to enroll all eighth graders. The second would prioritize students' interest or readiness. The third would give students the option of taking Algebra I on top of current eighth-grade math curricula. The 6-1 vote by the San Francisco Unified School District board Tuesday followed a decadelong battle over eighth graders' access to higher-level math courses and a larger debate over academic opportunity and equity in math performance. SFUSD previously taught eighth-grade algebra. But in 2014, the board voted to wait until high school to try to address racial gaps that had emerged as some students moved more quickly into advanced math classes. Studies have shown that inequities including socioeconomic status, language differences and implicit bias often impede Black and Latino students' educational pursuits and result in lower rates of enrollment in higher-level classes. Yes, but: Stanford researchers found last year that large racial and ethnic gaps in advanced math enrollment persisted even after the policy change.Read more of this story at Slashdot.
Microsoft 'Retires' Azure IoT Central In Platform Rethink
Lindsay Clark reports via The Register: In a statement on the Azure console, Microsoft confirmed the Azure IoT Central service is being retired on March 31, 2027. "Starting on April 1, 2024, you won't be able to create new application resources; however, all existing IoT Central applications will continue to function and be managed. Subscription {{subscriptionld} is not allowed to create new applications. Please create a support ticket to request an exception," the statement to customers, seen by The Register, said. According to a Microsoft "Learn" post from February 8, 2024, IoT Central is an IoT application platform as a service (aPaaS) designed to reduce work and costs while building, managing, and maintaining IoT solutions. Microsoft's Azure IoT offering includes three pillars: IoT Hub, IoT Edge and IoT Central. IoT Hub is a cloud-based service that provides a "secure and scalable way to connect, monitor, and manage IoT devices and sensors," according to Microsoft. Azure IoT Edge is designed to allow devices to run cloud-based workloads locally. And Azure IoT Central is a fully managed, cloud-based IoT solution for connecting and managing devices at scale. Central is a layer above Hub in the architecture, and Hub itself may well continue. One developer told The Register there was no warning about Hub on the Azure console. As for IoT Edge, it is "a device-focused runtime that enables you to deploy, run, and monitor containerized Linux workloads." Microsoft has not said whether this would continue.Read more of this story at Slashdot.
Apple Unbanned Epic So It Can Make an iOS Games Store In the EU
An anonymous reader quotes a report from The Verge: Epic is one step closer to opening its iOS games store in the European Union. As part of its 2023 year in review, Epic Games announced Apple has reinstated its developer account, which means it will finally be able to let users download Fortnite on iPhones again. Epic first announced plans to bring its game store and Fortnite to iOS in January, but it wasn't clear whether Apple would grant it a developer account. In 2020, Apple pulled Epic's developer account after the company began using its own in-app payment option in the iOS version of Fortnite, sparking a lengthy legal battle over whether Apple's behavior was anticompetitive. But even after the trial ended, and neither company emerged a clear winner, Apple still refused to reinstate Epic's developer account. Things are changing now that the EU has implemented the Digital Markets Act (DMA). The new rules force Apple to open up its iOS ecosystem to third-party app stores in the EU. Epic Games says it plans to open its iOS storefront in the EU this year. "I'll be the first to acknowledge a good faith move by Apple amidst our cataclysmic antitrust battle, in granting Epic Games Sweden AB a developer account for operating Epic Games Store and Fortnite in Europe under the Digital Markets Act," Sweeney says in a post on X.Read more of this story at Slashdot.
NY Governor Wants To Criminalize Deceptive AI
New York Gov. Kathy Hochul is proposing legislation that would criminalize some deceptive and abusive uses of AI and require disclosure of AI in election campaign materials, her office told Axios. From the report: Hochul's proposed laws include establishing the crime of "unlawful dissemination or publication of a fabricated photographic, videographic, or audio record." Making unauthorized uses of a person's voice "in connection with advertising or trade" a misdemeanor offense. Such offenses would be punishable by up to a one-year jail sentence. Expanding New York's penal law to include unauthorized uses of artificial intelligence in coercion, criminal impersonation and identity theft. Amending existing intimate images and revenge porn statutes to include "digital images" -- ranging from realistic Photoshop-produced work to advanced AI-generated content. Codifying the right to sue over digitally manipulated false images. Requiring disclosures of AI use in all forms of political communication "including video recording, motion picture, film, audio recording, electronic image, photograph, text, or any technological representation of speech or conduct" within 60 days of an election.Read more of this story at Slashdot.
No 'GPT' Trademark For OpenAI
The U.S. Patent and Trademark Office has denied OpenAI's attempt to trademark "GPT," ruling that the term is "merely descriptive" and therefore ineligible for registration. From a report: [...] The name, according to the USPTO, doesn't meet the standards for trademark registration and the protections a "TM" after the name affords. (Incidentally, the office refused once back in October, and this is a "FINAL," in all caps, denial of the application.) As the denial document puts it: "Registration is refused because the applied-for mark merely describes a feature, function, or characteristic of applicant's goods and services." OpenAI argued that it had popularized the term GPT, which stands in this case for "generative pre-trained transformer," describing the nature of the machine learning model. It's generative because it produces new(ish) material, pre-trained in that it is a large model trained centrally on a proprietary database, and transformer is the name of a particular method of building AIs (introduced by Google researchers in 2017) that allows much larger models to be trained. But the patent office pointed out that GPT was already in use in numerous other contexts and by other companies in related fields.
Epic Chief Suspects Apple Broke iPhone Web Apps in EU For Anticompetitive Reasons
Apple is officially cutting support for progressive web apps for iPhone users in the European Union. While web apps have been broken for EU users in every iOS 17.4 beta so far, Apple has confirmed that this is a feature, not a bug. Commenting on Apple's move, Epic CEO Tim Sweeney tweeted: I suspect Apple's real reason for killing PWAs is the realization that competing web browsers could do a vastly better job of supporting PWAs -- unlike Safari's intentionally crippled web functionality -- and turn PWAs into legit, untaxed competitors to native apps.
DOJ Quietly Removed Russian Malware From Routers in US Homes and Businesses
An anonymous reader shares a report: More than 1,000 Ubiquiti routers in homes and small businesses were infected with malware used by Russian-backed agents to coordinate them into a botnet for crime and spy operations, according to the Justice Department. That malware, which worked as a botnet for the Russian hacking group Fancy Bear, was removed in January 2024 under a secret court order as part of "Operation Dying Ember," according to the FBI's director. It affected routers running Ubiquiti's EdgeOS, but only those that had not changed their default administrative password. Access to the routers allowed the hacking group to "conceal and otherwise enable a variety of crimes," the DOJ claims, including spearphishing and credential harvesting in the US and abroad. Unlike previous attacks by Fancy Bear -- which the DOJ ties to GRU Military Unit 26165, also known as APT 28, Sofacy Group, and Sednit, among other monikers -- the Ubiquiti intrusion relied on known malware, Moobot. Once the devices were infected by "Non-GRU cybercriminals," GRU agents installed "bespoke scripts and files" to connect to and repurpose them, according to the DOJ. The DOJ also used the Moobot malware to copy and delete the botnet files and data, and then changed the routers' firewall rules to block remote management access. During the court-sanctioned intrusion, the DOJ "enabled temporary collection of non-content routing information" that would "expose GRU attempts to thwart the operation." This did not "impact the routers' normal functionality or collect legitimate user content information," the DOJ claims. "For the second time in two months, we've disrupted state-sponsored hackers from launching cyber-attacks behind the cover of compromised US routers," said Deputy Attorney General Lisa Monaco in a press release.
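The infection vector here -- routers still running their factory-default administrative password -- is the kind of thing a routine inventory audit can catch. A minimal sketch of such a check (the credential list and device records below are hypothetical illustrations, not Ubiquiti's actual defaults or the DOJ's tooling):

```python
# Flag inventory entries whose (username, password) pair matches a
# known factory default. DEFAULTS is an illustrative set; a real audit
# would source it from vendor documentation.
DEFAULTS = {("ubnt", "ubnt"), ("admin", "admin"), ("admin", "password")}

def at_risk(devices):
    """Return names of devices still using a default credential pair."""
    return [d["name"] for d in devices
            if (d["username"], d["password"]) in DEFAULTS]

inventory = [
    {"name": "edge-router-1", "username": "ubnt", "password": "ubnt"},
    {"name": "edge-router-2", "username": "ops", "password": "s3cr3t!long"},
]

print(at_risk(inventory))  # → ['edge-router-1']
```

Devices flagged this way would have been exactly the population Moobot could reach; rotating the credential removes them from the at-risk list.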
Microsoft, Google, Meta, X and Others Pledge To Prevent AI Election Interference
Twenty tech companies working on AI said Friday they had signed a "pledge" to try to prevent their software from interfering in elections, including in the United States. From a report: The signatories range from tech giants such as Microsoft and Google to a small startup that allows people to make fake voices -- the kind of generative-AI product that could be abused in an election to create convincing deepfakes of a candidate. The accord is, in effect, a recognition that the companies' own products create a lot of risk in a year in which 4 billion people around the world are expected to vote in elections. "Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the document reads. The accord is also a recognition that lawmakers around the world haven't responded very quickly to the swift advancements in generative AI, leaving the tech industry to explore self-regulation. "As society embraces the benefits of AI, we have a responsibility to help ensure these tools don't become weaponized in elections," Brad Smith, vice chair and president of Microsoft, said in a statement. The 20 companies to sign the pledge are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic and X.
Amazon Joins Companies Arguing US Labor Board is Unconstitutional
Amazon has joined rocket maker SpaceX and grocery chain Trader Joe's in claiming that a U.S. labor agency's in-house enforcement proceedings violate the U.S. Constitution, as the retail giant faces scores of cases claiming it interfered with workers' rights to organize. From a report: Amazon in a filing made with the National Labor Relations Board (NLRB) on Thursday said it plans to argue that the agency's unique structure violates the company's right to a jury trial. The company also said that limits on the removal of administrative judges and the board's five members, who are appointed by the president, are unconstitutional. The filing came in a pending case accusing Amazon of illegally retaliating against workers at a warehouse in the New York City borough of Staten Island, where employees voted to unionize in 2022.
RFK Jr. Wins Deferred Injunction In Vax Social Media Suit
schwit1 writes: Robert F. Kennedy Jr. won a preliminary injunction against the White House and other federal defendants in his suit alleging government censorship of his statements against vaccines on social media. The injunction, however, will be stayed until the US Supreme Court rules in a related case brought by Missouri and Louisiana. An injunction is warranted because Kennedy showed he is likely to succeed on the merits of his claims, Judge Terry A. Doughty of the US District Court for the Western District of Louisiana said Wednesday. The White House defendants, the Surgeon General defendants, the Centers for Disease Control and Prevention defendants, the Federal Bureau of Investigation defendants, and the Cybersecurity & Infrastructure Security Agency defendants likely violated the Free Speech Clause of the First Amendment, Doughty said. Kennedy's class action complaint, brought with health care professional Connie Sampognaro and Kennedy's nonprofit, Children's Health Defense, alleges that beginning in early 2020 the federal government launched a campaign to induce Facebook, Google (YouTube), and X, formerly known as Twitter, to censor constitutionally protected speech. Specifically, Kennedy said, the government suppressed "facts and opinions about the COVID vaccines that might lead people to become 'hesitant' about COVID vaccine mandates." Kennedy has sufficiently shown that these defendants "jointly participated in the actions of the social media" platforms by "'insinuating' themselves into the social-media companies' private affairs and blurring the line between public and private action," Doughty said.
EU Expands Digital Crackdown on Toxic Content, Dodgy Goods To All Online Platforms
The European Union is expanding its strict digital rulebook on Saturday to almost all online platforms in the bloc, in the next phase of its crackdown on toxic social media content and dodgy ecommerce products that began last year by targeting the most popular services. From a report: The EU's trailblazing Digital Services Act has already kicked in for nearly two dozen of the biggest online platforms, including Facebook, Instagram, YouTube, Amazon and Wikipedia. The DSA imposes a set of strict requirements designed to keep internet users safe online, including making it easier to report counterfeit or unsafe goods or flag harmful or illegal content like hate speech as well as a ban on ads targeted at children. Now the rules will apply to nearly all online platforms, marketplaces and "intermediaries" with users in the 27-nation bloc. Only the smallest businesses, with fewer than 50 employees and annual revenue of less than 10 million euros ($11 million), are exempt. That means thousands more websites could potentially be covered by the regulations. It includes popular ones such as eBay and OnlyFans that escaped being classed as the biggest online platforms requiring extra scrutiny.
Nginx Core Developer Quits Project, Says He No Longer Sees Nginx as 'Free and Open Source Project For the Public Good'
A core developer of Nginx, currently the world's most popular web server, has quit the project, stating that he no longer sees it as "a free and open source project... for the public good." From a report: His fork, freenginx, is "going to be run by developers, and not corporate entities," writes Maxim Dounin, and will be "free from arbitrary corporate actions." Dounin is one of the earliest and still most active coders on the open source Nginx project and one of the first employees of Nginx, Inc., a company created in 2011 to commercially support the steadily growing web server. Nginx is now used on roughly one-third of the world's web servers, ahead of Apache. Nginx Inc. was acquired by Seattle-based networking firm F5 in 2019. Later that year, two of Nginx's leaders, Maxim Konovalov and Igor Sysoev, were detained and interrogated in their homes by armed Russian state agents. Sysoev's former employer, Internet firm Rambler, claimed that it owned the rights to Nginx's source code, as it was developed during Sysoev's tenure at Rambler (where Dounin also worked). While the criminal charges and ownership claims do not appear to have materialized, the implications of a Russian company's intrusion into a popular open source piece of the web's infrastructure caused some alarm. Sysoev left F5 and the Nginx project in early 2022. Later that year, due to the Russian invasion of Ukraine, F5 discontinued all operations in Russia. Some Nginx developers still in Russia formed Angie, developed in large part to support Nginx users in Russia. Dounin technically stopped working for F5 at that point, too, but maintained his role in Nginx "as a volunteer," according to Dounin's mailing list post. Dounin writes in his announcement that "new non-technical management" at F5 "recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers' position."
While that was "quite understandable," given F5's ownership, Dounin wrote, it meant he was "no longer able to control which changes are made in nginx," hence his departure and fork.
OpenAI's Spectacular Video Tool Is Shrouded in Mystery
Every OpenAI release elicits awe and anxiety as capabilities advance, evident in Sora's strikingly realistic AI-generated video clips that went viral while unsettling industries reliant on original footage. But the company is again being secretive in all the wrong places about AI that can be used to spread misinformation. From a report: As usual, OpenAI won't talk about the all-important ingredients that went into this new tool, even as it releases it to an array of people to test before going public. Its approach should be the other way around. OpenAI needs to be more public about the data used to train Sora, and more secretive about the tool itself, given the capabilities it has to disrupt industries and potentially elections. OpenAI Chief Executive Officer Sam Altman said that red-teaming of Sora would start on Thursday, the day the tool was announced and shared with beta testers. Red-teaming is when specialists test an AI model's security by pretending to be bad actors who want to hack or misuse it. The goal is to make sure the same can't happen in the real world. When I asked OpenAI how long it would take to run these tests on Sora, a spokeswoman said there was no set length. "We will take our time to assess critical areas for harms or risks," she added. The company spent about six months testing GPT-4, its most recent language model, before releasing it last year. If it takes the same amount of time to check Sora, that means it could become available to the public in August, a good three months before the US election. OpenAI should seriously consider waiting to release it until after voters go to the polls. [...] OpenAI is meanwhile being frustratingly secretive about the source of the information it used to create Sora. When I asked the company about what datasets were used to train the model, a spokeswoman said the training data came "from content we've licensed, and publicly available content." 
She didn't elaborate further.
Phil Spencer Wants Sony and Nintendo Games on Xbox, But Says He Doesn't Expect It
Microsoft announced this week that four of Xbox's previously-exclusive games are going cross-platform to PlayStation and Switch. Xbox head Phil Spencer says in a new interview that he'd like to see Sony and Nintendo bring their games to Xbox -- but that he isn't holding his breath. From a report: In an interview for journalist Stephen Totilo's Game File newsletter, Spencer said the decision to bring four Xbox games to other consoles wasn't intended to make its rivals follow suit. "This is not for me, like, some kind of bartering system," Spencer explained. "We're doing it for the better of Xbox's business." Despite this, Spencer said he would of course welcome other consoles' games on Xbox, and noted that it would be beneficial for multiplayer games in particular, where building a large online community is important for a game's lifespan. "I will say, when I look at a game like Helldivers 2 -- and it's a great game, kudos to the team shipping on PC and PlayStation -- I'm not exactly sure who it helps in the industry by not being on Xbox," he said. "If you try to twist yourself to say, like, somehow that benefited somebody somewhere. But I get it. There's a legacy in console gaming that we're going to benefit by shipping games and not putting them on other places. We do the same thing." Spencer also noted that Helldivers 2 -- which Sony released on PlayStation and PC on the same day -- is doing well on the latter. "I will say shipping more games in more places and making them more accessible to more people is a good part of the gaming business," he said. Further reading: Phil Spencer Puts Apple's Money Where His Mouth Is.
Google 'Talk To a Live Rep' Brings Pixel's Hold for Me To All Search Users
Google Search Labs is testing a "Talk to a Live Representative" feature where it will "help you place the call, wait on hold, and then give you a call once a live representative is available." From a report: When you search for customer service numbers, which Google recently started surfacing in Knowledge Panels, you might see a prominent "Talk to a live representative" prompt. Very simply, Google will call the support line "for you and wait on hold until a customer service representative picks up." At that time, Google will call you so you can get on with your business. To "Request a call," you first specify a reason why you're calling. In the case of airlines, the options are: Update existing booking, Luggage issue, Canceled flight, Other issue, Flight check-in, Missed my flight, and Delayed flight. You then provide your phone number, with Google sending SMS updates. The Request page will note the estimated wait time. After submitting, you can cancel the request at any time.
VMware Admits Sweeping Broadcom Changes Are Worrying Customers
An anonymous reader quotes a report from Ars Technica: Broadcom has made a lot of changes to VMware since closing its acquisition of the company in November. On Wednesday, VMware admitted that these changes are worrying customers. With customers mulling alternatives and partners complaining, VMware is trying to do damage control and convince people that change is good. Not surprisingly, the plea comes from a VMware marketing executive: Prashanth Shenoy, VP of product and technical marketing for the Cloud, Infrastructure, Platforms, and Solutions group at VMware. In Wednesday's announcement, Shenoy admitted that VMware "has been all about change" since being scooped up for $61 billion. This has resulted in "many questions and concerns" as customers "evaluate how to maximize value from" VMware products. Among these changes is VMware ending perpetual license sales in favor of a subscription-based business model. VMware had a history of relying on perpetual licensing; VMware called the model its "most renowned" a year ago. Shenoy's blog sought to provide reasoning for the change, with the executive writing that "all major enterprise software providers are on [subscription models] today." However, the idea that "everyone's doing it" has done little to mollify affected customers who prefer paying for something once and owning it indefinitely (while paying for associated support costs). Customers are also dealing with budget concerns, with already paid-for licenses set to lose support and the only alternative being a monthly fee. Shenoy's blog, though, focused on license portability. "This means you will be able to deploy on-premises and then take your subscription at any time to a supported Hyperscaler or VMware Cloud Services Provider environment as desired. You retain your license subscription as you move," Shenoy wrote, noting new Google Cloud VMware Engine license portability support for VMware Cloud Foundation.
Further, Shenoy claimed the discontinuation of VMware products so that Broadcom could focus on VMware Cloud Foundation and vSphere Foundation would be beneficial, because "offering a few offerings that are lower in price on the high end and are packed with more value for the same or less cost on the lower end makes business sense for customers, partners, and VMware." VMware's Wednesday post also addressed Broadcom taking VMware's biggest customers direct, removing channel partners from the equation: "It makes business sense for Broadcom to have close relationships with its most strategic VMware customers to make sure VMware Cloud Foundation is being adopted, used, and providing customer value. However, we expect there will be a role change in accounts that will have to be worked through so that both Broadcom and our partners are providing the most value and greatest impact to strategic customers. And, partners will play a critical role in adding value beyond what Broadcom may be able." "Broadcom identified things that needed to change and, as a responsible company, made the changes quickly and decisively," added Shenoy. "The changes that have taken place over the past 60+ days were absolutely necessary."
Scientific Journal Publishes AI-Generated Rat With Gigantic Penis
Jordan Pearson reports via Motherboard: A peer-reviewed science journal published a paper this week filled with nonsensical AI-generated images, which featured garbled text and a wildly incorrect diagram of a rat penis. The episode is the latest example of how generative AI is making its way into academia with concerning effects. The paper, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," was published on Wednesday in the open access journal Frontiers in Cell and Developmental Biology by researchers from Hong Hui Hospital and Jiaotong University in China. The paper itself is unlikely to be interesting to most people without a specific interest in the stem cells of small mammals, but the figures published with the article are another story entirely. [...] It's unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper's U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the obviously incorrect images. (The second reviewer is based in India.) "As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it's the publisher's responsibility to make the decision," Dai said. "You should contact Frontiers about their policy of AI-generated figures." Frontiers' policies for authors state that generative AI is allowed, but that it must be disclosed -- which the paper's authors did -- and the outputs must be checked for factual accuracy. "Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology," Frontiers' policy states. "This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript."
On Thursday afternoon, after the article and its AI-generated figures circulated on social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected.
OSIRIS-REx's Final Haul: 121.6 Grams From Asteroid Bennu
According to NASA, the OSIRIS-REx mission has successfully collected 121.6 grams, or almost 4.3 ounces, of rock and dust from the asteroid Bennu. Universe Today reports: These samples have been a long time coming. The OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) was approved by NASA back in 2011 and launched in September 2016. It reached its target, the carbonaceous Apollo group asteroid 101955 Bennu, in December 2018. After spending months studying the asteroid and reconnoitring for a suitable sampling location, it selected one in December 2019. After two sampling rehearsals, the spacecraft gathered its sample on October 20th, 2020. In September 2023, the sample finally returned to Earth. For OSIRIS-REx to be successful, it had to collect at least 60 grams of material. With a final total that is double that, it should open up more research opportunities and allow more of the material to be held untouched for future research. NASA says they will preserve 70% of the sample for the future, including for future generations. The next step is for the material to be put into containers and sent to researchers. More than 200 researchers around the world will receive samples. Many of the samples will find their way to scientists at NASA and institutions in the US, while others will go to researchers at institutions associated with the Canadian Space Agency, JAXA, and other partner nations. Canada will receive 4% of the sample, the first time that Canada's scientific community will have direct access to a returned asteroid sample.
Air Canada Found Liable For Chatbot's Bad Advice On Plane Tickets
An anonymous reader quotes a report from CBC.ca: Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot. In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions." "This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." In a decision released this week, Rivers ordered Air Canada to pay Jake Moffatt $812 to cover the difference between the airline's bereavement rates and the $1,630.36 they paid for full-price tickets to and from Toronto bought after their grandmother died.
Leaked Emails Show Hugo Awards Self-Censoring To Appease China
samleecole shares a report from 404 Media: A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China. Earlier this month, the Hugo Awards came under fire with accusations of censorship when several authors were excluded from the awards, including Neil Gaiman, R. F. Kuang, Xiran Jay Zhao, and Paul Weimer. These authors' works had earned enough votes to make them finalists, but were deemed "ineligible" for reasons not disclosed by Hugo administrators. The Hugo Awards are one of the largest and most important science fiction awards. [...] The emails, which show the process of compiling spreadsheets of the top 10 works in each category and checking them for "sensitive political nature" to see if they were "an issue in China," were obtained by fan writer Chris M. Barkley and author Jason Sanford, and published on fandom news site File 770 and Sanford's Patreon, where they uploaded the full PDF of the emails. They were provided to them by Hugo Awards administrator Diane Lacey. Lacey confirmed in an email to 404 Media that she was the source of the emails. "In addition to the regular technical review, as we are happening in China and the *laws* we operate under are different...we need to highlight anything of a sensitive political nature in the work," Dave McCarty, head of the 2023 awards jury, directed administrators in an email. "It's not necessary to read everything, but if the work focuses on China, taiwan, tibet, or other topics that may be an issue *in* China...that needs to be highlighted so that we can determine if it is safe to put it on the ballot of if the law will require us to make an administrative decision about it." 
The email replies to this directive show administrators combing through authors' social media presences and public travel histories, including from before they were nominated for the 2023 awards, and their writing and bodies of work beyond just what they were nominated for. Among dozens of other posts and writings, they note Weimer's negative comments about the Chinese government in a Patreon post and misspell Zhao's name and work (calling their novel Iron Widow "The Iron Giant"). About author Naseem Jamnia, an administrator allegedly wrote, "Author openly describes themselves as queer, nonbinary, trans, (And again, good for them), and frequently writes about gender, particularly non-binary. The cited work also relies on these themes. I include them because I don't know how that will play in China. (I suspect less than well.)" "As far as our investigation is concerned there was no reason to exclude the works of Kuang, Gaiman, Weimer or Xiran Jay Zhao, save for being viewed as being undesirable in the view of the Hugo Award admins which had the effect of being the proxies [of the] Chinese government," Sanford and Barkley wrote. In conjunction with the email trove, Sanford and Barkley also released an apology letter from Lacey, in which she explains some of her role in the awards vetting process and also blames McCarty for his role in the debacle. McCarty, along with board chair Kevin Standlee, resigned earlier this month.
Apple Confirms iOS 17.4 Removes Home Screen Web Apps In the EU
Apple has now offered an explanation for why iOS 17.4 removes support for Home Screen web apps in the European Union. Spoiler: it's because of the Digital Markets Act that went into effect last August. 9to5Mac reports: Last week, iPhone users in the European Union noticed that they were no longer able to install and run web apps on their iPhone's Home Screen in iOS 17.4. Apple has added a number of features over the years to improve support for progressive web apps on iPhone. For example, iOS 16.4 allowed PWAs to deliver push notifications with icon badges. One change in iOS 17.4 is that the iPhone now supports alternative browser engines in the EU. This allows companies to build browsers that don't use Apple's WebKit engine for the first time. Apple says that this change, required by the Digital Markets Act, is why it has been forced to remove Home Screen web apps support in the European Union. Apple explains that it would have to build an "entirely new integration architecture that does not currently exist in iOS" to address the "complex security and privacy concerns associated with web apps using alternative browser engines." This work "was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps," Apple explains. "And so, to comply with the DMA's requirements, we had to remove the Home Screen web apps feature in the EU." "EU users will be able to continue accessing websites directly from their Home Screen through a bookmark with minimal impact to their functionality," Apple continues. It's understandable that Apple wouldn't offer support for Home Screen web apps for third-party browsers. But why did it also remove support for Home Screen web apps for Safari? Unfortunately, that's another side effect of the Digital Markets Act. The DMA requires that all browsers have equality, meaning that Apple can't favor Safari and WebKit over third-party browser engines. 
Therefore, because it can't offer Home Screen web apps support for third-party browsers, it also can't offer support via Safari. [...] iOS 17.4 is currently available to developers and public beta testers, and is slated for a release in early March. The full explanation was published on Apple's developer website today.
Indian Government Moves To Ban ProtonMail After Bomb Threat
Following a hoax bomb threat sent via ProtonMail to schools in Chennai, India, police in the state of Tamil Nadu put in a request to block the encrypted email service in the region since they have been unable to identify the sender. According to Hindustan Times, that request was granted today. From the report: The decision to block Proton Mail was taken at a meeting of the 69A blocking committee on Wednesday afternoon. Under Section 69A of the IT Act, the designated officer, on approval by the IT Secretary and at the recommendation of the 69A blocking committee, can issue orders to any intermediary or a government agency to block any content for national security, public order and allied reasons. HT could not ascertain if a blocking order will be issued to Apple and Google to block the Proton Mail app. The final order to block the website has not yet been sent to the Department of Telecommunications but the MeitY has flagged the issue with the DoT. During the meeting, the nodal officer representing the Tamil Nadu government submitted that a bomb threat was sent to multiple schools using ProtonMail, HT has learnt. The police attempted to trace the IP address of the sender but to no avail. They also tried to seek help from the Interpol but that did not materialise either, the nodal officer said. During the meeting, HT has learnt, MeitY representatives noted that getting information from Proton Mail, on other criminal matters, not necessarily linked to Section 69A related issues, is a recurrent problem. 
Although Proton Mail is end-to-end encrypted, which means the content of the emails cannot be intercepted and can only be seen by the sender and recipient if both are using Proton Mail, its privacy policy states that due to the nature of the SMTP protocol, certain email metadata -- including sender and recipient email addresses, the IP address incoming messages originated from, attachment name, message subject, and message sent and received times -- is available to the company. "We condemn a potential block as a misguided measure that only serves to harm ordinary people. Blocking access to Proton is an ineffective and inappropriate response to the reported threats. It will not prevent cybercriminals from sending threats with another email service and will not be effective if the perpetrators are located outside of India," said ProtonMail in a statement. "We are currently working to resolve this situation and are investigating how we can best work together with the Indian authorities to do so. We understand the urgency of the situation and are completely clear that our services are not to be used for illegal purposes. We routinely remove users who are found to be doing so and are willing to cooperate wherever possible within international cooperation agreements." Read more of this story at Slashdot.
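The metadata point is worth making concrete. In a minimal sketch (addresses and values are invented for illustration), the headers of a message whose body is PGP-encrypted remain readable to any server that relays it:

```python
from email.message import EmailMessage

# Even when the body is end-to-end encrypted, SMTP headers travel in
# the clear and are visible to the provider and any relaying server.
msg = EmailMessage()
msg["From"] = "sender@example.com"        # visible metadata
msg["To"] = "recipient@example.com"       # visible metadata
msg["Subject"] = "Meeting notes"          # visible metadata
msg["Date"] = "Thu, 15 Feb 2024 10:00:00 +0000"
# The body itself is opaque ciphertext to everyone but the endpoints:
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n...ciphertext...\n-----END PGP MESSAGE-----"
)

# The transport sees every header, encrypted body or not:
visible = {k: v for k, v in msg.items()}
print(visible["Subject"])  # prints "Meeting notes"
```

This is why Proton can hand over sender addresses and originating IPs while still being unable to read message contents.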
AMC To Pay $8 Million For Allegedly Sharing Subscribers' Viewing History With Tech Companies
An anonymous reader quotes a report from Ars Technica: On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE. The settlement comes in response to allegations that AMC illegally shared subscribers' viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA). Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing "information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider." It was originally passed to protect individuals' right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan. The so-called "Bork Tapes" revealed little -- other than that the judge frequently rented spy thrillers and British costume dramas -- but lawmakers recognized that speech could be chilled by monitoring anyone's viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint (PDF) that "the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before." According to subscribers suing, AMC allegedly installed tracking technologies -- including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology -- on its website, allowing their personally identifying information to be connected with their viewing history. [...] 
If it's approved, AMC has agreed to "suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC's disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual." All registered users of AMC services who "requested or obtained video content on at least one of the six AMC services" between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9. In addition to distributing the $8.3 million settlement fund among class members, subscribers will also receive a free one-week digital subscription. Read more of this story at Slashdot.
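To see how a pixel can disclose viewing history at all: the page embeds a tiny image whose URL carries event parameters, including the address of the page being watched. A hedged sketch follows -- the endpoint and parameter names are illustrative, not Meta's actual API:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical pixel request fired when a subscriber opens a title page.
params = {
    "id": "123456789",    # the site's pixel/account ID
    "ev": "PageView",     # event name
    # The page URL itself reveals which video was requested:
    "dl": "https://www.amcplus.com/movies/some-title",
}
pixel_url = "https://tracker.example.com/tr?" + urlencode(params)

# The third party receiving the request can trivially recover the title:
leaked = parse_qs(urlparse(pixel_url).query)["dl"][0]
print(leaked)  # prints https://www.amcplus.com/movies/some-title
```

If the tracker can also tie the request to a logged-in identity (a cookie, or a matched email hash), the result is exactly the pairing the VPPA prohibits: a named person linked to a specific video.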
Google Enables OS Upgrades For Older PCs Post-Windows 10 Support Cutoff
Google said it will allow businesses to install ChromeOS Flex on their Windows devices, "potentially preventing millions of PCs from hitting landfills after Microsoft ends support for Windows 10 next year," reports Reuters. ChromeOS Flex will let users keep using their Windows 10-era hardware while also providing regular security updates and features like data encryption. From the report: ChromeOS is significantly less popular than other operating systems. In January 2024, it held a 1.8% share of the worldwide desktop OS market, far behind Windows' share of about 73%, according to data from research firm Statcounter. ChromeOS has struggled with wider adoption due to its incompatibility with legacy Windows applications and productivity suites used by businesses. Google said that ChromeOS would allow users to stream legacy Windows and productivity applications, which will help deliver them to devices by running the apps in a data center. Read more of this story at Slashdot.
Microsoft Teases Next-Gen Xbox With 'Largest Technical Leap', New 'Unique' Hardware
Tom Warren reports via The Verge: Microsoft is teasing the potential for unique Xbox hardware in the future and a powerful next-gen console. Four previously exclusive Xbox games are officially coming to the PS5 and Nintendo Switch soon, and Microsoft wants to reassure Xbox fans that it's still very much invested in the future of its platform and hardware. In an official Xbox podcast today, Xbox president Sarah Bond teased that Microsoft will deliver 'the largest technical leap' with the next-generation Xbox: "We've got more to come. There's some exciting stuff coming out in hardware that we're going to share this holiday. We're also invested in the next-generation roadmap. What we're really focused on there is delivering the largest technical leap you will have ever seen in a hardware generation, which makes it better for players and better for creators and the visions that they're building." Speaking to The Verge, Microsoft Gaming CEO Phil Spencer went a step further, teasing that the Xbox hardware teams are thinking about building different kinds of hardware. "I'm very proud of the work that the hardware team is doing, not only for this year, but also into the future," says Spencer. "[We're] really thinking about creating hardware that sells to gamers because of the unique aspects of the hardware. It's kind of an unleashing of the creative capability of our hardware team that I'm really excited about." Perhaps that unique hardware is an Xbox handheld. "We see a lot of opportunity in different types of devices, and will share specifics on our future hardware plans as soon as we are ready," says Microsoft in an Xbox blog post today. Read more of this story at Slashdot.
OpenAI's Sora Turns AI Prompts Into Photorealistic Videos
An anonymous reader quotes a report from Wired: We already know that OpenAI's chatbots can pass the bar exam without going to law school. Now, just in time for the Oscars, a new OpenAI app called Sora hopes to master cinema without going to film school. For now a research product, Sora is going out to a few select creators and a number of security experts who will red-team it for safety vulnerabilities. OpenAI plans to make it available to all wannabe auteurs at some unspecified date, but it decided to preview it in advance. Other companies, from giants like Google to startups like Runway, have already revealed text-to-video AI projects. But OpenAI says that Sora is distinguished by its striking photorealism -- something I haven't seen in its competitors -- and its ability to produce clips as long as one minute, far beyond the brief snippets other models typically manage. The researchers I spoke to won't say how long it takes to render all that video, but when pressed, they described it as more in the "going out for a burrito" ballpark than "taking a few days off." If the hand-picked examples I saw are to be believed, the effort is worth it. OpenAI didn't let me enter my own prompts, but it shared four instances of Sora's power. (None approached the purported one-minute limit; the longest was 17 seconds.) The first came from a detailed prompt that sounded like an obsessive screenwriter's setup: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes." The result is a convincing view of what is unmistakably Tokyo, in that magic moment when snowflakes and cherry blossoms coexist. The virtual camera, as if affixed to a drone, follows a couple as they slowly stroll through a streetscape. One of the passersby is wearing a mask.
Cars rumble by on a riverside roadway to their left, and to the right shoppers flit in and out of a row of tiny shops. It's not perfect. Only when you watch the clip a few times do you realize that the main characters -- a couple strolling down the snow-covered sidewalk -- would have faced a dilemma had the virtual camera kept running. The sidewalk they occupy seems to dead-end; they would have had to step over a small guardrail to a weird parallel walkway on their right. Despite this mild glitch, the Tokyo example is a mind-blowing exercise in world-building. Down the road, production designers will debate whether it's a powerful collaborator or a job killer. Also, the people in this video -- who are entirely generated by a digital neural network -- aren't shown in close-up, and they don't do any emoting. But the Sora team says that in other instances they've had fake actors showing real emotions. "It will be a very long time, if ever, before text-to-video threatens actual filmmaking," concludes Wired. "No, you can't make coherent movies by stitching together 120 of the minute-long Sora clips, since the model won't respond to prompts in the exact same way -- continuity isn't possible. But the time limit is no barrier for Sora and programs like it to transform TikTok, Reels, and other social platforms." "In order to make a professional movie, you need so much expensive equipment," says Bill Peebles, another researcher on the project. "This model is going to empower the average person making videos on social media to make very high-quality content." Further reading: OpenAI Develops Web Search Product in Challenge To Google. Read more of this story at Slashdot.
Moon Company Intuitive Machines Begins First Mission After SpaceX Launch
Texas-based Intuitive Machines' inaugural moon mission began early Thursday morning, heading toward what could be the first U.S. lunar landing in more than 50 years. From a report: Intuitive Machines' Nova-C lander launched from Florida on SpaceX's Falcon 9 rocket, beginning the IM-1 mission. "It is a profoundly humbling moment for all of us at Intuitive Machines. The opportunity to return the United States to the moon for the first time since 1972 is a feat of engineering that demands a hunger to explore," Intuitive Machines vice president of space systems Trent Martin said during a press conference. The IM-1 lander, named "Odysseus" after the mythological Greek hero, is carrying 12 government and commercial payloads -- six of which are for NASA under a $118 million contract. NASA leadership emphasized before the launch that "IM-1 is an Intuitive Machines' mission, it's not a NASA mission." But it marks the second mission under NASA's Commercial Lunar Payload Services (CLPS) initiative, which aims to deliver science projects and cargo to the moon with increasing regularity in support of the agency's Artemis crew program. The agency views CLPS missions as "a learning experience," NASA's deputy associate administrator for exploration in the science mission directorate, Joel Kearns, told press before the launch. Read more of this story at Slashdot.