Feed Slashdot

Link https://slashdot.org/
Feed https://rss.slashdot.org/Slashdot/slashdotMain
Copyright Slashdot Media. All Rights Reserved.
Updated 2025-06-08 17:02
Thailand Cuts Internet and Power Supply To Some Areas in Myanmar in Blow To Scam Centers
Thailand cut power supply, fuel and internet to some border areas with Myanmar on Wednesday in an attempt to choke off scam syndicates operating there, which have become a growing security concern. Reuters: Scam compounds in Southeast Asia are suspected to have entrapped hundreds of thousands of people in illegal online and telecom operations, generating billions of dollars annually, according to a 2023 U.N. report. Thai Interior Minister Anutin Charnvirakul visited the Provincial Electricity Authority headquarters in Bangkok on Wednesday to oversee the effort to fight the crime rings. "They may turn to other sources of power supply or generate their own electricity. In the Thai Security Council orders, it also includes the halt in supplying oil and internet to them, which means that from now on any damage that occurs will have no connection to any resources in Thailand." Read more of this story at Slashdot.
Climate Change Target of 2C Is 'Dead'
An anonymous reader quotes a report from The Guardian: The pace of global heating has been significantly underestimated, according to renowned climate scientist Prof James Hansen, who said the international 2C target is "dead." A new analysis by Hansen and colleagues concludes that both the impact of recent cuts in sun-blocking shipping pollution, which has raised temperatures, and the sensitivity of the climate to increasing fossil fuel emissions are greater than thought. The group's results are at the high end of estimates from mainstream climate science but cannot be ruled out, independent experts said. If correct, they mean even worse extreme weather will come sooner and there is a greater risk of passing global tipping points, such as the collapse of the critical Atlantic ocean currents. Hansen, at Columbia University in the US, sounded the alarm to the general public about climate breakdown in testimony he gave to a US congressional committee in 1988. "The Intergovernmental Panel on Climate Change (IPCC) defined a scenario which gives a 50% chance to keep warming under 2C -- that scenario is now impossible," he said. "The 2C target is dead, because the global energy use is rising, and it will continue to rise." The new analysis said global heating is likely to reach 2C by 2045, unless solar geoengineering is deployed. [...] In the new study, published in the journal Environment: Science and Policy for Sustainable Development, Hansen's team said: "Failure to be realistic in climate assessment and failure to call out the fecklessness of current policies to stem global warming is not helpful to young people." [...] Hansen said the point of no return could be avoided, based on the growing conviction of young people that they should follow the science. He called for a carbon fee and dividend policy, where all fossil fuels are taxed and the revenue returned to the public. "The basic problem is that the waste products of fossil fuels are still dumped in the air free of charge," he said. He also backed the rapid development of nuclear power. Hansen also supported research on cooling the Earth using controversial geoengineering techniques to block sunlight, which he prefers to call "purposeful global cooling." He said: "We do not recommend implementing climate interventions, but we suggest that young people not be prohibited from having knowledge of the potential and limitations of purposeful global cooling in their toolbox." Political change is needed to achieve all these measures, Hansen said: "Special interests have assumed far too much power in our political systems. In democratic countries the power should be with the voter, not with the people who have the money. That requires fixing some of our democracies, including the US." Read more of this story at Slashdot.
Meta CTO: 2025 Make or Break Year for Metaverse
Meta's metaverse ambitions face a decisive year in 2025, with Chief Technology Officer Andrew Bosworth warning employees that the project could become either "a legendary misadventure" or prove visionary, Business Insider is reporting, citing an internal memo. Bosworth called for increased sales and user engagement for Meta's mixed reality products, noting the company plans to launch several AI-powered wearable devices. The tech giant's Reality Labs division, which develops virtual and augmented reality products, reported record revenue of $1.08 billion in the fourth quarter but posted its largest-ever quarterly loss of $4.97 billion. Meta CEO Mark Zuckerberg told staff the company's AI-powered smart glasses, which sold over 1 million units in 2024, marked a "great start" but would not significantly impact the business. The Reality Labs unit has accumulated losses of approximately $60 billion since 2020.Read more of this story at Slashdot.
Physicists Confirm The Existence of a Third Form of Magnetism
Scientists have demonstrated control over a newly theorized type of magnetism, known as altermagnetism, by manipulating nanoscale magnetic whirlpools in an ultra-thin wafer of manganese telluride. "Our experimental work has provided a bridge between theoretical concepts and real-life realization, which hopefully illuminates a path to developing altermagnetic materials for practical applications," says University of Nottingham physicist Oliver Amin, who led the research with PhD student Alfred Dal Din. From the report: Using a device that accelerates electrons to blinding speeds, a team led by researchers from the University of Nottingham showered an ultra-thin wafer of manganese telluride with X-rays of different polarizations, revealing changes on a nanometer scale reflecting magnetic activity unlike anything seen before. [...] More recently, a third configuration of particles in magnetic materials was theorized. In what's referred to as altermagnetism, particles are arranged in a canceling fashion like antiferromagnetism, yet rotated just enough to allow for confined forces on a nanoscale -- not enough to pin a grocery list to your freezer, but with discrete properties that engineers are keen to manipulate into storing data or channeling energy. "Altermagnets consist of magnetic moments that point antiparallel to their neighbors," explains University of Nottingham physicist Peter Wadley. "However, each part of the crystal hosting these tiny moments is rotated with respect to its neighbors. This is like antiferromagnetism with a twist! But this subtle difference has huge ramifications." Experiments have since confirmed the existence of this in-between 'alter' magnetism. However, none had directly demonstrated it was possible to manipulate its tiny magnetic vortices in ways that might prove useful. Wadley and his colleagues demonstrated that a sheet of manganese telluride just a few nanometers thick could be distorted in ways that intentionally created distinct magnetic whirlpools on the wafer's surface. This research was published in the journal Nature. Read more of this story at Slashdot.
USPS Halts All Packages From China, Sending the Ecommerce Industry Into Chaos
The United States Postal Service has suspended all package shipments from China and Hong Kong following President Donald Trump's decision to eliminate the de minimis exemption, which previously allowed small packages under $800 to enter the U.S. without import duties. "The move could potentially create chaos and confusion across the online shopping industry, as well as make purchases more expensive for consumers, especially because many global manufacturers and internet sellers are located in China," reports Wired. "Shoppers are now on the hook not only for the additional 10 percent tariff, but also whatever original tax rate their products were exempted from until Tuesday." From the report: Cindy Allen, who has worked in international trade for over 30 years and is the CEO of the consulting firm Trade Force Multiplier, gave WIRED an example of how much additional cost the tariff will incur: A woman's dress made of synthetic fiber shipped from China through de minimis will now be subject to a regular 16 percent tariff, a 7.5 percent Section 301 duty specifically for goods from China, the new 10 percent tariff required by Trump, additional processing fees and customs brokerage fees, and perhaps increased brokering and handling costs due to the sudden change in rules. "Will the dress that was $5 now cost $5.50 or $15?" says Allen. "That we don't know. It depends on how those retailers react and change their business models." In the immediate term, clearing customs will become a challenge for most ecommerce companies. Their long-term concern, though, is the potential impact on profitability. The appeal of Temu and Shein and similar Chinese ecommerce companies is how affordable their products are. If that changes, the ecommerce landscape and consumer behavior in the US may change significantly as well. While the USPS has announced the suspension of accepting any parcels from China and Hong Kong, CBP hasn't elaborated on how the agency will enforce Trump's new tariffs other than saying in an announcement that it will reject de minimis exemption requests from China starting today.Read more of this story at Slashdot.
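To make the duty stacking in Allen's example concrete, here is a rough back-of-the-envelope calculation in Python. The three rates (16% regular tariff, 7.5% Section 301 duty, 10% new tariff) and the $5 product price come from the report; the flat processing and brokerage fees are placeholder assumptions, since the article notes those costs are not yet known.

```python
# Rough landed-cost estimate for a $5 dress formerly shipped duty-free under
# de minimis. Duty rates are from the article; the fee figures below are
# illustrative placeholders, not actual broker charges.

base_price = 5.00          # example product price cited by Cindy Allen
regular_tariff = 0.16      # standard tariff for a synthetic-fiber dress
section_301 = 0.075        # Section 301 duty on goods from China
new_tariff = 0.10          # additional 10% tariff
processing_fee = 0.50      # assumed flat customs processing fee (placeholder)
brokerage_fee = 1.00       # assumed customs brokerage fee (placeholder)

duties = base_price * (regular_tariff + section_301 + new_tariff)
landed_cost = base_price + duties + processing_fee + brokerage_fee

print(f"Duties alone: ${duties:.2f}")            # roughly $1.68 on a $5 item
print(f"Estimated landed cost: ${landed_cost:.2f}")
```

Even before the unknown handling costs, the stacked duties alone add about a third to the item's price, which is why Allen's answer ranges from $5.50 to $15.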
UK Team Invents Self-Healing Road Surface To Prevent Potholes
An anonymous reader quotes a report from The Guardian: For all motorists, but perhaps the Ferrari-collecting rocker Rod Stewart in particular, it will be music to the ears: researchers have developed a road surface that heals when it cracks, preventing potholes without a need for human intervention. The international team devised a self-healing bitumen that mends cracks as they form by fusing the asphalt back together. In laboratory tests, pieces of the material repaired small fractures within an hour of them first appearing. "When you close the cracks you prevent potholes forming in the future and extend the lifespan of the road," said Dr Jose Norambuena-Contreras, a researcher on the project at Swansea University. "We can extend the surface lifespan by 30%." Potholes typically start from small surface cracks that form under the weight of traffic. These allow water to seep into the road surface, where it causes more damage through cycles of freezing and thawing. Bitumen, the sticky black substance used in asphalt, becomes susceptible to cracking when it hardens through oxidation. To make the self-healing bitumen, the researchers mixed in tiny porous plant spores soaked in recycled oils. When the road surface is compressed by passing traffic, it squeezes the spores, which release their oil into any nearby cracks. The oils soften the bitumen enough for it to flow and seal the cracks. Working with researchers at King's College London and Google Cloud, the scientists used machine learning, a form of artificial intelligence, to model the movement of organic molecules in bitumen and simulate the behaviour of the self-healing material to see how it responded to newly formed cracks. The material could be scaled up for use on British roads in a couple of years, the researchers believe. Google published a blog post with more information about the "self-healing" asphalt.Read more of this story at Slashdot.
OpenAI Partners With California State University System
OpenAI is partnering with the California State University (CSU) system to bring ChatGPT Edu to the 23-campus community of 500,000 students, calling it the "largest implementation of ChatGPT by any single organization or company anywhere in the world." Fortune reports: As part of ChatGPT Edu, members of the CSU community will get special access to ChatGPT-4o and advanced research and analysis capabilities. The partnership allows schools to create customizable AI chatbots for any project, like a campus IT help desk bot, financial aid assistant, chemistry tutor, or orientation buddy. CSU also plans to introduce free AI skills training for its students, faculty, and staff as well as connect students with AI-related apprenticeship programs. CSU joins a number of other schools with ChatGPT Edu partnerships, including Arizona State University (ASU), the University of Texas at Austin, the University of Oxford, Columbia University, and the Wharton School at the University of Pennsylvania. Read more of this story at Slashdot.
Apple Announces 'Invites' App, Raises AppleCare+ Subscription Prices For iPhone
Apple has announced Apple Invites, a new iPhone app designed to help you manage your social life. Engadget reports: The idea behind Apple Invites is that you can create and share custom invitations for any event or occasion. You can use your own photos or backgrounds in the app as an image for the invite. Image Playground is built into Invites and you can use that to generate an image for the invitation instead. Other Apple Intelligence features such as Writing Tools are baked in as well, in case you need a hand to craft the right message for your invitation. The tech giant also said it was increasing AppleCare+ subscription prices for the iPhone, "raising the cost by 50 cents for all models in the United States," according to MacRumors. From the report: Standard AppleCare+ for the iPhone 16 models is now priced at $10.49 per month, for example, up from the prior $9.99 per month price. The 50 cent price increase applies to all available AppleCare+ plans for Apple's current iPhone lineup, and it includes both the standard plan and the Theft and Loss plan. The two-year AppleCare+ subscription prices have not changed, nor have the service fees and deductibles. The increased prices are only applicable when paying for AppleCare+ on a monthly basis. Apple has not raised the prices of AppleCare+ subscription plans for the iPad, Mac, or Apple Watch. Read more of this story at Slashdot.
Google Removes Pledge To Not Use AI For Weapons From Website
Google has updated its public AI principles page to remove a pledge to not build AI for weapons or surveillance. TechCrunch reports: Asked for comment, the company pointed TechCrunch to a new blog post on "responsible AI." It notes, in part, "we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security." Google's newly updated AI principles note the company will work to "mitigate unintended or harmful outcomes and avoid unfair bias," as well as align the company with "widely accepted principles of international law and human rights." Further reading: Google Removes 'Don't Be Evil' Clause From Its Code of ConductRead more of this story at Slashdot.
AI-Generated Slop Is Already In Your Public Library
An anonymous reader writes: Low quality books that appear to be AI generated are making their way into public libraries via their digital catalogs, forcing librarians who are already understaffed to either sort through a functionally infinite number of books to determine what is written by humans and what is generated by AI, or to spend taxpayer dollars to provide patrons with information they don't realize is AI-generated. Public libraries primarily use two companies to manage and lend ebooks: Hoopla and OverDrive, the latter of which people may know from its borrowing app, Libby. Both companies have a variety of payment options for libraries, but generally libraries get access to the companies' catalog of books and pay for customers to be able to borrow those books, with different books having different licenses and prices. A key difference is that with OverDrive, librarians can pick and choose which books in OverDrive's catalog they want to give their customers the option of borrowing. With Hoopla, librarians have to opt into Hoopla's entire catalog, then pay for whatever their customers choose to borrow from that catalog. The only way librarians can limit what Hoopla books their customers can borrow is by setting a limit on the price of books. For example, a library can use Hoopla but make it so their customers can only borrow books that cost the library $5 per use. On one hand, Hoopla's gigantic catalog, which includes ebooks, audio books, and movies, is a selling point because it gives librarians access to more for a cheaper price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for a healthier liver might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver. The book was authored by Magda Tangy, who has no online footprint, and who has an AI-generated profile picture on Amazon, where her books are also for sale. Note the earring that is only on one ear and seems slightly deformed. A spokesperson for deepfake detection company Reality Defender said that according to their platform, the headshot is 85 percent likely to be AI-generated. [...] It is impossible to say exactly how many AI-generated books are included in Hoopla's catalog, but books that appeared to be AI-generated were not hard to find for most of the search terms I tried on the platform. "This type of low quality, AI generated content, is what we at 404 Media and others have come to call AI slop," writes Emanuel Maiberg. "Librarians, whose job it is in part to curate what books their community can access, have been dealing with similar problems in the publishing industry for years, and have a different name for it: vendor slurry." "None of the librarians I talked to suggested the AI-generated content needed to be banned from Hoopla and libraries only because it is AI-generated. It might have its place, but it needs to be clearly labeled, and more importantly, provide borrowers with quality information." Sarah Lamdan, deputy director of the American Library Association, told 404 Media: "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well-identified in library catalogs, so it is clear to readers that the books were not written by human authors. If library visitors choose to read AI eBooks, they should do so with the knowledge that the books are AI-generated." Read more of this story at Slashdot.
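Since the article says a per-borrow price cap is the only lever librarians have over Hoopla's all-or-nothing catalog, a minimal sketch of that filter is below. The catalog entries, field names, and $5 cap are illustrative only, not Hoopla data or its actual API; the point is that the cap filters by cost, not by quality or AI origin.

```python
# Illustrative sketch of the pay-per-borrow model described in the article:
# the library opts into the whole catalog and can only exclude titles by
# capping the per-borrow cost. Data and field names are made up.

catalog = [
    {"title": "Fatty Liver Diet Cookbook", "cost_per_borrow": 3.99},
    {"title": "A Human-Written History of Libraries", "cost_per_borrow": 7.50},
    {"title": "2000 Days of Simple Recipes", "cost_per_borrow": 2.49},
]

def borrowable(catalog, max_cost_per_borrow=5.00):
    """Return titles patrons can borrow under the library's price cap."""
    return [book for book in catalog if book["cost_per_borrow"] <= max_cost_per_borrow]

for book in borrowable(catalog):
    print(book["title"])  # cheap AI-generated titles pass this filter just as easily
```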
RISC-V Mainboard For the Framework Laptop 13 Is Now Available
The DeepComputing RISC-V Mainboard that Framework announced last year for its 13-inch laptops is now available for $199. Liliputing reports: If you already have a Framework Laptop 13 with an Intel or AMD motherboard, the new board is a drop-in replacement. But if you don't have a Framework Laptop you can also use the mainboard as a standalone computer: Framework sells a $39 Cooler Master case that effectively turns its mainboards into mini desktop computers. The RISC-V Mainboard comes from a partnership between Framework and DeepComputing, the Chinese company behind the DC-ROMA laptops, which were some of the first notebook computers to ship with RISC-V processors. The board features a StarFive JH7110 processor, which is a 1.5 GHz quad-core chip featuring SiFive U74 RISC-V CPU cores and Imagination BXE-4-32 graphics, 8GB of onboard RAM, and a 64GB SD card for storage (there's also support for an optional eMMC module, but you'll need to bring your own). Since the board is designed to fit in existing laptop frames, it's the same size and shape as AMD or Intel models and has four USB ports in the same locations. But these ports are a little less versatile than the ones you might find on other Framework Laptop 13 Mainboards [...]. There's also a 3.5mm audio jack. You can check out the new board via the Framework Marketplace. Further reading: Late last year, Framework CEO Nirav Patel delivered one of the best live demos we've ever seen at a tech conference -- modifying a Framework Laptop from x86 to RISC-V live on stage. Read more of this story at Slashdot.
$42 Billion Broadband Grant Program May Scrap Biden Admin's Preference For Fiber
An anonymous reader quotes a report from Ars Technica: US Senator Ted Cruz (R-Texas) has been demanding an overhaul of a $42.45 billion broadband deployment program, and now his telecom policy director has been chosen to lead the federal agency in charge of the grant money. "Congratulations to my Telecom Policy Director, Arielle Roth, for being nominated to lead NTIA," Cruz wrote last night, referring to President Trump's pick to lead the National Telecommunications and Information Administration. Roth's nomination is pending Senate approval. Roth works for the Senate Commerce Committee, which is chaired by Cruz. "Arielle led my legislative and oversight efforts on communications and broadband policy with integrity, creativity, and dedication," Cruz wrote. Shortly after Trump's election win, Cruz called for an overhaul of the Broadband Equity, Access, and Deployment (BEAD) program, which was created by Congress in November 2021 and is being implemented by the NTIA. Biden-era leaders of the NTIA developed rules for the program and approved initial funding plans submitted by every state and territory, but a major change in approach could delay the distribution of funds. Cruz previously accused the NTIA of "technology bias" because the agency prioritized fiber over other types of technology. He said Congress would review BEAD for "imposition of statutorily-prohibited rate regulation; unionized workforce and DEI labor requirements; climate change assessments; excessive per-location costs; and other central planning mandates." Roth criticized the BEAD implementation at a Federalist Society event in June 2024. "Instead of prioritizing connecting all Americans who are currently unserved to broadband, the NTIA has been preoccupied with attaching all kinds of extralegal requirements on BEAD and, to be honest, a woke social agenda, loading up all kinds of burdens that deter participation in the program and drive up costs," she said. Municipal broadband networks and fiber networks in general could get less funding under the new plans. Roth is "expected to change the funding conditions that currently include priority access for government-owned networks" and "could revisit decisions like the current preference for fiber," Bloomberg reported, citing people familiar with the matter. Congress defined priority broadband projects under BEAD as those that "ensure that the network built by the project can easily scale speeds over time to meet the evolving connectivity needs of households and businesses; and support the deployment of 5G, successor wireless technologies, and other advanced services." The Biden NTIA determined that only end-to-end fiber-optic architecture meets these criteria. "End-to-end fiber networks can be updated by replacing equipment attached to the ends of the fiber-optic facilities, allowing for quick and relatively inexpensive network scaling as compared to other technologies. Moreover, new fiber deployments will facilitate the deployment and growth of 5G and other advanced wireless services, which rely extensively on fiber for essential backhaul," the Biden NTIA said (PDF). Read more of this story at Slashdot.
Red Hat Plans to Add AI to Fedora and GNOME
In his post about the future of Fedora Workstation, Christian F.K. Schaller discusses how the Red Hat team plans to integrate AI with IBM's open-source Granite engine to enhance developer tools, such as IDEs, and create an AI-powered Code Assistant. He says the team is also working on streamlining AI acceleration in Toolbx and ensuring Fedora users have access to tools like RamaLama. From the post: One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points. "I'm still not sure how I feel about this approach," writes designer/developer and blogger, Bradley Taunt. "While IBM Granite is an open source model, I still don't enjoy so much artificial 'intelligence' creeping into core OS development. This also isn't something optional on the end-user's side, like a desktop feature or package. This sounds like it's going to be built directly into the core system." "Red Hat has been pushing hard towards AI and my main concern is having this influence other operating system dev teams. Luckily things seem AI-free in BSD land. For now, at least." Read more of this story at Slashdot.
Amazon, King of Online Retail, Can't Seem To Make Its Physical Stores Work
Amazon's brick-and-mortar expansion has faltered, WSJ reported Tuesday, as the e-commerce giant plans to close its Amazon Go store in Woodland Hills, California, shrinking the cashierless convenience store chain to 16 locations across four states, down from roughly twice that number in early 2023. The company is pivoting to license its "Just Walk Out" technology, now used by more than 200 retailers including colleges and airports, while focusing its physical retail strategy on grocery stores through Whole Foods Market and Amazon Fresh locations. Amazon's other physical retail experiments, including bookstores and "4-star" locations selling popular website items, have also struggled.Read more of this story at Slashdot.
Cruise To Slash Workforce By Nearly 50% After GM Cuts Funding To Robotaxi Operations
Autonomous vehicle company Cruise will lay off about half of its 2,100 employees and remove several top executives, including CEO Marc Whitten, as parent company General Motors shifts away from robotaxi development to focus on personal autonomous vehicles. The cuts come two months after GM said it would stop funding Cruise's robotaxi program to save $1 billion annually. Affected workers will receive severance packages including eight weeks of pay and benefits through April. The restructuring follows an October 2023 incident where a Cruise vehicle dragged a pedestrian, leading to the suspension of its permits. Read more of this story at Slashdot.
Panasonic To Cut Costs To Support Shift Into AI
Panasonic will cut its costs, restructure underperforming units and revamp its workforce as it pivots toward AI data centers and away from its consumer electronics roots, the company said on Tuesday. The Japanese conglomerate aims to boost profits by 300 billion yen ($1.93 billion) by March 2029, partly by consolidating production and logistics operations. Bloomberg reports that CEO Yuki Kusumi has declined to confirm if the company would divest its TV business but said alternatives were being considered. The Tesla battery supplier plans to integrate AI across operations through a partnership with Anthropic, targeting growth in components for data centers.Read more of this story at Slashdot.
Americans Kiss Job Hopping Goodbye
Americans quit 39.6 million jobs in 2024, an 11% drop from 2023 and 22% below the 2022 peak, Labor Department data showed Tuesday, signaling an end to the post-pandemic job-switching frenzy. The monthly quit rate fell below pre-pandemic levels as workers faced diminishing options in a cooling labor market. Available positions per unemployed worker dropped to 1.1 from 2 in March 2022, while hiring declined to a monthly average of 3.5% in 2024 from 4.4% in 2021. Total hiring fell to 66 million in 2024 from 71 million in 2023, though the job market remained stable. The unemployment rate held at 4.1%, with economists expecting steady job growth in Friday's upcoming labor report. The Conference Board's latest survey showed fewer respondents viewing jobs as plentiful compared to the early 2020s, with more reporting difficulties finding work.Read more of this story at Slashdot.
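As a quick sanity check on the percentages above, the 2023 and 2022-peak quit totals implied by the 39.6 million figure can be backed out directly. This is a sketch using only the numbers in the summary, so the results are approximations rather than Labor Department data.

```python
# Back out the approximate 2023 and 2022-peak quit totals implied by the
# summary: 39.6 million quits in 2024 was an 11% drop from 2023 and 22%
# below the 2022 peak. These are rough inversions, not official figures.

quits_2024 = 39.6e6

quits_2023 = quits_2024 / (1 - 0.11)   # roughly 44.5 million
quits_2022 = quits_2024 / (1 - 0.22)   # roughly 50.8 million

print(f"Implied 2023 quits: {quits_2023 / 1e6:.1f} million")
print(f"Implied 2022 peak:  {quits_2022 / 1e6:.1f} million")
```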
Microsoft Quietly Makes It Harder To Install Windows 11 on Old PCs Ahead of Windows 10's End of Support
Microsoft has intensified efforts to block unsupported Windows 11 installations, removing documentation about bypassing system requirements and flagging third-party workaround tools as potential malware. The move comes as Windows 10 approaches end of support in October 2025, when users must either continue without updates, upgrade to Windows 11, or purchase new hardware compatible with Windows 11's TPM 2.0 requirement. Microsoft Defender now identifies Flyby11, a popular tool for installing Windows 11 on incompatible devices, as "PUA:Win32/Patcher." Users are also reporting that unsupported Windows 11 installations are already facing restrictions, with some machines unable to receive major updates. Microsoft has also removed text from its "Ways to install Windows 11" page that had provided instructions for bypassing TPM 2.0 requirements through registry key modifications. The removed section included technical details for users who acknowledged and accepted the risks of installing Windows 11 on unsupported hardware.Read more of this story at Slashdot.
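For context on what "registry key modifications" refers to, the now-removed Microsoft guidance centered on a single registry value that relaxes the TPM 2.0 and CPU checks during an upgrade. The sketch below sets that value with Python's winreg module; the key path and value name are reproduced from memory of that guidance and should be treated as an assumption, it must run as Administrator on Windows, and flipping it on unsupported hardware carries exactly the risks Microsoft warns about.

```python
# Sketch of the registry tweak Microsoft's removed guidance described for
# allowing Windows 11 upgrades on PCs that fail the TPM 2.0 / CPU checks.
# Key path and value name are recalled from that guidance (assumption);
# Windows only, requires Administrator, and use only if you accept the risks.

import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"
VALUE_NAME = "AllowUpgradesWithUnsupportedTPMOrCPU"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

print(f"Set HKLM\\{KEY_PATH}\\{VALUE_NAME} = 1")
```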
Lung Cancer Diagnoses On the Rise Among Never-Smokers Worldwide
The proportion of people being diagnosed with lung cancer who have never smoked is increasing, with air pollution an "important factor," the World Health Organization's cancer agency has said. From a report: Lung cancer in people who have never smoked cigarettes or tobacco is now estimated to be the fifth highest cause of cancer deaths worldwide, according to the International Agency for Research on Cancer (IARC). Lung cancer in never-smokers is also occurring almost exclusively as adenocarcinoma, which has become the most dominant of the four main subtypes of the disease in both men and women globally, the IARC said. About 200,000 cases of adenocarcinoma were associated with exposure to air pollution in 2022, according to the IARC study published in the Lancet Respiratory Medicine journal. The largest burden of adenocarcinoma attributable to air pollution was found in east Asia, particularly China, the study found.Read more of this story at Slashdot.
Chris Anderson Is Giving TED Away To Whoever Has the Best Idea for Its Future
Chris Anderson, who transformed TED from a small conference into a global platform for sharing ideas, announced today he's stepping down after 25 years at the helm. The nonprofit's leader is seeking new ownership through an unusual open call for proposals. Anderson told WIRED he wants potential buyers -- whether universities, philanthropic organizations, media companies or tech firms -- to demonstrate both vision and financial capacity. The organization, which charges $12,500 for its flagship conference seats, maintains $25 million in cash reserves and reports a $100 million break-even balance sheet. The future owner must commit to keeping the conference running and maintaining TED's practice of sharing talks for free.Read more of this story at Slashdot.
Microsoft Veteran Ponders World Where Toothbrushes Need Reboots
New submitter mastazi writes: In his latest post, veteran Microsoft developer Raymond Chen reflects on what it means to live in a world where you might need to reboot your toothbrush, or perform a firmware update on your shoes! Read more of this story at Slashdot.
China Launches Antitrust Investigation Into Google
China said Tuesday it has launched an antitrust investigation into Google, part of a swift retaliation after U.S. President Donald Trump imposed a 10% tariff on Chinese goods. From a report: The probe by China's State Administration for Market Regulation will examine alleged monopolistic practices by the U.S. tech giant, which has had its search and internet services blocked in China since 2010 but maintains operations there primarily focused on advertising. Read more of this story at Slashdot.
Popular Linux Orgs Freedesktop, Alpine Linux Are Scrambling For New Web Hosting
An anonymous reader quotes a report from Ars Technica: In what is becoming a sadly regular occurrence, two popular free software projects, X.org/Freedesktop.org and Alpine Linux, need to rally some of their millions of users so that they can continue operating. Both services have largely depended on free server resources provided by Equinix (formerly Packet.net) and its Metal division for the past few years. Equinix announced recently that it was sunsetting its bare-metal sales and services, or renting out physically distinct single computers rather than virtualized and shared hardware. As reported by the Phoronix blog, both free software organizations have until the end of April to find and fund new hosting, with some fairly demanding bandwidth and development needs. An issue ticket on Freedesktop.org's GitLab repository provides the story and the nitty-gritty needs of that project. Both the X.org foundation (home of the 40-year-old window system) and Freedesktop.org (a shared base of specifications and technology for free software desktops, including Wayland and many more) used Equinix's donated space. [...] Alpine Linux, a small, security-minded distribution used in many containers and embedded devices, also needs a new home quickly. As detailed in its blog, Alpine Linux uses about 800TB of bandwidth each month and also needs continuous integration runners (or separate job agents), as well as a development box. Alpine states it is seeking co-location space and bare-metal servers near the Netherlands, though it will consider virtual machines if bare metal is not feasible.Read more of this story at Slashdot.
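To put Alpine Linux's stated 800TB-per-month bandwidth figure in hosting terms, a quick conversion to average sustained throughput helps gauge the kind of uplink a new host would need. This is a sketch using only that one number, decimal units, and a 30-day month, so treat the result as an order-of-magnitude estimate.

```python
# Convert Alpine Linux's ~800 TB/month of bandwidth into an average
# sustained throughput. Uses decimal units (1 TB = 10^12 bytes) and a
# 30-day month; peak demand would of course be higher than the average.

terabytes_per_month = 800
seconds_per_month = 30 * 24 * 3600

bytes_per_second = terabytes_per_month * 1e12 / seconds_per_month
gigabits_per_second = bytes_per_second * 8 / 1e9

print(f"Average throughput: {gigabits_per_second:.1f} Gbit/s")  # about 2.5 Gbit/s
```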
AI Systems With 'Unacceptable Risk' Are Now Banned In the EU
AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely. Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."Read more of this story at Slashdot.
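The penalty rule quoted above is a simple "whichever is greater" maximum, which a one-liner makes explicit. The ~$36 million flat amount is the article's dollar conversion of the statutory figure, so the exact number is approximate.

```python
# Compute the maximum exposure under the EU AI Act rule described above:
# up to ~$36 million or 7% of prior-year annual revenue, whichever is greater.

def max_ai_act_fine(annual_revenue_usd: float) -> float:
    """Upper bound on the fine for prohibited AI practices (article's figures)."""
    flat_cap = 36_000_000          # approximate dollar conversion from the report
    revenue_cap = 0.07 * annual_revenue_usd
    return max(flat_cap, revenue_cap)

# Example: a company with $2 billion in prior-year revenue.
print(f"${max_ai_act_fine(2_000_000_000):,.0f}")  # 7% of revenue: $140,000,000
```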
Salesforce Cutting 1,000 Roles While Hiring Salespeople for AI
Salesforce is cutting jobs as its latest fiscal year gets underway, Bloomberg reported Monday, citing a person familiar with the matter, even as the company simultaneously hires workers to sell new artificial intelligence products. From the report: More than 1,000 roles will be affected, according to the person, who asked not to be identified because the information is private. Displaced workers will be able to apply for other jobs internally, the person added. Salesforce had nearly 73,000 workers as of January 2024, when that fiscal year ended.Read more of this story at Slashdot.
CERN's Mark Thomson: AI To Revolutionize Fundamental Physics
An anonymous reader quotes a report from The Guardian: Advanced artificial intelligence is to revolutionize fundamental physics and could open a window on to the fate of the universe, according to Cern's next director general. Prof Mark Thomson, the British physicist who will assume leadership of Cern on 1 January 2026, says machine learning is paving the way for advances in particle physics that promise to be comparable to the AI-powered prediction of protein structures that earned Google DeepMind scientists a Nobel prize in October. At the Large Hadron Collider (LHC), he said, similar strategies are being used to detect incredibly rare events that hold the key to how particles came to acquire mass in the first moments after the big bang and whether our universe could be teetering on the brink of a catastrophic collapse. "These are not incremental improvements," Thomson said. "These are very, very, very big improvements people are making by adopting really advanced techniques." "It's going to be quite transformative for our field," he added. "It's complex data, just like protein folding -- that's an incredibly complex problem -- so if you use an incredibly complex technique, like AI, you're going to win." The intervention comes as Cern's council is making the case for the Future Circular Collider, which at 90km circumference would dwarf the LHC. Some are skeptical given the lack of blockbuster results at the LHC since the landmark discovery of the Higgs boson in 2012 and Germany has described the $17 billion proposal as unaffordable. But Thomson said AI has provided fresh impetus to the hunt for new physics at the subatomic scale -- and that major discoveries could occur after 2030 when a major upgrade will boost the LHC's beam intensity by a factor of ten. This will allow unprecedented observations of the Higgs boson, nicknamed the God particle, which grants mass to other particles and binds the universe together. Thomson is now confident that the LHC can measure Higgs boson self-coupling, a key factor in understanding how particles gained mass after the Big Bang and whether the Higgs field is in a stable state or could undergo a future transition. According to Thomson: "It's a very deep fundamental property of the universe, one we don't fully understand. If we saw the Higgs self-coupling being different from our current theory, that would be another massive, massive discovery. And you don't know until you've made the measurement." The report also notes how AI is being used in "every aspect of the LHC operation." Dr Katharine Leney, who works on the LHC's Atlas experiment, said: "When the LHC is colliding protons, it's making around 40m collisions a second and we have to make a decision within a microsecond ... which events are something interesting that we want to keep and which to throw away. We're already now doing better with the data that we've collected than we thought we'd be able to do with 20 times more data ten years ago. So we've advanced by 20 years at least. A huge part of this has been down to AI." Generative AI is also being used to look for and even produce dark matter via the LHC. "You can start to ask more complex, open-ended questions," said Thomson. "Rather than searching for a particular signature, you ask the question: 'Is there something unexpected in this data?'" Read more of this story at Slashdot.
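Leney's description of deciding within a microsecond which of roughly 40 million collisions per second to keep is, in software terms, a streaming classifier sitting in front of the data pipeline. The sketch below illustrates that pattern only; the features, weights, and threshold are invented stand-ins, not CERN's trigger code, and real triggers run in custom hardware and far more sophisticated models.

```python
# Toy illustration of the trigger problem described above: score each
# collision event quickly and keep only the "interesting" ones. The model,
# features, and threshold here are invented placeholders.

import random

def score_event(event):
    """Stand-in 'classifier': weight a few made-up event features."""
    return 0.6 * event["total_energy"] + 0.4 * event["n_leptons"]

def trigger(events, threshold=0.8):
    """Keep events whose score exceeds the threshold; discard the rest."""
    return [e for e in events if score_event(e) > threshold]

# Stand-in for a burst of collision events with random feature values.
events = [{"total_energy": random.random(), "n_leptons": random.randint(0, 3)}
          for _ in range(1_000_000)]

kept = trigger(events)
print(f"Kept {len(kept)} of {len(events)} events "
      f"({100 * len(kept) / len(events):.2f}%)")
```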
Bonobos Can Tell When They Know Something You Don't
A study found that bonobos can recognize when someone lacks knowledge they possess and take action to help, demonstrating a basic form of theory of mind. This suggests that the ability to understand others' perspectives is evolutionarily older than previously thought and may have existed in our common ancestors to enhance cooperation and coordination. New Scientist reports: [W]e have been missing clear evidence from controlled settings that primates can track a perspective that differs from their own and then act upon it, says Luke Townrow at Johns Hopkins University in Maryland. To investigate this, Townrow and Christopher Krupenye, also at Johns Hopkins University, tested if three male bonobos at the Ape Initiative research centre in Iowa could identify ignorance in someone they were trying to cooperate with, and then gesture to them to help solve the task. On a table between the bonobo and an experimenter were three upturned plastic cups. A second researcher placed a barrier between the experimenter and the cups, then hid a treat, like a juicy grape, under one of them. In one version of the experiment, the "knowledge condition," a window in the barrier allowed the experimenter to watch where the treat was placed. In the "ignorance condition," their view was completely blocked. If the experimenter found the food, they would give it to the bonobo, providing a motivation for the apes to share what they knew. Townrow and Krupenye looked at whether the ape pointed at the cup, and how quickly they pointed, after the barrier had been removed over 24 trials for each condition. They found that, on average, the bonobos took 1.5 seconds less time to point and pointed in approximately 20 per cent more trials in the ignorance condition. "This shows that they can actually take action when they realize that somebody has a different perspective from their own," says Krupenye. It appears that bonobos understand features of what others are thinking that researchers have historically assumed they didn't comprehend, he adds. The findings have been published in the journal PNAS.Read more of this story at Slashdot.
Senator Hawley Proposes Jail Time For People Who Download DeepSeek
Senator Josh Hawley has introduced a bill that would criminalize the import, export, and collaboration on AI technology with China. What this means is that "someone who knowingly downloads a Chinese developed AI model like the now immensely popular DeepSeek could face up to 20 years in jail, a million dollar fine, or both, should such a law pass," reports 404 Media. From the report: Hawley introduced the legislation, titled the Decoupling America's Artificial Intelligence Capabilities from China Act, on Wednesday of last week. "Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States," Senator Hawley said in a statement. "America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation." Hawley's statement explicitly says that he introduced the legislation because of the release of DeepSeek, an advanced AI model that's competitive with its American counterparts, and which its developers claimed was made for a fraction of the cost and without access to as many and as advanced of chips, though these claims are unverified. Hawley's statement called DeepSeek "a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting." Hawley's statement says the goal of the bill is to "prohibit the import from or export to China of artificial intelligence technology," "prohibit American companies from conducting AI research in China or in cooperation with Chinese companies," and "prohibit U.S. companies from investing money in Chinese AI development." Read more of this story at Slashdot.
TSA's Airport Facial-Recognition Tech Faces Audit Probe
The Department of Homeland Security's Inspector General has launched an audit of the TSA's use of facial recognition technology at U.S. airports following concerns from lawmakers and privacy advocates. The Register reports: Homeland Security Inspector General Joseph Cuffari notified a bipartisan group of US Senators who had asked for such an investigation last year that his office has announced an audit of TSA facial recognition technology in a letter [PDF] sent to the group Friday. "We have reviewed the concerns raised in your letter as part of our work planning process," said Cuffari, a Trump appointee who survived the recent purge of several Inspectors General. "[The audit] will determine the extent to which TSA's facial recognition and identification technologies enhance security screening to identify persons of interest and authenticate flight traveler information while protecting passenger privacy," Cuffari said. The letter from the Homeland Security OIG was addressed to Senator Jeff Merkley (D-OR), who co-led the group of 12 Senators who asked for an inspection of TSA facial recognition in November last year. "Americans don't want a national surveillance state, but right now, more Americans than ever before are having their faces scanned at the airport without being able to exercise their right to opt-out," Merkley said in a statement accompanying Cuffari's letter. "I have long sounded the alarm about the TSA's expanding use of facial recognition ... I'll keep pushing for strong Congressional oversight." [...] While Cuffari's office was light on details of what would be included in the audit, the November letter from the Senators was explicit in its list of requests. They asked for the systems to be evaluated via red team testing, with a specific investigation into effectiveness - whether it reduced screening delays, stopped known terrorists, led to workforce cuts, or amounted to little more than security theater with errors.Read more of this story at Slashdot.
Judge Denies Apple's Attempt To Intervene In Google Search Antitrust Trial
A US District Court judge denied Apple's emergency request to halt the Google Search monopoly trial, ruling that Apple failed to show sufficient grounds for a stay. The Verge reports: Apple said last week that it needs to be involved in the Google trial because it does not want to lose "the ability to defend its right to reach other arrangements with Google that could benefit millions of users and Apple's entitlement to compensation for distributing Google search to its users." The remedies phase of the trial is set for April, and lawyers for the Department of Justice have argued that Google should be forced to sell Chrome, with a possibility of spinning off Android if necessary. While Google will still appeal the decision, the company's proposed remedies focus on undoing its licensing deals that bundle apps and services together. "Because Apple has not satisfied the 'stringent requirements' for obtaining the 'extraordinary relief' of a stay pending appeal, its motion is denied," states Judge Mehta's order. Mehta explains that Apple "has not established a likelihood of success on the merits" for the stay. That includes a lack of clear evidence on how Apple will suffer "certain and great" harm.Read more of this story at Slashdot.
Anthropic Asks Job Applicants Not To Use AI In Job Applications
An anonymous reader quotes a report from 404 Media: Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won't use an AI assistant to help write their application. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree." Anthropic released Claude, an AI assistant that's especially good at conversational writing, in 2023. This question is in almost all of Anthropic's nearly 150 currently-listed roles, but is not in some technical roles, like mobile product designer. It's included in everything from software engineer roles to finance, communications, and sales jobs at the company. The field was spotted by Simon Willison, an open source developer. The question shows Anthropic trying to get around a problem it's helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It's also a moot question, as Anthropic and its competitors have created AI models so indistinguishable from human speech as to be nearly undetectable.Read more of this story at Slashdot.
Microsoft Paint Gets a Copilot Button For Gen AI Features
A new update is being rolled out to Windows 11 insiders (Build 26120.3073) that introduces a Copilot button in Microsoft Paint. PCWorld reports: Clicking the Copilot button will expand a drop-down menu with all the generative AI features: Cocreator and Image Creator (AI art based on what you've drawn or text prompts), Generative Erase (AI removal of unwanted stuff from images), and Remove Background. Note that these generative AI features have been in Microsoft Paint for some time, but this quick-access Copilot button is a nice time-saver and productivity booster if you use them a lot.Read more of this story at Slashdot.
NetChoice Sues To Block Maryland's Kids Code, Saying It Violates the First Amendment
NetChoice has filed (PDF) its 10th lawsuit challenging state internet regulations, this time opposing Maryland's Age-Appropriate Design Code Act. The Verge's Lauren Feiner reports: NetChoice has become one of the fiercest -- and most successful -- opponents of age verification, moderation, and design code laws, all of which would put new obligations on tech platforms and change how users experience the internet. [...] NetChoice's latest suit opposes the Maryland Age-Appropriate Design Code Act, a rule that echoes a California law of a similar name. In the California litigation, NetChoice notched a partial win in the Ninth Circuit Court of Appeals, which upheld the district court's decision to block a part of the law requiring platforms to file reports about their services' impact on kids. (It sent another part of the law back to the lower court for further review.) A similar provision in Maryland's law is at the center of NetChoice's complaint. The group says that Maryland's reporting requirement lets regulators subjectively determine the "best interests of children," inviting "discriminatory enforcement." The reporting requirement on tech companies essentially mandates them "to disparage their services and opine on far-ranging and ill-defined harms that could purportedly arise from their services' 'design' and use of information," NetChoice alleges. NetChoice points out that both California and Maryland have passed separate online privacy laws, which NetChoice Litigation Center director Chris Marchese says shows that "lawmakers know how to write laws to protect online privacy when what they want to do is protect online privacy." Supporters of the Maryland law say legislators learned from California's challenges and "optimized" their law to avoid questions about speech, according to Tech Policy Press. In a blog analyzing Maryland's approach, Future of Privacy Forum points out that the state made some significant changes from California's version -- such as avoiding an "express obligation" to determine users' ages and defining the "best interests of children." The NetChoice challenge will test how well those changes can hold up to First Amendment scrutiny. NetChoice has consistently maintained that even well-intentioned attempts to protect kids online are likely to backfire. Though the Maryland law does not explicitly require the use of specific age verification tools, Marchese says it essentially leaves tech platforms with a no-win decision: collect more data on users to determine their ages and create varied user experiences or cater to the lowest common denominator and self-censor lawful content that might be considered inappropriate for its youngest users. And similar to its arguments in other cases, Marchese worries that collecting more data to identify users as minors could create a "honey pot" of kids' information, creating a different problem in attempting to solve another. Read more of this story at Slashdot.
Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions
An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says. Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies and open source ones. OpenAI's GPT-4o and DALL-E 3; Anthropic's Claude 3.5; and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data. [...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19 page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts." Read more of this story at Slashdot.
Why Even Physicists Still Don't Understand Quantum Theory 100 Years On
A century after quantum mechanics revolutionized physics, scientists still cannot agree on how the theory fundamentally works, despite its tremendous success in explaining natural phenomena and enabling modern technologies. The theory's central puzzle remains unresolved: the way quantum systems are described mathematically differs from what scientists observe when measuring them. This has led to competing interpretations about whether quantum states represent physical reality or are merely tools for calculating probabilities. As researchers debate these foundational questions, quantum mechanics has enabled breakthroughs in particle physics, chemistry, and computing. It accurately predicts phenomena from the behavior of atoms to the properties of the Higgs boson, and underlies technologies like quantum computers and ultra-precise measurement devices. The field's inability to reach consensus on its foundations hasn't hindered its practical applications. Scientists continue to develop new quantum technologies even as they grapple with deep questions about measurement, locality, and the nature of reality that have persisted since Einstein and Bohr's famous debates in the 1920s and 1930s.Read more of this story at Slashdot.
Trump Orders Creation of US Sovereign Wealth Fund, Says It Could Buy TikTok
U.S. President Donald Trump signed an executive order on Monday directing the U.S. Treasury and Commerce Departments to create a sovereign wealth fund and said it may purchase TikTok. From a report: "We're going to stand this thing up within the next 12 months. We're going to monetize the asset side of the U.S. balance sheet for the American people," Treasury Secretary Scott Bessent told reporters. "There'll be a combination of liquid assets, assets that we have in this country as we work to bring them out for the American people." Trump had previously floated such a government investment vehicle as a presidential candidate, saying it could fund "great national endeavors" like infrastructure projects such as highways and airports, manufacturing, and medical research. Details on how exactly the fund would operate and be financed were not immediately available, but Trump previously said it could be funded by "tariffs and other intelligent things." Typically such funds rely on a country's budget surplus to make investments, but the U.S. operates at a deficit.Read more of this story at Slashdot.
Anthropic Makes 'Jailbreak' Advance To Stop AI Models Producing Harmful Results
AI startup Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways to protect against dangers posed by the cutting-edge technology. From a report: In a paper released on Monday, the San Francisco-based startup outlined a new system called "constitutional classifiers." It is a model that acts as a protective layer on top of large language models such as the one that powers Anthropic's Claude chatbot, and can monitor both inputs and outputs for harmful content. The development by Anthropic, which is in talks to raise $2 billion at a $60 billion valuation, comes amid growing industry concern over "jailbreaking" -- attempts to manipulate AI models into generating illegal or dangerous information, such as producing instructions to build chemical weapons. Other companies are also racing to deploy measures to protect against the practice, in moves that could help them avoid regulatory scrutiny while convincing businesses to adopt AI models safely. Microsoft introduced "prompt shields" last March, while Meta introduced a prompt guard model in July last year; researchers swiftly found ways to bypass it, but those flaws have since been fixed.Read more of this story at Slashdot.
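The report doesn't include implementation details, but the basic shape of such a guard layer, one classifier screening the prompt on the way in and the completion on the way out, can be sketched roughly as follows. This is a minimal illustration only; classify_harm() and generate() are hypothetical stand-ins, not Anthropic's constitutional classifiers or the Claude API.

```python
# Minimal sketch of an input/output guard layer around an LLM.
# classify_harm() and generate() are hypothetical placeholders.

REFUSAL = "I can't help with that request."

def classify_harm(text: str) -> float:
    """Return a harm score in [0, 1]. A real system would use a trained
    classifier derived from a written 'constitution' of allowed and
    forbidden content, not a keyword check."""
    banned_topics = ("chemical weapon", "bioweapon")
    return 1.0 if any(t in text.lower() for t in banned_topics) else 0.0

def generate(prompt: str) -> str:
    """Placeholder for the underlying large language model."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    # Screen the incoming prompt first...
    if classify_harm(prompt) >= threshold:
        return REFUSAL
    completion = generate(prompt)
    # ...then screen the model's output before returning it.
    if classify_harm(completion) >= threshold:
        return REFUSAL
    return completion

if __name__ == "__main__":
    print(guarded_generate("Summarise the history of the Grammys."))
```

The point of the layered design is that the underlying model never has to be retrained to tighten the policy; only the classifier on either side of it changes.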
Cloudflare Rolls Out Digital Tracker To Combat Fake Images
Cloudflare, a major web infrastructure company, will now track and verify the authenticity of images across its network through Content Credentials, a digital signature system that documents an image's origin and editing history. The technology, developed by Adobe's Content Authenticity Initiative, embeds metadata showing who created an image, when it was taken, and any subsequent modifications - including those made by AI tools. Major news organizations including the BBC, Wall Street Journal and New York Times have already adopted the system. The feature is available immediately through a single toggle in Cloudflare Images settings. Users can verify an image's authenticity through Adobe's web tool or Chrome extension.Read more of this story at Slashdot.
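Content Credentials are defined by the C2PA specification; the actual manifest format and signing scheme are more involved than a news blurb can show, but the core idea, a provenance record cryptographically bound to the exact image bytes, looks conceptually like the sketch below. Field names are simplified for illustration and do not match the real C2PA schema, and a bare hash stands in for what is in practice a signed claim from a trusted issuer.

```python
# Rough illustration of the idea behind Content Credentials: a provenance
# manifest travels with the image and is bound to its exact bytes.
# Field names are simplified and do NOT match the real C2PA schema.
import hashlib
import json

manifest = {
    "creator": "Jane Photographer",
    "captured_at": "2025-02-03T14:05:00Z",
    "edits": [
        {"tool": "Photo editor", "action": "crop"},
        {"tool": "Generative fill", "action": "ai_edit"},  # AI involvement is recorded too
    ],
}

def fingerprint(image_bytes: bytes, manifest: dict) -> str:
    """Bind the manifest to the pixels by hashing both together.
    A real implementation uses cryptographic signatures from a trusted
    issuer rather than a bare hash."""
    payload = image_bytes + json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

image_bytes = b"...raw image data..."
print(fingerprint(image_bytes, manifest))
```

Verification then amounts to checking that the signature is valid and that the recorded fingerprint still matches the image, which is roughly what Adobe's verification tool and Chrome extension do on the user's behalf.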
Levels of Microplastics in Human Brains May Be Rapidly Rising, Study Suggests
The exponential rise in microplastic pollution over the past 50 years may be reflected in increasing contamination in human brains, according to a new study. From a report: It found a rising trend in micro- and nanoplastics in brain tissue from dozens of postmortems carried out between 1997 and 2024. The researchers also found the tiny particles in liver and kidney samples. The human body is widely contaminated by microplastics. They have also been found in blood, semen, breast milk, placentas and bone marrow. The impact on human health is largely unknown, but they have been linked to strokes and heart attacks. The scientists also found that the concentration of microplastics was about six times higher in brain samples from people who had dementia. However, the damage dementia causes in the brain would be expected to increase concentrations, the researchers said, meaning no causal link should be assumed. "Given the exponentially rising environmental presence of micro- and nanoplastics, this data compels a much larger effort to understand whether they have a role in neurological disorders or other human health effects," said the researchers, who were led by Prof Matthew Campen at the University of New Mexico in the US.Read more of this story at Slashdot.
OpenAI's New Trademark Application Hints at Humanoid Robots, Smart Jewelry, and More
OpenAI has filed an application with the U.S. Patent and Trademark Office to trademark hardware products under its brand name, signaling potential expansion into consumer devices. The filing covers AI-assisted headsets, smart wearables and humanoid robots with communication capabilities. CEO Sam Altman told The Elec on Sunday that OpenAI plans to develop AI hardware through multiple partnerships, though he estimated prototypes would take "several years" to complete.Read more of this story at Slashdot.
New Bill Aims To Block Foreign Pirate Sites in the US
U.S. Representative Zoe Lofgren has introduced a bill that would allow courts to block access to foreign websites primarily engaged in copyright infringement. The Foreign Anti-Digital Piracy Act would enable rightsholders to obtain injunctions requiring large Internet service providers and DNS resolvers to block access to pirate sites. The bill marks a shift from previous site-blocking proposals, notably including DNS providers like Google and Cloudflare with annual revenues above $100 million. Motion Picture Association CEO Charles Rivkin backed the measure, while consumer group Public Knowledge criticized it as "censorious." The legislation requires court review and due process before any blocking orders can be issued. Sites would have 30 days to contest preliminary orders.Read more of this story at Slashdot.
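Mechanically, resolver-level blocking means a DNS service simply declines to answer for names on a court-ordered list while resolving everything else as usual. A bare-bones conceptual sketch follows; the blocklist entries are invented, and a real resolver answers DNS query packets rather than Python calls.

```python
# Conceptual sketch of resolver-level site blocking: names on a
# court-ordered blocklist get no answer, everything else resolves
# normally. The blocklist entries here are invented examples.
import socket

BLOCKLIST = {"pirate-site.example", "streams.example"}

def resolve(hostname: str) -> list[str] | None:
    """Return IP addresses for hostname, or None if it is blocked."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return None  # a real resolver would answer NXDOMAIN or serve a block notice
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("pirate-site.example"))  # None -> blocked
print(resolve("slashdot.org"))         # list of real addresses
```

The policy debate in the bill is largely about who must maintain such lists, under what court oversight, and how quickly sites can contest being added to them.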
AI Won The Beatles a Grammy 55 Years After They Broke Up
The Beatles' final song "Now and Then," featuring John Lennon's AI-restored vocals from a 1970s demo, has won the Grammy for Best Rock Performance. Paul McCartney and Ringo Starr completed the track in 2023 using machine learning to isolate Lennon's voice from the original piano recording.Read more of this story at Slashdot.
Meta's Investment in Virtual Reality on Track To Top $100 Billion
Meta's investment in virtual and augmented reality is set to exceed $100 billion this year as CEO Mark Zuckerberg declares 2025 a "defining year" for its smart glasses ambitions. The company invested $19.9 billion in its Reality Labs division last year, according to its annual report, bringing total spending on VR and AR development to over $80 billion since 2014. The unit, which develops Ray-Ban Meta smart glasses and Quest VR headsets, sold 1 million pairs of glasses in 2024 but continues to post losses, according to Financial Times.Read more of this story at Slashdot.
Ubuntu's Dev Discussions Will Move From IRC to Matrix
The blog OMG Ubuntu reports: Ubuntu's key developers have agreed to switch to Matrix as the primary platform for real-time development communications involving the distro. From March, Matrix will replace IRC as the place where critical Ubuntu development conversations, requests, meetings, and other vital chatter must take place... Only the current #ubuntu-devel and #ubuntu-release Libera IRC channels are moving to Matrix, but other Ubuntu development-related channels can choose to move - officially, given some projects were using Matrix over IRC already. As a result, any major requests to/of the key Ubuntu development teams with privileged access can only be actioned if requests are made on Matrix. Canonical-employed Ubuntu developers will be expected to be present on Matrix during working hours... The aim is to streamline organisation, speed up decision making, ensure key developers are reliably reachable, and keep discussions and conversations from fragmenting across multiple platforms... It's hoped that by picking one platform as the 'chosen one', the split in where the distro's development discourse takes place can be reduced and greater transparency restored in how and when decisions are made. IRC remains popular with many Ubuntu developers but its old-school, lo-fi nature is said to be off-putting to newer contributors. They're used to richer real-time chat platforms with more features (like discussion history, search, offline messaging, etc). It's felt this is why many newer developers employed by Canonical prefer to discuss and message through the company's internal Mattermost instance - which isn't publicly accessible. Many Ubuntu teams, flavours, and community chats already take place on Matrix... "End-users aren't directly affected, of course," they point out. But an earlier post on the same blog notes that Matrix "is increasingly ubiquitous in open-source circles. GNOME uses it, KDE embraces it, Linux Mint migrated last year, Mozilla a few years before, and it's already widely used by Ubuntu community members and developers." IRC remains unmatched in many areas but is, rightly or wrongly, viewed as an antiquated communication platform. IRC clients aren't pretty or plentiful, the syntax is obtuse, and support for 'modern' comforts like media sending, read receipts, etc., is lacking. To newer, younger contributors IRC could feel ancient or cumbersome to learn. Though many of IRC's real and perceived shortcomings are surmountable with workarounds, clients, bots, scripts, and so on, support for those varies between channels, clients, servers, and user configurations. Unlike IRC, which is a centralised protocol relying on individual servers, Matrix is federated: it lets users on different servers communicate without friction. Plus, Matrix features encryption, message history, media support, and so on, meeting modern expectations.Read more of this story at Slashdot.
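Part of why "modern" clients are easy to build on Matrix is that its client-server API is plain HTTPS and JSON. A rough sketch of posting a message to a room using the requests library is below; the homeserver, room ID, and access token are placeholders, not real credentials.

```python
# Rough sketch of posting a message over the Matrix client-server API.
# The homeserver, room ID, and access token below are placeholders.
import time
import requests

HOMESERVER = "https://matrix.example.org"
ROOM_ID = "!abcdef:example.org"
ACCESS_TOKEN = "<access token from login>"

def send_text(body: str) -> None:
    txn_id = str(int(time.time() * 1000))  # transaction ID for idempotent retries
    url = (f"{HOMESERVER}/_matrix/client/v3/rooms/{ROOM_ID}"
           f"/send/m.room.message/{txn_id}")
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"msgtype": "m.text", "body": body},
    )
    resp.raise_for_status()
    print("event id:", resp.json().get("event_id"))

if __name__ == "__main__":
    send_text("Hello from a federated homeserver!")
```

Federation comes from homeservers synchronising these events among themselves, so the same message reaches users whose accounts live on entirely different servers.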
Will Cryptomining Facilities Change Into AI Data Centers?
To capitalize on the AI boom, many crypto miners "have begun to repurpose parts of their operations into data centers," reports Reuters, "given they already have most of the infrastructure" (including land and "significant" power resources...). Toronto-based bitcoin miner Bitfarms has enlisted two consultants to explore how it can transform some of its facilities to meet the growing demand for artificial intelligence data centers, it said on Friday... Earlier this month, Riot Platforms launched a review of the potential AI and computing uses for parts of its facility in Navarro County, Texas.Read more of this story at Slashdot.
Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning
Android and Google Play have billions of users, Google wrote in its security blog this week. "However, like any flourishing ecosystem, it also attracts its share of bad actors... That's why every year, we continue to invest in more ways to protect our community." Google's tactics include industry-wide alliances, stronger privacy policies, and "AI-powered threat detection." "As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps." "To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage." Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK." And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]... According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls... 
Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions - Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam. In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.Read more of this story at Slashdot.
Boeing Acquires Spirit AeroSystems, While Boeing's 'Starliner' Unit Gets a New VP
Spirit AeroSystems builds aircraft components, including fuselages and flight deck sections for Boeing, according to Wikipedia. But now Boeing is set to acquire Spirit AeroSystems. The aviation blog Aviation Source News says the price tag was $4.7 billion, and opines that Boeing's move signals "a renewed focus on quality and supply chain stability" as Boeing "addresses lingering concerns surrounding its 737 program." Spirit's recent struggles with quality control and production delays have had a fallout effect for Boeing... By integrating Spirit's operations, Boeing can implement more stringent oversight and ensure consistent manufacturing processes. This move is a direct response to past quality lapses that have plagued the company and damaged its reputation. Beyond quality control, the acquisition also offers Boeing greater control over its supply chain. By bringing a key supplier in-house, Boeing can streamline production, improve coordination, and reduce the risk of future disruptions... Spirit AeroSystems also supplies parts to Airbus, Boeing's main competitor. To address this, a separate agreement is being negotiated for Airbus to acquire Spirit's Airbus-related business. This strategic move ensures that Airbus maintains control over its own supply chain and prevents Boeing from gaining undue influence over its competitor's production. Meanwhile, the vice president leading Boeing's Starliner spacecraft unit "has left his role in the program and been replaced by the company's International Space Station program manager, John Mulholland," Reuters reports, citing a Boeing spokesperson. In its first test mission flying astronauts last summer, Starliner was forced by NASA to leave its crew aboard the ISS and return empty in September over problems with its propulsion system. A panel of senior NASA officials in August had voted to have a Crew Dragon capsule from Elon Musk's SpaceX bring them back instead, deeming Starliner too risky for the astronauts. Paul Hill, a veteran NASA flight director and member of the agency's Aerospace Safety Advisory Panel, said during a quarterly panel meeting on Thursday that NASA and Boeing continue to investigate Starliner's propulsion system. A Boeing spokesperson said on Thursday that the company and NASA have not yet determined what Starliner's next mission will look like, such as whether it will need to repeat its crewed flight test before receiving NASA certification for routine flights.Read more of this story at Slashdot.
OpenAI Holds Surprise Livestream to Announce Multi-Step 'Deep Research' Capability
Just three hours ago, OpenAI made a surprise announcement to their 3.9 million followers on X.com. "Live from Tokyo," they'd be livestreaming... something. Their description of the event was just two words: "Deep Research." UPDATE: The stream has begun, and it's about OpenAI's next "agentic offering." ("OpenAI cares about agents because we believe they're going to transform knowledge work...") "We're introducing a capability called Deep Research... a model that does multi-step research. It discovers content, it synthesizes content, and it reasons about this content." It even asks "clarifying" questions about your prompt to make sure its multi-step research stays on track. Deep Research will be launching in ChatGPT Pro later today, rolling out into other OpenAI products... And OpenAI's site now has an "Introducing Deep Research" page. Its official description? "An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next." Before the livestream began, X.com users shared their reactions to the coming announcement: "It's like DeepSeek, but cleaner." "Deep do do if things don't work out." "Live from Tokyo? Hope this research includes the secret to waking up early!" "Stop trying, we don't trust u." But one X.com user had presciently pointed out that OpenAI has used the phrase "deep research" before. In July 2024, Reuters reported on internal documentation (confirmed with "a person familiar with the matter") code-named "Strawberry" which suggested OpenAI was working on "human-like reasoning skills." How Strawberry works is a tightly kept secret even within OpenAI, the person said. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers. Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry. The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough... OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets. Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence... OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability."Read more of this story at Slashdot.
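OpenAI hasn't published implementation details, but the "discovers, synthesizes, reasons" loop described on the stream matches a now-familiar agent pattern: ask a clarifying question, plan a query, gather sources, repeat, then write up. A highly simplified sketch of that generic pattern is below; ask_llm() and web_search() are hypothetical placeholders, and none of this is OpenAI's actual Deep Research code.

```python
# Simplified sketch of a multi-step research loop: clarify, plan queries,
# gather sources, then synthesize a report. ask_llm() and web_search() are
# hypothetical placeholders for a language model and a search/browse tool;
# this illustrates the generic agent pattern, not OpenAI's implementation.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError

def web_search(query: str, top_k: int = 5) -> list[str]:
    """Placeholder for a search/browse tool returning page excerpts."""
    raise NotImplementedError

def deep_research(question: str, rounds: int = 3) -> str:
    # Step 0: ask a clarifying question before committing to a plan.
    clarification = ask_llm(f"What should be clarified before researching: {question}?")
    notes: list[str] = [f"Clarification needed: {clarification}"]

    for _ in range(rounds):
        # Plan the next query from everything gathered so far (discover).
        query = ask_llm(f"Question: {question}\nNotes so far: {notes}\nNext search query:")
        # Collect new material (synthesize the corpus incrementally).
        notes.extend(web_search(query))

    # Reason over the accumulated material and write a cited report.
    return ask_llm(f"Write a sourced report answering '{question}' using: {notes}")
```

The hard part in practice is not the loop itself but keeping the model on track across many steps, which is presumably where the reasoning-focused training Reuters described comes in.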
Mozilla Adapts 'Fakespot' Into an AI-Detecting Firefox Add-on
An anonymous reader shared this post from the blog OMG Ubuntu: Want to find out if the text you're reading online was written by a real human or spat out by a large language model trying to sound like one? Mozilla's Fakespot Deepfake Detector Firefox add-on may help give you an indication. Similar to online AI detector tools, the add-on can analyse text (of 32 words or more) to identify patterns, traits, and tells common in AI-generated or manipulated text. It uses Mozilla's proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla's Fakespot Deepfake Detector browser extension is free to use and does not require a signup or an app download. "After installing the extension, it is simple to highlight any text online and request an instant analysis. Our Detector will tell you right away if the words are likely to be written by a human or if they show AI patterns," Mozilla says. Fakespot, acquired by Mozilla in 2023, is best known for its fake product review detection tool, which grades user-submitted reviews left on online shopping sites. Mozilla is now expanding the use of Fakespot's AI tech to cover other kinds of online content. At present, Mozilla's Fakespot Deepfake Detector only works with highlighted text on websites, but the company says image and video analysis is planned for the future. The Fakespot website will also analyze the reviews on any product-listing page if you paste in its URL.Read more of this story at Slashdot.
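Mozilla's ApolloDFT engine is proprietary, but the general classifier-based approach is easy to demonstrate with an open model. A rough sketch, assuming the open-source GPT-2 output detector "roberta-base-openai-detector" is still available on the Hugging Face Hub and that the transformers library is installed; this is illustrative only, not the add-on's actual pipeline.

```python
# Rough illustration of classifier-based AI-text detection, the general
# approach rather than Mozilla's proprietary ApolloDFT engine. Assumes the
# open GPT-2 output detector "roberta-base-openai-detector" is available
# on the Hugging Face Hub.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

text = (
    "Highlight any passage of thirty-two words or more and a detector like "
    "this one will score how strongly it resembles machine-generated prose, "
    "based on statistical patterns learned from known human and AI text."
)

# Mirror the add-on's minimum-length rule before scoring.
if len(text.split()) >= 32:
    print(detector(text))
else:
    print("Text too short for a reliable verdict.")
```

As with all such detectors, the output is a probability, not proof, which is why Mozilla frames the add-on as giving "an indication" rather than a verdict.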
Should We Sing the Praises of Agile, or Bury It?
"Stakeholders must be included" throughout an agile project "to ensure the evolving deliverables meet their expectations," according to an article this week in Communications of the ACM. But long-time Slashdot reader theodp complains it's a "gushing how-to-make-Agile-even-better opinion piece."Like other pieces by Agile advocates, it's long on accolades for Agile, but short on hard evidence justifying why exactly Agile project management "has emerged as a critical component for firms looking to improve project delivery speed and flexibility" and the use of Agile approaches is being expanded across other departments beyond software development. Indeed, among the three examples of success offered in the piece to "highlight the effectiveness of agile methods in navigating complex stakeholder dynamics and achieving project success" is Atlassian's use of agile practices to market and develop its products, many of which are coincidentally designed to support Agile practices and teams (including Jira). How meta. Citing "recent studies," the piece concludes its call for stakeholder engagement by noting that "59% of organizations measure Agile success by customer or user satisfaction." But that is one of those metrics that can create perverse incentives. Empirical studies of user satisfaction and engagement have been published since the 1970's, and sadly one of the cruel lessons learned from them is that the easiest path to having satisfied users is to avoid working on difficult problems. Keep that in mind when you ponder why difficult user stories seem to languish forever in the Kanban and Scrum Board "Ice Box" column, while the "Complete" column is filled with low-hanging fruit. Sometimes success does come easy! So, are you in the Agile-is-Heaven or Agile-is-Hell camp?Read more of this story at Slashdot.