Feed slashdot Slashdot


Link https://slashdot.org/
Feed https://rss.slashdot.org/Slashdot/slashdotMain
Copyright Copyright Slashdot Media. All Rights Reserved.
Updated 2025-07-01 18:33
NPM Users Download 2.1B Deprecated Packages Weekly, Say Security Researchers
The cybersecurity site SC Media reports that NPM registry users "download deprecated packages an estimated 2.1 billion times weekly, according to a statistical analysis of the top 50,000 most-downloaded packages in the registry."

Deprecated, archived and "orphaned" NPM packages can contain unpatched and/or unreported vulnerabilities that pose a risk to the projects that depend on them, warned the researchers from Aqua Security's Team Nautilus, who published their findings in a blog post on Sunday... In conjunction with their research, Aqua Nautilus has released an open-source tool that can help developers identify deprecated dependencies in their projects. Open-source software may stop receiving updates for a variety of reasons, and it is up to developers/maintainers to communicate this maintenance status to users. As the researchers pointed out, not all developers are transparent about potential risks to users who download or depend on their outdated NPM packages. Aqua Nautilus researchers kicked off their analysis after finding that one open-source software maintainer responded to a report about a vulnerability Nautilus discovered by archiving the vulnerable repository the same day. By archiving the repository without fixing the security flaw or assigning it a CVE, the owner leaves developers of dependent projects in the dark about the risks, the researchers said...

Taking into consideration both deprecated packages and active packages that have a direct dependency on deprecated projects, the researchers found about 4,100 (8.2%) of the top 50,000 most-downloaded NPM packages fell under the category of "official" deprecation. However, adding archived repositories to the definition of "deprecated" increased the number of packages affected by deprecation and deprecated dependencies to 6,400 (12.8%)... Including packages with linked repositories that are shown as unavailable (404 error) on GitHub increases the deprecation rate to 15% (7,500 packages), according to the Nautilus analysis. Encompassing packages without any linked repository brings the final number of deprecated packages to 10,600, or 21.2% of the top 50,000. Team Nautilus estimated that under this broader understanding of package deprecation, about 2.1 billion downloads of deprecated packages are made on the NPM registry weekly.

Read more of this story at Slashdot.
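The tiered counts above are easy to sanity-check against the quoted percentages; this quick sketch uses the figures from the article (the tier labels are our paraphrase):

```python
# Reproducing the deprecation tiers from the Aqua Nautilus analysis;
# counts are the figures quoted above, out of the top 50,000 packages.
TOP_PACKAGES = 50_000

tiers = {
    "officially deprecated (incl. direct deps)": 4_100,
    "+ archived repositories": 6_400,
    "+ repositories returning 404": 7_500,
    "+ packages with no linked repository": 10_600,
}

for label, count in tiers.items():
    print(f"{label}: {count:>6} packages = {count / TOP_PACKAGES:.1%}")
```

Each tier is cumulative, which is why the counts only ever grow as the definition of "deprecated" broadens.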
Billy Mitchell and Twin Galaxies Settle Lawsuits On Donkey Kong World Records
"What happens when a loser who needs to win faces a winner who refuses to lose?" That was the tagline for the iconic 2007 documentary The King of Kong: A Fistful of Quarters, chronicling a middle-school teacher's attempts to take the Donkey Kong record from reigning world champion Billy Mitchell. "Billy Mitchell always has a plan," says Billy Mitchell in the movie (who is also shown answering his phone, "World Record Headquarters. Can I help you?") By 1985, 30-year-old Mitchell was already listed in the "Guinness Book of World Records" for having the world's highest scores for Pac-Man, Ms. Pac-Man, Donkey Kong, Donkey Kong, Jr., Centipede, and Burger Time. But then, NME reports...

In 2018, a number of Mitchell's Donkey Kong high scores were called into question by a fellow gamer, who supplied a string of evidence on the Twin Galaxies forums suggesting Mitchell had used an emulator to break the records, rather than the official, unmodified hardware that's typically required to keep things fair. [Twin Galaxies is Guinness World Records' official source for videogame scores.] Following "an independent investigation," Mitchell's high scores were removed from video game database Twin Galaxies as well as the Guinness Book of Records, though the latter reversed the decision in 2020. Forensic analysts also accused him of cheating in 2022, but Mitchell has fought the accusations ever since.

This week, 58-year-old Billy Mitchell posted an announcement on X. "Twin Galaxies has reinstated all of my world records from my videogame career... I am relieved and satisfied to reach this resolution after an almost six-year ordeal and look forward to pursuing my unfinished business elsewhere. Never Surrender, Billy Mitchell." X then wrote below the announcement, "Readers added context they thought people might want to know... Twin Galaxies has only reinstated Mitchell's scores on an archived leaderboard, where rules were different prior to TG being acquired in 2014. His score remains removed from the current leaderboard where he continues to be ineligible by today's rules."

The statement from Twin Galaxies says they'd originally believed they'd seen "a demonstrated impossibility of original, unmodified Donkey Kong arcade hardware" in a recording of one of Billy's games. As punishment they'd then invalidated every record he'd ever set in his life. But now an engineer (qualified as an expert in federal courts) says aging components in the game board could've produced the same visual artifacts seen in the videotape of the disputed game.

Consistent with Twin Galaxies' dedication to the meticulous documentation and preservation of video game score history, Twin Galaxies shall heretofore reinstate all of Mr. Mitchell's scores as part of the official historical database on Twin Galaxies' website. Additionally, upon closing of the matter, Twin Galaxies shall permanently archive and remove from online display the dispute thread... as well as all related statements and articles.

NME adds:

Twin Galaxies' lawyer David Tashroudian told Ars Technica that the company had all its "ducks in a row" for a legal battle with Mitchell but "there were going to be an inordinate amount of costs involved, and both parties were facing a lot of uncertainty at trial, and they wanted to get the matter settled on their own terms."

And the New York Times points out that while Billy scored 1,062,800 in that long-ago game, "The vigorous long-running and sometimes bitter dispute was over marks that have long since been surpassed. The current record, as reported by Twin Galaxies, belongs to Robbie Lakeman. It's 1,272,800."

Thanks to long-time Slashdot reader UnknowingFool for sharing the news.

Read more of this story at Slashdot.
Rust-Written Linux Scheduler Continues Showing Promising Results For Gaming
"A Canonical engineer has been experimenting with implementing a Linux scheduler within the Rust programming language..." Phoronix reported Monday, "that works via sched_ext for implementing a scheduler using eBPF that can be loaded during run-time." The project was started "just for fun" over Christmas, according to a post on X by Canonical-based Linux kernel engineer Andrea Righi, adding "I'm pretty shocked to see that it doesn't just work, but it can even outperform the default Linux scheduler (EEVDF) with certain workloads (i.e., gaming)."

Phoronix notes that a YouTube video accompanying the tweet shows "a game with the scx_rustland scheduler outperforming the default Linux kernel scheduler while running a parallel kernel build in the background."

"For sure the build takes longer," Righi acknowledged in a later post. "This scheduler doesn't magically makes everything run faster, it simply prioritizes more the interactive workloads vs CPU-intensive background jobs." Righi followed up by adding, "And the whole point of this demo was to prove that, despite the overhead of running a scheduler in user-space, we can still achieve interesting performance, while having the advantages of being in user-space (ease of experimentation/testing, reboot-less updates, etc.)"

Wednesday Righi added some improvements, posting that "Only 19 lines of code (comments included) for ~2x performance improvement on SMT isn't bad... and I spent my lunch break playing Counter Strike 2 to test this patch..."

And work seems to be continuing, judging by a fresh post from Righi on Thursday: "I fixed virtme-ng to run inside Docker and used it to create a github CI workflow for sched-ext that clones the latest kernel, builds it and runs multiple VMs to test all the scx schedulers. And it does that in only ~20min. I'm pretty happy about virtme-ng now."

Read more of this story at Slashdot.
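scx_rustland itself is written in Rust on top of sched_ext and eBPF, but the policy Righi describes, favoring interactive workloads over CPU-hungry background jobs, can be illustrated with a toy least-runtime-first sketch. This is purely illustrative Python; `ToyScheduler` and the task names are our invention, not any part of the sched_ext API:

```python
class ToyScheduler:
    """Toy user-space scheduler: always pick the runnable task with the
    least accumulated CPU time, so interactive tasks (which sleep often
    and accumulate little runtime) jump ahead of CPU-bound batch jobs."""

    def __init__(self):
        self.runtime = {}  # task name -> accumulated CPU time (ms)

    def enqueue(self, task):
        self.runtime.setdefault(task, 0.0)

    def pick_next(self):
        # Least-runtime-first: a stripped-down vruntime-style policy.
        return min(self.runtime, key=self.runtime.get)

    def ran_for(self, task, ms):
        self.runtime[task] += ms

sched = ToyScheduler()
for t in ("kernel-build", "game"):
    sched.enqueue(t)

order = []
for _ in range(4):
    t = sched.pick_next()
    order.append(t)
    # the game runs in short bursts; the build burns its whole slice
    sched.ran_for(t, 2 if t == "game" else 10)
```

Because the game accumulates so little runtime per burst, it wins the pick nearly every round while the kernel build soaks up the leftover CPU, which is the intuition behind smooth gameplay during a parallel build (and why the build takes longer).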
Revolutionary 'LEGO-Like' Photonic Chip Paves Way For Semiconductor Breakthroughs
"Researchers at the University of Sydney Nano Institute have developed a small silicon semiconductor chip that combines electronic and photonic (light-based) elements," reports SciTechDaily. "This innovation greatly enhances radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit."

Expanded bandwidth means more information can flow through the chip, and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device. Researchers expect the chip will have applications in advanced radar, satellite systems, wireless networks, and the roll-out of 6G and 7G telecommunications, and also open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney's Aerotropolis precinct.

The chip is built using an emerging technology in silicon photonics that allows the integration of diverse systems on semiconductors less than 5 millimeters wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic 'chiplets'....

Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making. "The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit," he said. "This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process...."

The photonic circuit in the chip means a device with an impressive 15 gigahertz bandwidth of tunable frequencies with spectral resolution down to just 37 megahertz, which is less than a quarter of one percent of the total bandwidth.

Read more of this story at Slashdot.
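The resolution claim is easy to verify arithmetically, using only the two figures quoted above:

```python
# Sanity check on the quoted figures: 37 MHz spectral resolution
# within a 15 GHz tunable bandwidth.
bandwidth_hz = 15e9
resolution_hz = 37e6

fraction = resolution_hz / bandwidth_hz
print(f"{fraction:.4%} of the total bandwidth")  # just under a quarter of one percent
```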
'For Truckers Driving EVs, There's No Going Back'
The Washington Post looks at "a small but growing group of commercial medium-to-heavy-duty truck drivers who use electric trucks." "These drivers - many of whom operate local or regional routes that don't require hundreds of miles on the road in a day - generally welcome the transition to electric, praising their new trucks' handling, acceleration, smoothness and quiet operation."

"Everyone who has had an EV has no aspirations to go back to diesel at this point," said Khari Burton, who drives an electric Volvo VNR in the Los Angeles area for transport company IMC. "We talk about it and it's all positivity. I really enjoy the smoothness ... and just the quietness as well."

Mike Roeth, the executive director of the North American Council for Freight Efficiency, said many drivers have reported that the new vehicles are easier on their bodies, thanks to less rocking of the cab, assisted steering and the quieter motor. "Part of my hypothesis is that it will help truck driver retention," he said. "We're seeing people who would retire driving a diesel truck now working more years with an electric truck."

Most of the electric trucks on the road today are doing local or regional routes, which are easier to manage with a truck that gets only up to 250 miles of range... Trucking advocates say electric has a long way to go before it can take on longer routes. "If you're running very local, very short mileage, there may be a vehicle that can do that type of route," said Mike Tunnell, the executive director of environmental affairs for the American Trucking Association. "But for the average haul of 400 miles, there's just nothing that's really practical today."

There are other concerns, according to the article. "[S]ome companies and trucking associations worry this shift, spurred in part by a California law mandating a switch to electric or emissions-free trucks by 2042, is happening too fast. While electric trucks might work well in some cases, they argue, the upfront costs of the vehicles and their charging infrastructure are often too heavy a lift."

But this is probably the key sentence in the article: For the United States to meet its climate goals, virtually all trucks must be zero-emissions by 2050. While trucks are only 4 percent of the vehicles on the road, they make up almost a quarter of the country's transportation emissions.

The article cites estimates that right now there are 12.2 million trucks on America's highways - and barely 0.1% (about 13,000) are electric. "Around 10,000 of those trucks were just put on the road in 2023, up from 2,000 the year before." (And they add that Amazon alone has thousands of Rivian's electric delivery vans, operating in 1,800 cities.)

But the article's overall message seems to be that when it comes to the trucks, "the drivers operating them say they love driving electric." And it includes comments from actual truckers:

49-year-old Frito-Lay trucker Gary LaBush: "I was like, 'What's going on?' There was no noise - and no fumes... it's just night and day."

66-year-old Marty Boots: "Diesel was like a college wrestler. And the electric is like a ballet dancer... You get back into diesel and it's like, 'What's wrong with this thing?' Why is it making so much noise? Why is it so hard to steer?"

Read more of this story at Slashdot.
James Webb Telescope Detects Earliest Known Black Hole
The Hubble Space Telescope's discovery of GN-z11 in 2016 marked it as the most distant galaxy known at that time, notable for its unexpected luminosity despite its ancient formation just 400 million years after the Big Bang. Now, in a paper published in Nature, astrophysicist Roberto Maiolino proposes that this brightness could be due to a supermassive black hole, challenging current understanding of early black hole formation and growth. NPR reports: This wasn't just any black hole. First -- assuming that the black hole started out small -- it could be devouring matter at a ferocious rate. And it would have needed to do so to reach its massive size. "This black hole is essentially eating the [equivalent of] an entire Sun every five years," says Maiolino. "It's actually much higher than we thought could be feasible for these black holes." Hence the word "vigorous" in the paper's title. Second, the black hole is 1.6 million times the mass of our Sun, and it was in place just 400 million years after the dawn of the universe. "It is essentially not possible to grow such a massive black hole so fast so early in the universe," Maiolino says. "Essentially, there is not enough time according to classical theories. So one has to invoke alternative scenarios." Here's scenario one -- rather than starting out small, perhaps supermassive black holes in the early universe were simply born big due to the collapse of vast clouds of primordial gas. Scenario two is that maybe early stars collapsed to form a sea of smaller black holes, which could have then merged or swallowed matter way faster than we thought, causing the resulting black hole to grow quickly. Or perhaps it's some combination of both. In addition, it's possible that this black hole is harming the growth of the galaxy GN-z11. That's because black holes radiate energy as they feed. At such a high rate of feasting, this energy could sweep away the gas of the host galaxy. 
And since stars are made from gas, it could quench star formation, slowly strangling the galaxy. Not to mention that without gas, the black hole wouldn't have anything to feed on and it too would die.

Read more of this story at Slashdot.
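One way to see the "not enough time" argument: a black hole accreting at the Eddington limit grows exponentially, with an e-folding (Salpeter) time of roughly 45 million years at ~10% radiative efficiency. That timescale and the ~100-solar-mass stellar seed below are standard textbook assumptions, not figures from the paper:

```python
import math

# Back-of-envelope check on "not enough time": Eddington-limited growth
# is exponential with an e-folding (Salpeter) time of ~45 Myr, assuming
# ~10% radiative efficiency. The ~100 Msun seed is a stellar-collapse
# assumption; 1.6e6 Msun is the mass reported for the GN-z11 black hole.
SALPETER_MYR = 45.0
seed_msun = 100.0
final_msun = 1.6e6

e_folds = math.log(final_msun / seed_msun)
growth_myr = e_folds * SALPETER_MYR
print(f"~{growth_myr:.0f} Myr of continuous Eddington-limited feeding needed, "
      f"vs ~400 Myr since the Big Bang")
```

Even feeding continuously at the limit, a stellar seed needs more time than the universe had provided, which is why the paper turns to heavy seeds or super-Eddington episodes.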
Ceph: a Journey To 1 TiB/s
It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE. And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers - a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale-from-production starts while assisting Clyso with "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]...

I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later, when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8-OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single-OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There was one minor, and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:

Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."

Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."

Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s - and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes - achieving 950 GiB/s - then tried some more performance optimizations...

Read more of this story at Slashdot.
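On Fix One: Linux exposes per-CPU idle states under `/sys/devices/system/cpu/cpuN/cpuidle/`, so you can check whether deep c-states are still enabled without rebooting into the BIOS. A hedged sketch (the `deep_cstates_enabled` helper is ours, and state names vary by cpuidle driver):

```python
from pathlib import Path

def deep_cstates_enabled(cpuidle_root="/sys/devices/system/cpu/cpu0/cpuidle"):
    """Return names of idle states deeper than C1 that are still enabled.

    An empty list suggests the machine is pinned to low-latency states,
    which is what the post's first fix (BIOS maximum-performance mode)
    accomplishes. Only the shallow POLL/C1 states are skipped by name;
    exact naming depends on the cpuidle driver in use.
    """
    enabled = []
    root = Path(cpuidle_root)
    if not root.is_dir():
        return enabled  # no cpuidle sysfs (VM, container, non-Linux)
    for state in sorted(root.glob("state*")):
        name = (state / "name").read_text().strip()
        disabled = (state / "disable").read_text().strip() == "1"
        if name not in ("POLL", "C1") and not disabled:
            enabled.append(name)
    return enabled

if __name__ == "__main__":
    deep = deep_cstates_enabled()
    print("deep C-states enabled:", deep or "none")
```

Writing `1` to a state's `disable` file is a runtime alternative to the BIOS toggle, though the BIOS setting is what the post actually used.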
S&P 500 Index Sets Record High, Thanks to 'AI-Driven Frenzy' and Tech Stocks
The S&P 500 index tracks 500 of the largest companies listed on U.S. stock exchanges, according to Wikipedia. And Friday that index "hit an all-time closing high," reports the Washington Post, "reflecting the staggering gains of a coterie of Big Tech firms against the backdrop of a surprisingly stable economy."

The broad-based index closed at 4,839.81 - up more than 1 percent for the day - surpassing the previous closing record set in January of 2022. The stock market surged upward in the final quarter of 2023 as evidence gathered that the [U.S.] economy has not tipped into recession territory, despite the Federal Reserve's campaign to raise interest rates. At the same time analysts point to an AI-driven frenzy on Wall Street that rivals the dot-com boom of the late '90s, when investors sought to capitalize on the transformative gains brought by the early internet.

A booming S&P 500 is a welcome sign for the millions of Americans who invest in the index through retirement accounts. Investors in 2022 had about $5.7 trillion in assets passively indexed to the S&P 500 and another $5.7 trillion in funds that use it as a benchmark comparison, according to S&P Global. Voters' feelings about the stock market and economy could affect the 2024 election...

Tech companies, including a few names heavily associated with artificial intelligence work, led the S&P 500's gains. Seven of the largest tech stocks known as the "Magnificent Seven" - Apple, Microsoft, Alphabet, Amazon, Nvidia, Tesla and Meta - increased 75 percent on average in 2023 and represented 30 percent of the index's total market value at the end of 2023. "AI is the new dot-com," said Michael Farr of Farr, Miller and Washington. "It's the new magic that is going to change the world that we don't really understand yet. But we all understand it's very powerful." Those seven stocks made up around half of the S&P 500's growth last year. Nvidia, whose high-performance chips have become popular for AI uses, had the best year of the bunch, at one point gaining nearly $190 billion in value overnight, a 24 percent gain. In the last 12 months, the index has risen 21.83%.

The article notes that "Although the rest of the market has lagged Big Tech, analysts say promising economic data from recent months has boosted optimism about the broader economy."

Read more of this story at Slashdot.
Can an AI Become Its Own CEO After Creating a Startup? Google DeepMind Co-Founder Thinks So
An anonymous reader quotes a report from Inc. Magazine:

Google's DeepMind division has long led the way on all sorts of AI breakthroughs, grabbing headlines in 2016, when one of its systems beat a world champion at the strategy game Go, then seen as an unlikely feat. So when one of DeepMind's co-founders makes a pronouncement about the future of AI, it's worth listening, especially if you're a startup entrepreneur. AI might be coming for your job! Mustafa Suleyman, co-founder of DeepMind and now CEO of Inflection AI -- a small, California-based machine intelligence company -- recently suggested this possibility could be reality in a half-decade or so.

At the World Economic Forum meeting at Davos this week, Suleyman said he thinks AI tech will soon reach the point where it could dream up a company, project-manage it, and successfully sell products. This still-imaginary AI-ntrepreneur will certainly be able to do so by 2030, he believes. He's also sure that these AI powers will be "widely available" for "very cheap" prices, potentially even as open-source systems, meaning some aspects of these super-smart AIs would be free.

Whether an AI entrepreneur could actually beat a human at the startup game is something we'll have to wait to find out, but the mere fact that Suleyman is saying an AI could carry out the role is stunning. It's also controversial, and likely tangled in a forest of thorny legal matters. For example, there's the tricky issue of whether an AI can own or patent intellectual property. A recent ruling in the U.K. holds that an AI definitively cannot be a patent holder.

Underlining how much of all of this is theoretical, Suleyman's musings about AI entrepreneurs came in answer to a question about whether AIs can pass the famous Turing test, sometimes considered a gold standard for AI: whether an artificial general intelligence (AGI) can fool a human into thinking that it, too, is human. Cunningly, Suleyman twisted the question around and said the traditional Turing test wasn't good enough. Instead, he argued, a better test would be to see if an AGI could perform sophisticated tasks like acting as an entrepreneur.

No matter how theoretical Suleyman's thinking is, it will unsettle critics who worry about the destructive potential of AI, and it may worry some in the venture capital world, too. How exactly would one invest in a startup with a founder that's just a pile of silicon chips? Even Suleyman said he thinks this sort of innovation would cause a giant economic upset.

Read more of this story at Slashdot.
SpaceX's 'Dragon' Capsule Carries Four Private Astronauts to the ISS for Axiom Space
"It's the third all-private astronaut mission to the International Space Station," writes NASA - and they're expected to start boarding within the next hour! Watch it all on the official stream of NASA TV. More details from Ars Technica:

The four-man team lifted off from NASA's Kennedy Space Center in Florida aboard a SpaceX Falcon 9 rocket Thursday, kicking off a 36-hour pursuit of the orbiting research laboratory. Docking is scheduled for Saturday morning. This two-week mission is managed by Houston-based Axiom Space, which is conducting private astronaut missions to the ISS as a stepping stone toward building a fully commercial space station in low-Earth orbit by the end of this decade. Axiom's third mission, called Ax-3, launched at 4:49 pm EST (21:49 UTC) Thursday. The four astronauts were strapped into their seats inside SpaceX's Dragon Freedom spacecraft atop the Falcon 9 rocket. This is the 12th time SpaceX has launched a human spaceflight mission, and could be the first of five Dragon crew missions this year.

NASA reports that the crew "will spend about two weeks conducting microgravity research, educational outreach, and commercial activities aboard the space station."

NASA Administrator Bill Nelson said "During their time aboard the International Space Station, the Ax-3 astronauts will carry out more than 30 scientific experiments that will help advance research in low-Earth orbit. As the first all-European commercial astronaut mission to the space station, the Ax-3 crew is proof that the possibility of space unites us all...."

The Dragon spacecraft will dock autonomously to the forward port of the station's Harmony module as early as 4:19 a.m. [EST] Saturday. Hatches between Dragon and the station are expected to open after 6 a.m. [EST], allowing the Axiom crew to enter the complex for a welcoming ceremony and start their stay aboard the orbiting laboratory....

The Ax-3 astronauts are expected to depart the space station Saturday, February 3, pending weather, for a return to Earth and splashdown at a landing site off the coast of Florida.

Read more of this story at Slashdot.
Water Ice Buried At Mars' Equator Is Over 2 Miles Thick
Keith Cooper reports via Space.com: A European Space Agency (ESA) probe has found enough water to cover Mars in an ocean between 4.9 and 8.9 feet (1.5 and 2.7 meters) deep, buried in the form of dusty ice beneath the planet's equator. The finding was made by ESA's Mars Express mission, a veteran spacecraft that has been engaged in science operations around Mars for 20 years now. While it's not the first time that evidence for ice has been found near the Red Planet's equator, this new discovery is by far the largest amount of water ice detected there so far and appears to match previous discoveries of frozen water on Mars. "Excitingly, the radar signals match what we expect to see from layered ice and are similar to the signals we see from Mars' polar caps, which we know to be very ice rich," said lead researcher Thomas Watters of the Smithsonian Institution in the United States in an ESA statement. The deposits are thick, extending 3.7 km (2.3 miles) underground, and topped by a crust of hardened ash and dry dust hundreds of meters thick. The ice is not a pure block but is heavily contaminated by dust. While its location near the equator makes it more accessible to future crewed missions, being buried so deep means that accessing the water ice would be difficult.Read more of this story at Slashdot.
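As a rough sanity check on the figures in the story, the quoted "global ocean" depth range can be converted into a water volume by spreading it over the entire Martian surface. A minimal sketch in Python; Mars' mean radius (~3,389.5 km) is not stated in the article and is a standard value assumed here for illustration:

```python
import math

# Mars' mean radius in meters (assumed, not from the article).
MARS_RADIUS_M = 3_389_500.0

# Surface area of a sphere: 4 * pi * r^2 (~1.44e14 m^2 for Mars).
surface_area_m2 = 4 * math.pi * MARS_RADIUS_M ** 2

# Water volume implied by each end of the quoted depth range.
for depth_m in (1.5, 2.7):
    volume_km3 = surface_area_m2 * depth_m / 1e9  # m^3 -> km^3
    print(f"{depth_m} m global layer ~ {volume_km3:,.0f} km^3 of water")
```

Under these assumptions the quoted range works out to very roughly 200,000 to 400,000 cubic kilometers of water ice.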
Microsoft Executive Emails Hacked By Russian Intelligence Group, Company Says
In a regulatory filing today, Microsoft said that a Russian intelligence group hacked into some of the company's top executives' email accounts. CNBC reports: Nobelium, the same group that breached government supplier SolarWinds in 2020, carried out the attack, which Microsoft detected last week, according to the company. The announcement comes after new U.S. requirements for disclosing cybersecurity incidents went into effect. A Microsoft spokesperson said that while the company does not believe the attack had a material impact, it still wanted to honor the spirit of the rules. In late November, the group accessed "a legacy non-production test tenant account," Microsoft's Security Response Center wrote in the blog post. After gaining access, the group "then used the account's permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents," the corporate unit wrote. The company's senior leadership team, including finance chief Amy Hood and president Brad Smith, regularly meets with CEO Satya Nadella. Microsoft said it has not found signs that Nobelium had accessed customer data, production systems or proprietary source code. The U.S. government and Microsoft consider Nobelium to be part of the Russian foreign intelligence service SVR. The hacking group was responsible for one of the most prolific breaches in U.S. history when it added malicious code to updates to SolarWinds' Orion software, which some U.S. government agencies were using. Microsoft itself was ensnared in the hack. Nobelium, also known as APT29 or Cozy Bear, is a sophisticated hacking group that has attempted to breach the systems of U.S. allies and the Department of Defense. Microsoft also uses the name Midnight Blizzard to identify Nobelium. 
It was also implicated alongside another Russian hacking group in the 2016 breach of the Democratic National Committee's systems.Read more of this story at Slashdot.
The Rabbit R1 Will Offer Up-To-Date Answers Powered By Perplexity's AI
Despite many questions going unanswered, a startup called Rabbit sold out of its pocket AI companion a day after it debuted at CES 2024 last week. Now, the company finally shared more details about which large language model (LLM) will be powering the device. According to Engadget, the provider in question is Perplexity, "a San Francisco-based startup with ambitions to overtake Google in the AI space." From the report: Perplexity will be providing up-to-date search results via Rabbit's $199 orange brick -- without the need for any subscription. That said, the first 100,000 R1 buyers will receive one year of Perplexity Pro subscription -- normally costing $200 -- for free. This advanced service adds file upload support, a daily quota of over 300 complex queries and the ability to switch to other AI models (GPT-4, Claude 2.1 or Gemini), though these don't necessarily apply to the R1's use case.Read more of this story at Slashdot.
OpenAI CEO Sam Altman Is Still Chasing Billions To Build AI Chips
According to Bloomberg (paywalled), OpenAI CEO Sam Altman is reportedly raising billions to develop a global network of chip fabrication factories, collaborating with leading chip manufacturers to address the high demand for chips required for advanced AI models. The Verge reports: A major cost and limitation for running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E that answer prompts and generate images. Nvidia's value rose above $1 trillion for the first time last year, partly due to the virtual monopoly it holds as GPT-4, Gemini, Llama 2, and other models depend heavily on its popular H100 GPUs. Accordingly, the race to manufacture more high-powered chips to run complex AI systems has only intensified. The limited number of fabs capable of making high-end chips forces Altman, or anyone else, to bid for capacity years in advance of producing new chips. And going up against the likes of Apple requires deep-pocketed investors who will front costs that the nonprofit OpenAI still can't afford. SoftBank Group and Abu Dhabi-based AI holding company G42 have reportedly been in talks about raising money for Altman's project.Read more of this story at Slashdot.
Researchers Claim First Functioning Graphene-Based Chip
An anonymous reader quotes a report from IEEE Spectrum: Researchers at Georgia Tech, in Atlanta, have developed what they are calling the world's first functioning graphene-based semiconductor. This breakthrough holds the promise to revolutionize the landscape of electronics, enabling faster traditional computers and offering a new material for future quantum computers. The research, published on January 3 in Nature and led by Walt de Heer, a professor of physics at Georgia Tech, focuses on leveraging epitaxial graphene, a crystal structure of carbon chemically bonded to silicon carbide (SiC). This novel semiconducting material, dubbed semiconducting epitaxial graphene (SEC) -- or alternatively, epigraphene -- boasts enhanced electron mobility compared with that of traditional silicon, allowing electrons to traverse with significantly less resistance. The outcome is transistors capable of operating at terahertz frequencies, offering speeds 10 times as fast as that of the silicon-based transistors used in current chips. De Heer describes the method used as a modified version of an extremely simple technique that has been known for over 50 years. "When silicon carbide is heated to well over 1,000C, silicon evaporates from the surface, leaving a carbon-rich surface which then forms into graphene," says de Heer. This heating step is done with an argon quartz tube in which a stack of two SiC chips are placed in a graphite crucible, according to de Heer. Then a high-frequency current is run through a copper coil around the quartz tube, which heats the graphite crucible through induction. The process takes about an hour. De Heer added that the SEC produced this way is essentially charge neutral, and when exposed to air, it will spontaneously be doped by oxygen. This oxygen doping is easily removed by heating it at about 200C in vacuum. "The chips we use cost about [US] $10, the crucible about $1, and the quartz tube about $10," said de Heer. [...] 
De Heer and his research team concede, however, that further exploration is needed to determine whether graphene-based semiconductors can surpass the current superconducting technology used in advanced quantum computers. The Georgia Tech team does not envision incorporating graphene-based semiconductors into standard silicon or compound semiconductor lines. Instead, they are aiming for a paradigm shift beyond silicon, utilizing silicon carbide. They are developing methods, such as coating SEC with boron nitride, to protect and enhance its compatibility with conventional semiconductor lines. Comparing their work with commercially available graphene field-effect transistors (GFETs), de Heer explains that there is a crucial difference: "Conventional GFETs do not use semiconducting graphene, making them unsuitable for digital electronics requiring a complete transistor shutdown." He says that the SEC developed by his team allows for a complete shutdown, meeting the stringent requirements of digital electronics. De Heer says that it will take time to develop this technology. "I compare this work to the Wright brothers' first 100-meter flight. It will mainly depend on how much work is done to develop it."Read more of this story at Slashdot.
US To Ban Pentagon From Buying Batteries From China's CATL, BYD
U.S. lawmakers have banned the Defense Department from buying batteries produced by China's biggest manufacturers. "The rule implemented as part of the latest National Defense Authorization Act that passed on Dec. 22 will prevent procuring batteries from Contemporary Amperex Technology Co. Ltd., BYD Co. and four other Chinese companies beginning in October 2027," reports Bloomberg. From the report: The measure doesn't extend to commercial purchases by companies such as Ford, which is licensing technology from CATL to build electric-vehicle batteries in Michigan. Tesla also sources some of its battery cells from BYD, which became the new top-selling EV maker globally in the fourth quarter. The four other manufacturers whose batteries will be banned are Envision Energy Ltd., EVE Energy Co., Gotion High Tech Co. and Hithium Energy Storage Technology Co. The decision still requires Pentagon officials to more clearly define the reach of the new rule. It adds to previous provisions outlined by the NDAA that decoupled the Defense Department's supply chain from China, including restrictions on use of Chinese semiconductors. While the Defense Department bans apply strictly to defense procurement, industries and lawmakers closely follow the rules as a guide for what materials, products and companies to trust in their own course of business.Read more of this story at Slashdot.
Plex To Launch a Store For Movies and TV Shows
Jay Peters reports via The Verge: Plex, known for its media server software and as a place to watch ad-supported content, is going to launch a store to buy and rent movies and TV shows in early February, executives told Lowpass' Janko Roettgers. "Most studios" are lined up for the store's launch, and there are "plans to complete the catalog soon after," Roettgers says. The store will also integrate with Plex features like its watchlists for movies. Roettgers points out that Plex has announced plans in both 2020 and 2023 to launch a movie / TV store -- hopefully Plex is truly ready to do so this time. Plex chief product officer Scott Olechowski told Roettgers that more changes are coming to Plex down the line, including a "pretty major UX refresh" and more social features like public profiles.Read more of this story at Slashdot.
Japan's SLIM Probe Lands On Moon, But Suffers Power Problem
Geoffrey.landis writes: The Japan SLIM spacecraft has successfully landed on the Moon, but power problems mean it may be a short mission. The good news is that the landing was successful, making Japan only the fifth nation to successfully make a lunar landing, and the ultra-miniature rover and the hopper both deployed. The bad news is that the solar arrays aren't producing power, and unless they can fix the problem in the next few hours, the batteries will be depleted and it will die. But, short mission or long, hurrah for Japan for being the fifth country to successfully land a mission on the surface of the Moon (on their third try; two previous missions didn't make it). It's a rather amazing mission. I've never seen a spacecraft concept that lands vertically under rocket power but then rotates over to rest horizontally on the surface.Read more of this story at Slashdot.
Why Every Coffee Shop Looks the Same
An anonymous reader shares a report: These cafes had all adopted similar aesthetics and offered similar menus, but they hadn't been forced to do so by a corporate parent, the way a chain like Starbucks replicated itself. Instead, despite their vast geographical separation and total independence from each other, the cafes had all drifted toward the same end point. The sheer expanse of sameness was too shocking and new to be boring. Of course, there have been examples of such cultural globalisation going back as far as recorded civilisation. But the 21st-century generic cafes were remarkable in the specificity of their matching details, as well as the sense that each had emerged organically from its location. They were proud local efforts that were often described as "authentic," an adjective that I was also guilty of overusing. When travelling, I always wanted to find somewhere "authentic" to have a drink or eat a meal. If these places were all so similar, though, what were they authentic to, exactly? What I concluded was that they were all authentically connected to the new network of digital geography, wired together in real time by social networks. They were authentic to the internet, particularly the 2010s internet of algorithmic feeds. In 2016, I wrote an essay titled Welcome to AirSpace, describing my first impressions of this phenomenon of sameness. "AirSpace" was my coinage for the strangely frictionless geography created by digital platforms, in which you could move between places without straying beyond the boundaries of an app, or leaving the bubble of the generic aesthetic. The word was partly a riff on Airbnb, but it was also inspired by the sense of vaporousness and unreality that these places gave me. They seemed so disconnected from geography that they could float away and land anywhere else. When you were in one, you could be anywhere. My theory was that all the physical places interconnected by apps had a way of resembling one another. 
In the case of the cafes, the growth of Instagram gave international cafe owners and baristas a way to follow one another in real time and gradually, via algorithmic recommendations, begin consuming the same kinds of content. One cafe owner's personal taste would drift toward what the rest of them liked, too, eventually coalescing. On the customer side, Yelp, Foursquare and Google Maps drove people like me -- who could also follow the popular coffee aesthetics on Instagram -- toward cafes that conformed with what they wanted to see by putting them at the top of searches or highlighting them on a map. To court the large demographic of customers moulded by the internet, more cafes adopted the aesthetics that already dominated on the platforms. Adapting to the norm wasn't just following trends but making a business decision, one that the consumers rewarded. When a cafe was visually pleasing enough, customers felt encouraged to post it on their own Instagram in turn as a lifestyle brag, which provided free social media advertising and attracted new customers. Thus the cycle of aesthetic optimisation and homogenisation continued.Read more of this story at Slashdot.
30TB Hard Drives Are Nearly Here
Seagate this week unveiled the industry's first hard disk drive platform that uses heat-assisted magnetic recording (HAMR). Tom's Hardware: The new Mozaic 3+ platform relies on several all-new technologies, including new media, new write and read heads, and a brand-new controller. The platform will be used for Seagate's upcoming Exos hard drives for cloud datacenters with a 30TB capacity and higher. Heat-assisted magnetic recording is meant to radically increase areal recording density of magnetic media by making writes while the recording region is briefly heated to a point where its magnetic coercivity drops significantly. Seagate's Mozaic 3+ uses 10 glass disks with a magnetic layer consisting of an iron-platinum superlattice structure that ensures both longevity and smaller media grain size compared to typical HDD platters. To record the media, the platform uses a plasmonic writer sub-system with a vertically integrated nanophotonic laser that heats the media before writing. Because individual grains are so small with the new media, their individual magnetic signatures are lower, whereas magnetic inter-track interference (ITI) effect is somewhat higher. As a result, Seagate had to introduce its new Gen 7 Spintronic Reader, which features the "world's smallest and most sensitive magnetic field reading sensors," according to the company. Because Seagate's new Mozaic 3+ platform deals with new media with a very small grain size, an all-new writer, and a reader that features multiple tiny magnetic field readers, it also requires a lot of compute horsepower to orchestrate the drive's work. Therefore, Seagate has equipped the Mozaic 3+ platform with an all-new controller made on a 12nm fabrication process.Read more of this story at Slashdot.
Huawei Makes a Break From Android With Next Version of Harmony OS
China's Huawei will not support Android apps on the latest iteration of its in-house Harmony operating system, domestic financial media Caixin reported, as the company looks to bolster its own software ecosystem. From a report: The company plans to roll out a developer version of its HarmonyOS Next platform in the second quarter of this year followed by a full commercial version in the fourth quarter, it said in a company statement highlighting the launch event for the platform in its home city of Shenzhen on Thursday. Huawei first unveiled its proprietary Harmony system in 2019 and prepared to launch it on some smartphones a year later after U.S. restrictions cut its access to Google's technical support for its Android mobile OS. However, earlier versions of Harmony allowed apps built for Android to be used on the system, which will no longer be possible, according to Caixin.Read more of this story at Slashdot.
Crime Rings Are Trafficking in an Unlikely Treasure: Sand
Organized crime is mining sand from rivers and coasts to feed demand worldwide, ruining ecosystems and communities. Can it be stopped? Scientific American reports: Very few people are looking closely at the illegal sand system or calling for changes, however, because sand is a mundane resource. Yet sand mining is the world's largest extraction industry because sand is a main ingredient in concrete, and the global construction industry has been soaring for decades. Every year the world uses up to 50 billion metric tons of sand, according to a United Nations Environment Program report. The only natural resource more widely consumed is water. A 2022 study by researchers at the University of Amsterdam concluded that we are dredging river sand at rates that far outstrip nature's ability to replace it, so much so that the world could run out of construction-grade sand by 2050. The U.N. report confirms that sand mining at current rates is unsustainable. The greatest demand comes from China, which used more cement in three years (6.6 gigatons from 2011 through 2013) than the U.S. used in the entire 20th century (4.5 gigatons), notes Vince Beiser, author of The World in a Grain. Most sand gets used in the country where it is mined, but with some national supplies dwindling, imports reached $1.9 billion in 2018, according to Harvard's Atlas of Economic Complexity. Companies large and small dredge up sand from waterways and the ocean floor and transport it to wholesalers, construction firms and retailers. Even the legal sand trade is hard to track. Two experts estimate the global market at about $100 billion a year, yet the U.S. Geological Survey Mineral Commodity Summaries indicates the value could be as high as $785 billion. Sand in riverbeds, lake beds and shorelines is the best for construction, but scarcity opens the market to less suitable sand from beaches and dunes, much of it scraped illegally and cheaply. 
With a shortage looming and prices rising, sand from Moroccan beaches and dunes is sold inside the country and is also shipped abroad, using organized crime's extensive transport networks, Abderrahmane has found. More than half of Morocco's sand is illegally mined, he says.Read more of this story at Slashdot.
Viasat Tries To Stop Citizen Effort To Revive FCC Funding for Starlink
A resident in Virginia has urged the Federal Communications Commission to reconsider canceling $886 million in federal funding for SpaceX's Starlink system. But rival satellite company Viasat has gone out of its way to oppose the citizen-led petition. APCMag: On Jan. 1, the FCC received a petition from the Virginia resident Greg Weisiger asking the commission to reconsider denying the $886 million to SpaceX. "Petitioner is at an absolute loss to understand the Commission's logic with these denials," wrote Weisiger, who lives in Midlothian, Virginia. "It is abundantly clear that Starlink has a robust, reliable, affordable service for rural and insular locations in all states and territories." The petition arrived a few weeks after the FCC denied SpaceX's appeal to receive $886 million from the commission's Rural Digital Opportunity Fund, which is designed to subsidize 100Mbps to gigabit broadband across the US. SpaceX wanted to use the funds to expand Starlink access in rural areas. But the FCC ruled that "Starlink is not reasonably capable of offering the required high-speed, low latency service throughout the areas where it won auction support." Weisiger disagrees. In his petition, he writes that the FCC's decision will deprive him of federal support to bring high-speed internet to his home. "Thousands of other Virginia locations were similarly denied support," he added.Read more of this story at Slashdot.
Microsoft Bringing Teams Meeting Reminders To Windows 11 Start Menu
Microsoft is getting ready to place Teams meeting reminders on the Start menu in Windows 11. From a report: The software giant has started testing a new build of Windows 11 with Dev Channel testers that includes a Teams meeting reminder in the recommended section of the Start menu. Microsoft is also testing an improved way to instantly access new photos and screenshots from Android devices. [...] The Teams meeting reminders will be displayed alongside the regular recently used and recommended file list on the Start menu, and they won't be displayed for non-business users of Windows 11.Read more of this story at Slashdot.
Game Developer Survey: 50% Work at a Studio Already Using Generative AI Tools
A new survey of thousands of game development professionals finds a near-majority saying generative AI tools are already in use at their workplace. But a significant minority of developers say their company has no interest in generative AI tools or has outright banned their use. From a report: The Game Developers Conference's 2024 State of the Industry report, released Thursday, aggregates the thoughts of over 3,000 industry professionals as of last October. While the annual survey (conducted in conjunction with research partner Omdia) has been running for 12 years, this is the first time respondents were asked directly about their use of generative AI tools such as ChatGPT, DALL-E, GitHub Copilot, and Adobe Generative Fill. Forty-nine percent of the survey's developer respondents said that generative AI tools are currently being used in their workplace. That near-majority includes 31 percent (of all respondents) that say they use those tools themselves and 18 percent that say their colleagues do. The survey also found that different studio departments showed different levels of willingness to embrace AI tools. Forty-four percent of employees in business and finance said they were using AI tools, for instance, compared to just 16 percent in visual arts and 13 percent in "narrative/writing."Read more of this story at Slashdot.
Cop28 Deal Will Fail Unless Rich Countries Quit Fossil Fuels, Says Climate Negotiator
The credibility of the Cop28 agreement to "transition away" from fossil fuels rides on the world's biggest historical polluters like the US, UK and Canada rethinking current plans to expand oil and gas production, according to the climate negotiator representing 135 developing countries. The Guardian: In an exclusive interview with the Guardian, Pedro Pedroso, the outgoing president of the G77 plus China bloc of developing countries, warned that the landmark deal made at last year's climate talks in Dubai risked failing. "We achieved some important outcomes at Cop28 but the challenge now is how we translate the deal into meaningful action for the people," Pedroso said. "As we speak, unless we lie to ourselves, none of the major developed countries, who are the most important historical emitters, have policies that are moving away from fossil fuels, on the contrary, they are expanding," said Pedroso. These countries must also deliver adequate finance for poorer nations to transition and adapt to the climate crisis. In Dubai, Sultan Al Jaber, Cop28 president and chief of the Emirates national oil company, was subject to widespread scrutiny -- understandable given that the UAE is the world's seventh biggest oil producer with the fifth largest gas reserves. Yet the US was by far the biggest oil and gas producer in the world last year -- setting a new record, during a year that was the hottest ever recorded. The US, UK, Canada, Australia and Norway account for 51% of the total planned oil and gas expansion by 2050, according to research by Oil Change International. "It's very easy to label some emerging economies, especially the Gulf states, as climate villains, but this is very unfair by countries with historic responsibilities -- who keep trying to scapegoat and deviate the attention away from themselves. 
Just look at US fossil fuel plans and the UK's new drilling licenses for the North Sea, and Canada which has never met any of its emission reduction goals, not once," said Pedroso, a Cuban diplomat.Read more of this story at Slashdot.
How Much of the World Is It Possible to Model?
Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation. The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. 
The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, it comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks. This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place. Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. 
If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.Read more of this story at Slashdot.
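The McCulloch-Pitts unit the essay describes (interconnected nodes exchanging zeros and ones under simple activation and inhibition rules) is small enough to sketch directly. This is an illustrative simplification in Python with names of my own choosing, not the full calculus of the 1943 paper:

```python
# A McCulloch-Pitts unit: fires (outputs 1) when no inhibitory input is
# active and the sum of excitatory inputs reaches a fixed threshold.
# All signals are 0 or 1; there is no training, only wiring.
def mp_unit(inputs, threshold, inhibitory=()):
    if any(inputs[i] for i in inhibitory):
        return 0  # any active inhibitory line vetoes firing
    excitatory = [x for i, x in enumerate(inputs) if i not in inhibitory]
    return 1 if sum(excitatory) >= threshold else 0

# Classic examples: AND and OR are single units; NOT uses an inhibitory line.
AND = lambda a, b: mp_unit([a, b], threshold=2)
OR  = lambda a, b: mp_unit([a, b], threshold=1)
NOT = lambda a:    mp_unit([a], threshold=0, inhibitory=(0,))

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```

Networks of such units can compute any Boolean function, which is the sense in which the paper reduced "brain activity" to a logical calculus; the trainable weights that define modern deep learning came later.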
Airbus Is Pulling Ahead as Boeing's Troubles Mount
Airbus cemented its position last week as the world's biggest plane maker for the fifth straight year, announcing that it had delivered more aircraft and secured more orders than Boeing in 2023. At the same time, Boeing was trying to put out a huge public-relations and safety crisis caused by a harrowing near disaster involving its 737 Max line of airliners. In the long-running duel between the two aviation rivals, Airbus has pulled far ahead. The New York Times: "What used to be a duopoly has become two-thirds Airbus, one-third Boeing," said Richard Aboulafia, the managing director of AeroDynamic Advisory in Washington, D.C. "A lot of people, whether investors, financiers or customers, are looking at Airbus and seeing a company run by competent people," he said. "The contrast with Boeing is fairly profound." The incident involving the 737 Max 9, in which a hole blew open in the fuselage of an Alaska Airlines flight in midair, was the latest in a string of safety lapses in Boeing's workhorse aircraft -- including two fatal crashes in 2018 and 2019 -- that are indirectly helping propel the fortunes of the European aerospace giant. As the Federal Aviation Administration widens its scrutiny of Max 9 production, Airbus's edge is likely to sharpen. Airlines are embarking on massive expansions of their fleets to meet a postpandemic surge in the demand for global air travel, and are considering which company to turn to.Read more of this story at Slashdot.
Apple Offers To Open Mobile Payments To Third Parties Amid EU Antitrust Case
Apple committed to address antitrust concerns posed by the European Commission surrounding its popular Apple Pay app, including allowing access to third-party mobile wallet and payment services. WSJ: The U.S. tech giant has agreed to allow companies' apps to make contactless payments on devices that use the iOS system, such as iPhones, for free without the need to use Apple Pay or Apple Wallet, the EU's executive arm said Friday.Read more of this story at Slashdot.
'Where Have All the Websites Gone?'
An anonymous reader shares an essay: No one clicks a webpage hoping to learn which cat can haz cheeseburger. Weirdos, maybe. Sickos. No, we get our content from a For You Page now -- algorithmically selected videos and images made by our favorite creators, produced explicitly for our preferred platform. Which platform doesn't matter much. So long as it's one of the big five. Creators churn out content for all of them. It's a technical marvel, that internet. Something so mindblowingly impressive that if you showed it to someone even thirty years ago, their face would melt the fuck off. So why does it feel like something's missing? Why are we all so collectively unhappy with the state of the web? A tweet went viral this Thanksgiving when a Twitter user posed a question to their followers. (The tweet said: "It feels like there are no websites anymore. There used to be so many websites you could go on. Where did all the websites go?") A peek at the comments, and I could only assume the tweet struck a nerve. Everyone had their own answer. Some comments blamed the app-ification of the web. "Everything is an app now!," one user replied. Others point to the death of Adobe Flash and how so many sites and games died along with it. Everyone agrees that websites have indeed vanished, and we all miss the days we were free to visit them.Read more of this story at Slashdot.
Sam Altman Says AI Depends On Energy Breakthrough
An anonymous reader quotes a report from Reuters: OpenAI's CEO Sam Altman on Tuesday said an energy breakthrough is necessary for future artificial intelligence, which will consume vastly more power than people have expected. Speaking at a Bloomberg event on the sidelines of the World Economic Forum's annual meeting in Davos, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI. "There's no way to get there without a breakthrough," he said. "It motivates us to go invest more in fusion." In 2021, Altman personally provided $375 million to private U.S. nuclear fusion company Helion Energy, which since has signed a deal to provide energy to Microsoft in future years. Microsoft is OpenAI's biggest financial backer and provides it computing resources for AI. Altman said he wished the world would embrace nuclear fission as an energy source as well. Further reading: Microsoft Needs So Much Power to Train AI That It's Considering Small Nuclear ReactorsRead more of this story at Slashdot.
Boeing Cargo Plane Makes Emergency Landing in Miami After 'Engine Malfunction'
A Boeing cargo plane headed for Puerto Rico was diverted Thursday night after taking off from Miami International Airport because of engine trouble, according to an official and flight data. From a report: Atlas Air Flight 5Y095 landed safely after experiencing an "engine malfunction" shortly after departure, the airline said early Friday. It was unclear what kind of cargo the plane was carrying. Data collected by FlightAware, a flight tracking company, showed the aircraft was a Boeing 747-8 that left its gate at Miami International at 10:11 p.m. on Thursday and returned to the airport about 50 minutes later. The website also showed that the plane traveled 60 miles in total. Reuters adds: The Atlas Air Flight 5Y095 was on its way to San Juan, Puerto Rico from Miami International Airport on late Thursday evening. The pilot made a Mayday call around 0333 GMT to report an engine fire and requested to return back to the airport, according to multi-channel recordings of conversations between the air traffic control and the plane available on liveatc.net. "We have a engine fire," one of the plane crew said, disclosing that there were five people on board.Read more of this story at Slashdot.
David Mills, an Internet Pioneer, Has Died
David Mills, the man who invented NTP and wrote the implementation, has passed away. He also created the Fuzzballs and EGP, and helped make global-scale internetworking possible. Vint Cerf, sharing the news on the Internet Society mail group: His daughter, Leigh, just sent me the news that Dave passed away peacefully on January 17, 2024. He was such an iconic element of the early Internet. Network Time Protocol, the Fuzzball routers of the early NSFNET, INARG task force lead, COMSAT Labs and University of Delaware and so much more. R.I.P. Read more of this story at Slashdot.
Physicists Design a Way to Detect Quantum Behavior in Large Objects, Like Us
Researchers have developed a way to apply quantum measurement to an object regardless of its mass or energy. "Our proposed experiment can test if an object is classical or quantum by seeing if an act of observation can lead to a change in its motion," says physicist Debarshi Das from UCL. ScienceAlert reports: Quantum physics describes a Universe where objects aren't defined by a single measurement, but as a range of possibilities. An electron can be spinning up and down, or have a high chance of existing in some areas more than others, for example. In theory, this isn't limited to tiny things. Your own body can in effect be described as having a very high probability of sitting in that chair and a very (very!) low probability of being on the Moon. There is just one fundamental truth to remember -- you touch it, you've bought it. Observing an object's quantum state, whether an electron, or a person sitting in a chair, requires interactions with a measuring system, forcing it to yield a single outcome. There are ways to catch objects with their quantum pants still down, but they require keeping the object in a ground state -- super-cold, super-still, completely cut off from its environment. That's tricky to do with individual particles, and it gets a lot more challenging as the scale of the object goes up. The new proposal takes an entirely novel approach, one that combines assertions known as Leggett-Garg Inequalities and No-Signaling in Time conditions. In effect, these two concepts describe a familiar Universe, where a person on a chair is sitting there even if the room is dark and you can't see them. Switching on the light won't suddenly reveal they're actually under the bed. Should an experiment find evidence that somehow conflicts with these assertions, we just might be catching a glimpse of quantum fuzziness on a larger scale. The team proposes that objects can be observed as they oscillate on a pendulum, like a ball at the end of a piece of string. 
Light would then be flashed at the two halves of the experimental setup at different times -- counting as the observation -- and the results of the second flash would indicate if quantum behavior was happening, because the first flash would affect whatever was moving. We're still talking about a complex setup that would require some sophisticated equipment, and conditions akin to a ground state -- but through the use of motion and two measurements (light flashes), some of the restrictions on mass are removed. [...] "The next step is to try this proposed setup in an actual experiment," concludes the report. "The mirrors at the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US have already been proposed as suitable candidates for examination." "Those mirrors act as a single 10-kilogram (22-pound) object, quite a step up from the typical size of objects analyzed for quantum effects -- anything up to about a quintillionth of a gram." The findings have been published in the journal Physical Review Letters. Read more of this story at Slashdot.
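For context on the tests the proposal builds on: the simplest Leggett-Garg inequality has a standard textbook form. This formula is general background, not taken from the article or the paper it covers.

```latex
% Simplest Leggett-Garg inequality. Q(t) = \pm 1 is a dichotomic observable
% measured at three successive times t_1 < t_2 < t_3, and
% C_{ij} = \langle Q(t_i) Q(t_j) \rangle is the two-time correlator.
% Macrorealism plus noninvasive measurability imply:
K_3 \equiv C_{21} + C_{32} - C_{31} \le 1
% A two-level quantum system can violate this bound, reaching K_3 = 3/2.
```

An experiment whose measured correlators push $K_3$ above 1 is, on this reading, showing behavior no classical ("the person is on the chair whether or not the light is on") description can reproduce.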
Greenland's Ice Sheet Melting Faster Than Scientists Previously Estimated, Study Finds
Scientists have underestimated recent mass loss from Greenland by as much as 20%, finds a new study published in the journal Nature. CBS News reports: Since 1985, Greenland's ice sheet has lost approximately 5,091 square kilometers of ice, researchers found using satellite imagery. Scientists said earlier estimates did not track melting at the edges of the ice sheets, known as calving, which measures ice breaking off at the terminus of a glacier. Greenland's ice sheet loses about 193 square kilometers of ice per year, researchers found. Study co-author Chad Greene and his colleagues said they quantified the extent of calving, which increased the scope of ice mass lost. They combined "236,328 observations of glacier terminus positions" compiled from various public data sets to capture monthly ice melt. Their measurements found that between 1985 and 2022, almost every glacier in Greenland experienced some level of loss. [...] Researchers in the study noted that "this retreat does not appear to substantially contribute to sea level rise" because most of the glacier margins the scientists measured were already underwater. The loss, however, may play a part in ocean circulation patterns, and how heat energy is distributed across the planet. Read more of this story at Slashdot.
80 Years Later, GCHQ Releases New Images of Nazi Code-Breaking Computer
An anonymous reader quotes a report from Ars Technica: On Thursday, UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some in the public knew of the computers earlier (PDF), the UK did not formally acknowledge the project's existence until the 2000s. Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed the Allies to "read Hitler's mind," according to The Sydney Morning Herald. The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.) GCHQ's public sharing of archival documents includes several photos of the computer at different periods and a letter discussing Tommy Flowers' groundbreaking work that references the interception of "rather alarming German instructions." Following the war, the UK government issued orders for the destruction of most Colossus machines, and Flowers was required to turn over all related documentation. The GCHQ claims that the Colossus tech "was so effective, its functionality was still in use by us until the early 1960s." 
In the GCHQ press release, Director Anne Keast-Butler paid tribute to Colossus' place in the UK's lineage of technological innovation: "The creativity, ingenuity and dedication shown by Tommy Flowers and his team to keep the country safe were as crucial to GCHQ then as today."Read more of this story at Slashdot.
BMW Will Employ Figure's Humanoid Robot At South Carolina Plant
Figure's first humanoid robot will be coming to a BMW manufacturing facility in South Carolina. TechCrunch reports: BMW has not disclosed how many Figure 01 models it will deploy initially. Nor do we know precisely what jobs the robot will be tasked with when it starts work. Figure did, however, confirm with TechCrunch that it is beginning with an initial five tasks, which will be rolled out one at a time. While folks in the space have been cavalierly tossing out the term "general purpose" to describe these sorts of systems, it's important to temper expectations and point out that they will all arrive as single- or multi-purpose systems, growing their skillset over time. Figure CEO Brett Adcock likens the approach to an app store -- something that Boston Dynamics currently offers with its Spot robot via SDK. Likely initial applications include standard manufacturing tasks such as box moving, pick and place and pallet unloading and loading -- basically the sort of repetitive tasks for which factory owners claim to have difficulty retaining human workers. Adcock says that Figure expects to ship its first commercial robot within a year, an ambitious timeline even for a company that prides itself on quick turnaround times. The initial batch of applications will be largely determined by Figure's early partners like BMW. The system will, for instance, likely be working with sheet metal to start. Adcock adds that the company has signed up additional clients, but declined to disclose their names. It seems likely Figure will instead opt to announce each individually to keep the news cycle spinning in the intervening 12 months.Read more of this story at Slashdot.
Bing Gained Less Than 1% Market Share Since Adding Bing Chat, Report Finds
According to StatCounter, Bing's market share grew by less than 1 percentage point since launching Bing Chat (now known as Copilot) roughly a year ago. From a report: Bloomberg reported (paywalled) on the StatCounter data, saying, "But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement." Google still dominates the global search market with a 91.6% market share, followed by Bing's 3.4%, Yandex's 1.6% and Yahoo's 1.1%. "Other" search engines accounted for a total of just 2.2% of the global search market. You can view the raw chart and data from StatCounter here. Read more of this story at Slashdot.
Google To Invest $1 Billion In UK Data Center
Google announced today that it will invest $1 billion building a data center near London. Reuters reports: The data centre, on a 33-acre (13-hectare) site bought by Google in 2020, will be located in the town of Waltham Cross, about 15 miles north of central London, the Alphabet-owned company said in a statement. The British government, which is pushing for investment by businesses to help fund new infrastructure, particularly in growth industries like technology and artificial intelligence, described Google's investment as a "huge vote of confidence" in the UK. "Google's $1 billion investment is testament to the fact that the UK is a centre of excellence in technology and has huge potential for growth," Prime Minister Rishi Sunak said in the Google statement. The investment follows Google's $1 billion purchase of a central London office building in 2022, close to Covent Garden, and another site in nearby King's Cross, where it is building a new office and where its AI company DeepMind is also based. In November, Microsoft announced plans to pump $3.2 billion into Britain over the next three years. Read more of this story at Slashdot.
Remote Work Doesn't Seem To Affect Productivity, Fed Study Finds
An anonymous reader quotes a report released Tuesday (Jan. 16th) by the Federal Reserve Bank of San Francisco: The U.S. labor market experienced a massive increase in remote and hybrid work during the COVID-19 pandemic. At its peak, more than 60% of paid workdays were done remotely -- compared with only 5% before the pandemic. As of December 2023, about 30% of paid workdays are still done remotely (Barrero, Bloom, and Davis 2021). Some reports have suggested that teleworking might either boost or harm overall productivity in the economy. And certainly, overall productivity statistics have been volatile. In 2020, U.S. productivity growth surged. This led to optimistic views in the media about the gains from forced digital innovation and the productivity benefits of remote work. However, the surge ended, and productivity growth has retreated to roughly its pre-pandemic trend. Fernald and Li (2022) find from aggregate data that this pattern was largely explained by a predictable cyclical effect from the economy's downturn and recovery. In aggregate data, it thus appears difficult to see a large cumulative effect -- either positive or negative -- from the pandemic so far. But it is possible that aggregate data obscure the effects of teleworking. For example, factors beyond telework could have affected the overall pace of productivity growth. Surveys of businesses have found mixed effects from the pandemic, with many businesses reporting substantial productivity disruptions. In this Economic Letter, we ask whether we can detect the effects of remote work in the productivity performance of different industries. There are large differences across sectors in how easy it is to work off-site. 
Thus, if remote work boosts productivity in a substantial way, then it should improve productivity performance, especially in those industries where teleworking is easy to arrange and widely adopted, such as professional services, compared with those where tasks need to be performed in person, such as restaurants. After controlling for pre-pandemic trends in industry productivity growth rates, we find little statistical relationship between telework and pandemic productivity performance. We conclude that the shift to remote work, on its own, is unlikely to be a major factor explaining differences across sectors in productivity performance. By extension, despite the important social and cultural effects of increased telework, the shift is unlikely to be a major factor explaining changes in aggregate productivity. [...] The shift to remote and hybrid work has reshaped society in important ways, and these effects are likely to continue to evolve. For example, with less time spent commuting, some people have moved out of cities, and the lines between work and home life have blurred. Despite these noteworthy effects, in this Letter we find little evidence in industry data that the shift to remote and hybrid work has either substantially held back or boosted the rate of productivity growth. Our findings do not rule out possible future changes in productivity growth from the spread of remote work. The economic environment has changed in many ways during and since the pandemic, which could have masked the longer-run effects of teleworking. Continuous innovation is the key to sustained productivity growth. Working remotely could foster innovation through a reduction in communication costs and improved talent allocation across geographic areas. However, working off-site could also hamper innovation by reducing in-person office interactions that foster idea generation and diffusion. 
The future of work is likely to be a hybrid format that balances the benefits and limitations of remote work.Read more of this story at Slashdot.
IBM Scraps Rewards Program For Staff Inventions, Wipes Away Cash Points
Thomas Claburn reports via The Register: IBM has canceled a program that rewarded inventors at Big Blue for patents or publications, leaving some angry that they are missing out on potential bonuses. By canceling the scheme, a source told The Register, IBM has eliminated a financial liability, voiding the accrued, unredeemed credits issued to program participants, which could have been converted into cash awards. For years, IBM has sponsored an "Invention Achievement Award Plan" to incentivize employee innovation. In exchange for filing patents, or for publishing articles that served as defense against rival patents, IBM staff were awarded points that led to recognition and potentially cash bonuses. According to documentation seen by The Register, "Invention points are awarded to all inventors listed on a successful disclosure submission." One point was awarded for publishing. Three points were awarded for filing a patent, or four if the filing was deemed high value. For accruing 12 points, program participants would get a payout. "Inventors reach an invention plateau for every 12 points they achieve -- which must include at least one file decision," the rules state. And for each plateau achieved, IBM would pay its inventors $1,200 in recognition of their efforts. No longer, it seems. IBM canceled the program at the end of 2023 and replaced it with a new one that uses a different, incompatible point system called BluePoints. "The previous Invention Achievement Award Plan will be sunset at midnight (eastern time) on December 31st, 2023," company FAQs explain. "Since Plateau awards are one of the items being sunset, plateau levels must be obtained on or before December 31, 2023 to be eligible for the award. Any existing plateau points that have not been applied will not be converted to BluePoints." 
We're told that IBM's invention review process could take months, meaning that employees just didn't have time between the announcement and the program sunset to pursue the next plateau and cash out. Those involved in the program evidently were none too pleased by the points grab. "My opinion...the invention award program was buggered a long time [ago]," said a former IBM employee. "It rewarded words on a page instead of true innovation. [Former CEO] Ginni [Rometty] made it worse by advocating the program to fluff up young egos."Read more of this story at Slashdot.
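The plan's point arithmetic is simple enough to sketch. The values below (1 point per publication, 3 per patent filing, 4 per high-value filing, $1,200 per 12-point plateau) come from the rules quoted above; the function name, the event encoding, and the "no plateaus at all without a filing" simplification are this sketch's own, not IBM's actual system.

```python
# Illustrative sketch of the now-sunset Invention Achievement Award Plan.
# Point values and payout are from the report; everything else is hypothetical.
POINTS = {"publish": 1, "file": 3, "file_high_value": 4}
PLATEAU_POINTS = 12
PLATEAU_PAYOUT = 1200  # dollars per plateau reached


def plateau_awards(events):
    """events: iterable of "publish", "file", or "file_high_value" strings.

    Returns (plateaus_reached, total_payout_dollars). The real plan required
    each 12-point plateau to include at least one file decision; this sketch
    simplifies that to "no plateaus at all without any filing".
    """
    events = list(events)
    total = sum(POINTS[e] for e in events)
    if not any(e.startswith("file") for e in events):
        return 0, 0
    plateaus = total // PLATEAU_POINTS
    return plateaus, plateaus * PLATEAU_PAYOUT
```

Under these rules an inventor with four ordinary filings (12 points) reaches one plateau and $1,200, while twelve publications alone pay nothing -- which is the shape of the complaint in the story: slow review times meant many inventors were stranded short of their next plateau when the program was sunset.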
Google Is Rolling Out WebGPU For Next-Gen Gaming On Android
In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan. Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo. Next up: WebGPU for Chrome on Linux.Read more of this story at Slashdot.
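As a rough illustration of what that lower-level GPU access looks like from a web page, here is a minimal WebGPU availability probe. It uses the standard `navigator.gpu` entry point from the WebGPU API; the returned status strings are made up for this example.

```javascript
// Minimal WebGPU availability probe, as a page script might run it.
// navigator.gpu is the WebGPU entry point; in browsers (or runtimes)
// without WebGPU it is simply absent, so we fall back gracefully.
async function checkWebGPU() {
  if (typeof navigator === "undefined" || !("gpu" in navigator)) {
    return "WebGPU not supported";
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return "No suitable GPU adapter";
  }
  const device = await adapter.requestDevice();
  device.destroy(); // release the device once the capability check is done
  return "WebGPU ready";
}
```

On Chrome 121+ on a supported Android device this should resolve to "WebGPU ready"; elsewhere the probe reports why not, which is the usual pattern for progressively enabling richer rendering paths.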
Reddit Seeks To Launch IPO In March
According to Reuters, Reddit plans to launch its initial public offering (IPO) in March, "moving forward with a listing it has been eyeing for more than three years." From the report: It would be the first IPO of a major social media company since Pinterest's debut in 2019, and would come as Reddit and its peers face stiff competition for advertising dollars from the likes of TikTok and Facebook. The offering would also test the willingness of some Reddit users to back the company's stock market debut. Reddit, which filed confidentially for its IPO in December 2021, is planning to make its public filing in late February, launch its roadshow in early March, and complete the IPO by the end of March, two of the sources said. The San Francisco-based company, which was valued at about $10 billion in a funding round in 2021, is seeking to sell about 10% of its shares in the IPO, the sources added. It will decide on what IPO valuation it will pursue closer to the time of the listing, according to the sources. Read more of this story at Slashdot.
'Stablecoins' Enabled $40 Billion In Crypto Crime Since 2022
An anonymous reader quotes a report from Wired: Stablecoins, cryptocurrencies pegged to a stable value like the US dollar, were created with the promise of bringing the frictionless, border-crossing fluidity of Bitcoin to a form of digital money with far less volatility. That combination has proved to be wildly popular, rocketing the total value of stablecoin transactions since 2022 past even that of Bitcoin itself. It turns out, however, that as stablecoins have become popular among legitimate users over the past two years, they were even more popular among a different kind of user: those exploiting them for billions of dollars of international sanctions evasion and scams. As part of its annual crime report, cryptocurrency-tracing firm Chainalysis today released new numbers on the disproportionate use of stablecoins for both of those massive categories of illicit crypto transactions over the last year. By analyzing blockchains, Chainalysis determined that stablecoins were used in fully 70 percent of crypto scam transactions in 2023, 83 percent of crypto payments to sanctioned countries like Iran and Russia, and 84 percent of crypto payments to specifically sanctioned individuals and companies. Those numbers far outstrip stablecoins' growing overall use -- including for legitimate purposes -- which accounted for 59 percent of all cryptocurrency transaction volume in 2023. In total, Chainalysis measured $40 billion in illicit stablecoin transactions in 2022 and 2023 combined. The largest single category of that stablecoin-enabled crime was sanctions evasion. In fact, across all cryptocurrencies, sanctions evasion accounted for more than half of the $24.2 billion in criminal transactions Chainalysis observed in 2023, with stablecoins representing the vast majority of those transactions. [...] 
Chainalysis concedes that the analysis in its report excludes some cryptocurrencies like Monero and Zcash that are designed to be harder or impossible to trace with blockchain analysis. It also says it based its numbers on the type of cryptocurrency sent directly to an illicit actor, which may leave out other currencies used in money laundering processes that repeatedly swap one type of cryptocurrency for another to make tracing more difficult. "Whether it's an individual located in Iran or a bad guy trying to launder money -- either way, there's a benefit to the stability of the US dollar that people are looking to obtain," says Andrew Fierman, Chainalysis' head of sanctions strategy. "If you're in a jurisdiction where you don't have access to the US dollar due to sanctions, stablecoins become an interesting play." Fierman points to Nobitex, the largest cryptocurrency exchange operating in the sanctioned country of Iran, as well as Garantex, a notorious exchange based in Russia that has been specifically sanctioned for its widespread criminal use. According to Chainalysis, "Stablecoin usage on Nobitex outstrips bitcoin by a 9:1 ratio, and on Garantex by a 5:1 ratio," reports Wired. "That's a stark difference from the roughly 1:1 ratio between stablecoins and bitcoins on a few nonsanctioned mainstream exchanges that Chainalysis checked for comparison."Read more of this story at Slashdot.
Coursera Saw Signups For AI Courses Every Minute in 2023
U.S. edutech platform Coursera added a new user every minute on average for its AI courses in 2023, CEO Jeff Maggioncalda said on Thursday, in a clear sign of people upskilling to tap a potential boom in generative AI. Reuters: The technology behind OpenAI's ChatGPT has taken the world by storm and sparked a race among companies to roll out their own versions of the viral chatbot. "I'd say the real hotspot is generative AI because it affects so many people," he told Reuters in an interview at the World Economic Forum in Davos. Coursera is looking to offer AI courses along with companies that are the frontrunners in the AI race, including OpenAI and Google's DeepMind, Maggioncalda said. Investors had earlier feared that apps based on generative AI might replace ed-tech firms, but on the contrary the technology has encouraged more people to upskill, benefiting companies such as Coursera. The company has more than 800 AI courses and saw more than 7.4 million enrollments last year. Every student on the platform gets access to a ChatGPT-like AI assistant called "Coach" that provides personalized tutoring. Read more of this story at Slashdot.
Mark Zuckerberg's New Goal is Creating AGI
OpenAI's stated mission is to create artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems." [...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use an LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway. 
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.Read more of this story at Slashdot.
Microsoft Makes Its AI-Powered Reading Tutor Free
Microsoft today made Reading Coach, its AI-powered tool that provides learners with personalized reading practice, available at no cost to anyone with a Microsoft account. From a report: As of this morning, Reading Coach is accessible on the web in preview -- a Windows app is forthcoming. And soon (in late spring), Reading Coach will integrate with learning management systems such as Canvas, Microsoft says. Reading Coach builds on Reading Progress, a plug-in for the education-focused version of Microsoft Teams, Teams for Education, designed to help teachers foster reading fluency in their students. Inspired by the success of Reading Progress (evidently), Microsoft launched Reading Coach in 2022 as a part of Teams for Education and Immersive Reader, the company's cross-platform assistive service for language and reading comprehension. Read more of this story at Slashdot.
Coinbase Compares Buying Crypto To Collecting Beanie Babies
Coinbase said buying cryptocurrency on an exchange was more like collecting Beanie Babies than investing in a stock or bond. From a report: The biggest US crypto exchange made the comparison Wednesday in a New York federal court hearing. Coinbase was arguing for the dismissal of a Securities and Exchange Commission lawsuit accusing it of selling unregistered securities. William Savitt, a lawyer for Coinbase, told US District Judge Katherine Polk Failla that tokens trading on the exchange aren't securities subject to SEC jurisdiction because buyers don't gain any rights as a part of their purchases, as they do with stocks or bonds. "It's the difference between buying Beanie Babies Inc and buying Beanie Babies," Savitt said. The question of whether digital tokens are securities has divided courts.Read more of this story at Slashdot.
Hospitals Owned By Private Equity Are Harming Patients, Reports Find
Private equity firms are increasingly buying hospitals across the US, and when they do, patients suffer, according to two separate reports. Specifically, the equity firms cut corners, slash services, lay off staff, lower quality of care, take on substantial debt, and reduce charity care, leading to lower ratings and more medical errors, the reports collectively find. ArsTechnica: Last week, the financial watchdog organization Private Equity Stakeholder Project (PESP) released a report delving into the state of two of the nation's largest hospital systems, Lifepoint and ScionHealth -- both owned by private equity firm Apollo Global Management. Through those two systems, Apollo runs 220 hospitals in 36 states, employing around 75,000 people. The report found that some of Apollo's hospitals were among the worst in their respective states, based on a ranking by The Lown Institute Hospital Index. The index ranks hospitals and health systems based on health equity, value, and outcomes, PESP notes. The hospitals also have dismal readmission rates and government rankings. The Center for Medicare and Medicaid Services (CMS) ranks hospitals on a one- to five-star system, with a national average of 3.2 stars overall and about 30 percent of hospitals at two stars or below. Apollo's overall average is 2.8 stars, with nearly 40 percent of its hospitals at two stars or below. The other report, a study published in JAMA late last month, found that the rate of serious medical errors and health complications increases among patients in the first few years after private equity firms take over. The study examined Medicare claims from 51 private equity-run hospitals and 259 matched control hospitals. Specifically, the study, led by researchers at Harvard University, found that patients admitted to private equity-owned hospitals had a 25 percent increase in developing hospital-acquired conditions compared with patients in the control hospitals.
In private equity hospitals, patients experienced a 27 percent increase in falls, a 38 percent increase in central-line bloodstream infections (despite placing 16 percent fewer central lines than control hospitals), and a doubling of surgical site infections.