In a regulatory filing today, Microsoft said that a Russian intelligence group hacked into some of the company's top executives' email accounts. CNBC reports: Nobelium, the same group that breached government supplier SolarWinds in 2020, carried out the attack, which Microsoft detected last week, according to the company. The announcement comes after new U.S. requirements for disclosing cybersecurity incidents went into effect. A Microsoft spokesperson said that while the company does not believe the attack had a material impact, it still wanted to honor the spirit of the rules. In late November, the group accessed "a legacy non-production test tenant account," Microsoft's Security Response Center wrote in a blog post. After gaining access, the group "then used the account's permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents," the corporate unit wrote. The company's senior leadership team, including finance chief Amy Hood and president Brad Smith, regularly meets with CEO Satya Nadella. Microsoft said it has not found signs that Nobelium had accessed customer data, production systems or proprietary source code. The U.S. government and Microsoft consider Nobelium to be part of the Russian foreign intelligence service SVR. The hacking group was responsible for one of the most significant breaches in U.S. history when it added malicious code to updates to SolarWinds' Orion software, which some U.S. government agencies were using. Microsoft itself was ensnared in the hack. Nobelium, also known as APT29 or Cozy Bear, is a sophisticated hacking group that has attempted to breach the systems of U.S. allies and the Department of Defense. Microsoft also uses the name Midnight Blizzard to identify Nobelium.
It was also implicated alongside another Russian hacking group in the 2016 breach of the Democratic National Committee's systems. Read more of this story at Slashdot.
Despite many questions going unanswered, a startup called Rabbit sold out of its pocket AI companion a day after it debuted at CES 2024 last week. Now, the company has finally shared more details about which large language model (LLM) will be powering the device. According to Engadget, the provider in question is Perplexity, "a San Francisco-based startup with ambitions to overtake Google in the AI space." From the report: Perplexity will be providing up-to-date search results via Rabbit's $199 orange brick -- without the need for a subscription. That said, the first 100,000 R1 buyers will receive one year of Perplexity Pro subscription -- normally costing $200 -- for free. This advanced service adds file upload support, a daily quota of over 300 complex queries and the ability to switch to other AI models (GPT-4, Claude 2.1 or Gemini), though these don't necessarily apply to the R1's use case.
According to Bloomberg (paywalled), OpenAI CEO Sam Altman is reportedly raising billions to develop a global network of chip fabrication factories, collaborating with leading chip manufacturers to address the high demand for chips required for advanced AI models. The Verge reports: A major cost and limitation for running AI models is having enough chips to handle the computations behind bots like ChatGPT or DALL-E that answer prompts and generate images. Nvidia's value rose above $1 trillion for the first time last year, partly due to a virtual monopoly it has as GPT-4, Gemini, Llama 2, and other models depend heavily on its popular H100 GPUs. Accordingly, the race to manufacture more high-powered chips to run complex AI systems has only intensified. The limited number of fabs capable of making high-end chips drives Altman, or anyone else seeking to produce new chips, to bid for capacity years before it is needed. And going against the likes of Apple requires deep-pocketed investors who will front costs that the nonprofit OpenAI still can't afford. SoftBank Group and Abu Dhabi-based AI holding company G42 have reportedly been in talks about raising money for Altman's project.
An anonymous reader quotes a report from IEEE Spectrum: Researchers at Georgia Tech, in Atlanta, have developed what they are calling the world's first functioning graphene-based semiconductor. This breakthrough holds the promise to revolutionize the landscape of electronics, enabling faster traditional computers and offering a new material for future quantum computers. The research, published on January 3 in Nature and led by Walt de Heer, a professor of physics at Georgia Tech, focuses on leveraging epitaxial graphene, a crystal structure of carbon chemically bonded to silicon carbide (SiC). This novel semiconducting material, dubbed semiconducting epitaxial graphene (SEC) -- or alternatively, epigraphene -- boasts enhanced electron mobility compared with that of traditional silicon, allowing electrons to traverse with significantly less resistance. The outcome is transistors capable of operating at terahertz frequencies, offering speeds 10 times as fast as those of the silicon-based transistors used in current chips. De Heer describes the method used as a modified version of an extremely simple technique that has been known for over 50 years. "When silicon carbide is heated to well over 1,000 °C, silicon evaporates from the surface, leaving a carbon-rich surface which then forms into graphene," says de Heer. This heating step is done with an argon-filled quartz tube in which a stack of two SiC chips is placed in a graphite crucible, according to de Heer. Then a high-frequency current is run through a copper coil around the quartz tube, which heats the graphite crucible through induction. The process takes about an hour. De Heer added that the SEC produced this way is essentially charge neutral, and when exposed to air, it will spontaneously be doped by oxygen. This oxygen doping is easily removed by heating it at about 200 °C in a vacuum. "The chips we use cost about [US] $10, the crucible about $1, and the quartz tube about $10," said de Heer. [...]
De Heer and his research team concede, however, that further exploration is needed to determine whether graphene-based semiconductors can surpass the current superconducting technology used in advanced quantum computers. The Georgia Tech team does not envision incorporating graphene-based semiconductors into standard silicon or compound semiconductor lines. Instead, they are aiming for a paradigm shift beyond silicon, utilizing silicon carbide. They are developing methods, such as coating SEC with boron nitride, to protect and enhance its compatibility with conventional semiconductor lines. Comparing their work with commercially available graphene field-effect transistors (GFETs), de Heer explains that there is a crucial difference: "Conventional GFETs do not use semiconducting graphene, making them unsuitable for digital electronics requiring a complete transistor shutdown." He says that the SEC developed by his team allows for a complete shutdown, meeting the stringent requirements of digital electronics. De Heer says that it will take time to develop this technology. "I compare this work to the Wright brothers' first 100-meter flight. It will mainly depend on how much work is done to develop it."
U.S. lawmakers have banned the Defense Department from buying batteries produced by China's biggest manufacturers. "The rule implemented as part of the latest National Defense Authorization Act that passed on Dec. 22 will prevent procuring batteries from Contemporary Amperex Technology Co. Ltd., BYD Co. and four other Chinese companies beginning in October 2027," reports Bloomberg. From the report: The measure doesn't extend to commercial purchases by companies such as Ford, which is licensing technology from CATL to build electric-vehicle batteries in Michigan. Tesla also sources some of its battery cells from BYD, which became the new top-selling EV maker globally in the fourth quarter. The four other manufacturers whose batteries will be banned are Envision Energy Ltd., EVE Energy Co., Gotion High Tech Co. and Hithium Energy Storage Technology Co. The decision still requires Pentagon officials to more clearly define the reach of the new rule. It adds to previous provisions outlined by the NDAA that decoupled the Defense Department's supply chain from China, including restrictions on use of Chinese semiconductors. While the Defense Department bans apply strictly to defense procurement, industries and lawmakers closely follow the rules as a guide for what materials, products and companies to trust in their own course of business.
Jay Peters reports via The Verge: Plex, known for its media server software and as a place to watch ad-supported content, is going to launch a store to buy and rent movies and TV shows in early February, executives told Lowpass' Janko Roettgers. "Most studios" are lined up for the store's launch, and there are "plans to complete the catalog soon after," Roettgers says. The store will also integrate with Plex features like its watchlists for movies. Roettgers points out that Plex has announced plans in both 2020 and 2023 to launch a movie / TV store -- hopefully Plex is truly ready to do so this time. Plex chief product officer Scott Olechowski told Roettgers that more changes are coming to Plex down the line, including a "pretty major UX refresh" and more social features like public profiles.
Geoffrey.landis writes: The Japan SLIM spacecraft has successfully landed on the moon, but power problems mean it may be a short mission. The good news is that the landing was successful, making Japan only the fifth nation to successfully make a lunar landing, and the ultra-miniature rover and the hopper both deployed. The bad news is that the solar arrays aren't producing power, and unless they can fix the problem in the next few hours, the batteries will be depleted and the spacecraft will die. But, short mission or long, hurrah for Japan for being the fifth country to successfully land a mission on the surface of the moon (on their third try; two previous missions didn't make it). It's a rather amazing mission. I've never seen a spacecraft concept that lands under rocket power vertically but then rotates over to rest horizontally on the surface.
An anonymous reader shares a report: These cafes had all adopted similar aesthetics and offered similar menus, but they hadn't been forced to do so by a corporate parent, the way a chain like Starbucks replicated itself. Instead, despite their vast geographical separation and total independence from each other, the cafes had all drifted toward the same end point. The sheer expanse of sameness was too shocking and new to be boring. Of course, there have been examples of such cultural globalisation going back as far as recorded civilisation. But the 21st-century generic cafes were remarkable in the specificity of their matching details, as well as the sense that each had emerged organically from its location. They were proud local efforts that were often described as "authentic," an adjective that I was also guilty of overusing. When travelling, I always wanted to find somewhere "authentic" to have a drink or eat a meal. If these places were all so similar, though, what were they authentic to, exactly? What I concluded was that they were all authentically connected to the new network of digital geography, wired together in real time by social networks. They were authentic to the internet, particularly the 2010s internet of algorithmic feeds. In 2016, I wrote an essay titled Welcome to AirSpace, describing my first impressions of this phenomenon of sameness. "AirSpace" was my coinage for the strangely frictionless geography created by digital platforms, in which you could move between places without straying beyond the boundaries of an app, or leaving the bubble of the generic aesthetic. The word was partly a riff on Airbnb, but it was also inspired by the sense of vaporousness and unreality that these places gave me. They seemed so disconnected from geography that they could float away and land anywhere else. When you were in one, you could be anywhere. My theory was that all the physical places interconnected by apps had a way of resembling one another. 
In the case of the cafes, the growth of Instagram gave international cafe owners and baristas a way to follow one another in real time and gradually, via algorithmic recommendations, begin consuming the same kinds of content. One cafe owner's personal taste would drift toward what the rest of them liked, too, eventually coalescing. On the customer side, Yelp, Foursquare and Google Maps drove people like me -- who could also follow the popular coffee aesthetics on Instagram -- toward cafes that conformed with what they wanted to see by putting them at the top of searches or highlighting them on a map. To court the large demographic of customers moulded by the internet, more cafes adopted the aesthetics that already dominated on the platforms. Adapting to the norm wasn't just following trends but making a business decision, one that the consumers rewarded. When a cafe was visually pleasing enough, customers felt encouraged to post it on their own Instagram in turn as a lifestyle brag, which provided free social media advertising and attracted new customers. Thus the cycle of aesthetic optimisation and homogenisation continued.
Seagate this week unveiled the industry's first hard disk drive platform that uses heat-assisted magnetic recording (HAMR). Tom's Hardware: The new Mozaic 3+ platform relies on several all-new technologies, including new media, new write and read heads, and a brand-new controller. The platform will be used for Seagate's upcoming Exos hard drives for cloud datacenters with a 30TB capacity and higher. Heat-assisted magnetic recording is meant to radically increase areal recording density of magnetic media by making writes while the recording region is briefly heated to a point where its magnetic coercivity drops significantly. Seagate's Mozaic 3+ uses 10 glass disks with a magnetic layer consisting of an iron-platinum superlattice structure that ensures both longevity and smaller media grain size compared to typical HDD platters. To record the media, the platform uses a plasmonic writer sub-system with a vertically integrated nanophotonic laser that heats the media before writing. Because individual grains are so small with the new media, their individual magnetic signatures are lower, whereas the magnetic inter-track interference (ITI) effect is somewhat higher. As a result, Seagate had to introduce its new Gen 7 Spintronic Reader, which features the "world's smallest and most sensitive magnetic field reading sensors," according to the company. Because Seagate's new Mozaic 3+ platform deals with new media with a very small grain size, an all-new writer, and a reader that features multiple tiny magnetic field readers, it also requires a lot of compute horsepower to orchestrate the drive's work. Therefore, Seagate has equipped the Mozaic 3+ platform with an all-new controller made on a 12nm fabrication process.
China's Huawei will not support Android apps on the latest iteration of its in-house Harmony operating system, domestic financial media Caixin reported, as the company looks to bolster its own software ecosystem. From a report: The company plans to roll out a developer version of its HarmonyOS Next platform in the second quarter of this year followed by a full commercial version in the fourth quarter, it said in a company statement highlighting the launch event for the platform in its home city of Shenzhen on Thursday. Huawei first unveiled its proprietary Harmony system in 2019 and prepared to launch it on some smartphones a year later after U.S. restrictions cut its access to Google's technical support for its Android mobile OS. However, earlier versions of Harmony allowed apps built for Android to be used on the system, which will no longer be possible, according to Caixin.
Organized crime is mining sand from rivers and coasts to feed demand worldwide, ruining ecosystems and communities. Can it be stopped? Scientific American reports: Very few people are looking closely at the illegal sand system or calling for changes, however, because sand is a mundane resource. Yet sand mining is the world's largest extraction industry because sand is a main ingredient in concrete, and the global construction industry has been soaring for decades. Every year the world uses up to 50 billion metric tons of sand, according to a United Nations Environment Program report. The only natural resource more widely consumed is water. A 2022 study by researchers at the University of Amsterdam concluded that we are dredging river sand at rates that far outstrip nature's ability to replace it, so much so that the world could run out of construction-grade sand by 2050. The U.N. report confirms that sand mining at current rates is unsustainable. The greatest demand comes from China, which used more cement in three years (6.6 gigatons from 2011 through 2013) than the U.S. used in the entire 20th century (4.5 gigatons), notes Vince Beiser, author of The World in a Grain. Most sand gets used in the country where it is mined, but with some national supplies dwindling, imports reached $1.9 billion in 2018, according to Harvard's Atlas of Economic Complexity. Companies large and small dredge up sand from waterways and the ocean floor and transport it to wholesalers, construction firms and retailers. Even the legal sand trade is hard to track. Two experts estimate the global market at about $100 billion a year, yet the U.S. Geological Survey Mineral Commodity Summaries indicates the value could be as high as $785 billion. Sand in riverbeds, lake beds and shorelines is the best for construction, but scarcity opens the market to less suitable sand from beaches and dunes, much of it scraped illegally and cheaply. 
With a shortage looming and prices rising, sand from Moroccan beaches and dunes is sold inside the country and is also shipped abroad, using organized crime's extensive transport networks, Abderrahmane has found. More than half of Morocco's sand is illegally mined, he says.
A resident in Virginia has urged the Federal Communications Commission to reconsider canceling $886 million in federal funding for SpaceX's Starlink system. But rival satellite company Viasat has gone out of its way to oppose the citizen-led petition. APCMag: On Jan. 1, the FCC received a petition from the Virginia resident Greg Weisiger asking the commission to reconsider denying the $886 million to SpaceX. "Petitioner is at an absolute loss to understand the Commission's logic with these denials," wrote Weisiger, who lives in Midlothian, Virginia. "It is abundantly clear that Starlink has a robust, reliable, affordable service for rural and insular locations in all states and territories." The petition arrived a few weeks after the FCC denied SpaceX's appeal to receive $886 million from the commission's Rural Digital Opportunity Fund, which is designed to subsidize 100Mbps to gigabit broadband across the US. SpaceX wanted to use the funds to expand Starlink access in rural areas. But the FCC ruled that "Starlink is not reasonably capable of offering the required high-speed, low latency service throughout the areas where it won auction support." Weisiger disagrees. In his petition, he writes that the FCC's decision will deprive him of federal support to bring high-speed internet to his home. "Thousands of other Virginia locations were similarly denied support," he added.
Microsoft is getting ready to place Teams meeting reminders on the Start menu in Windows 11. From a report: The software giant has started testing a new build of Windows 11 with Dev Channel testers that includes a Teams meeting reminder in the recommended section of the Start menu. Microsoft is also testing an improved way to instantly access new photos and screenshots from Android devices. [...] The Teams meeting reminders will be displayed alongside the regular recently used and recommended file list on the Start menu, and they won't be displayed for non-business users of Windows 11.
A new survey of thousands of game development professionals finds a near-majority saying generative AI tools are already in use at their workplace. But a significant minority of developers say their company has no interest in generative AI tools or has outright banned their use. From a report: The Game Developers Conference's 2024 State of the Industry report, released Thursday, aggregates the thoughts of over 3,000 industry professionals as of last October. While the annual survey (conducted in conjunction with research partner Omdia) has been running for 12 years, this is the first time respondents were asked directly about their use of generative AI tools such as ChatGPT, DALL-E, GitHub Copilot, and Adobe Generative Fill. Forty-nine percent of the survey's developer respondents said that generative AI tools are currently being used in their workplace. That near-majority includes 31 percent (of all respondents) that say they use those tools themselves and 18 percent that say their colleagues do. The survey also found that different studio departments showed different levels of willingness to embrace AI tools. Forty-four percent of employees in business and finance said they were using AI tools, for instance, compared to just 16 percent in visual arts and 13 percent in "narrative/writing."
The credibility of the Cop28 agreement to "transition away" from fossil fuels rides on the world's biggest historical polluters like the US, UK and Canada rethinking current plans to expand oil and gas production, according to the climate negotiator representing 135 developing countries. The Guardian: In an exclusive interview with the Guardian, Pedro Pedroso, the outgoing president of the G77 plus China bloc of developing countries, warned that the landmark deal made at last year's climate talks in Dubai risked failing. "We achieved some important outcomes at Cop28 but the challenge now is how we translate the deal into meaningful action for the people," Pedroso said. "As we speak, unless we lie to ourselves, none of the major developed countries, who are the most important historical emitters, have policies that are moving away from fossil fuels, on the contrary, they are expanding," said Pedroso. These countries must also deliver adequate finance for poorer nations to transition and adapt to the climate crisis. In Dubai, Sultan Al Jaber, Cop28 president and chief of the Emirates national oil company, was subject to widespread scrutiny -- understandable given that the UAE is the world's seventh biggest oil producer with the fifth largest gas reserves. Yet the US was by far the biggest oil and gas producer in the world last year -- setting a new record, during a year that was the hottest ever recorded. The US, UK, Canada, Australia and Norway account for 51% of the total planned oil and gas expansion by 2050, according to research by Oil Change International. "It's very easy to label some emerging economies, especially the Gulf states, as climate villains, but this is very unfair by countries with historic responsibilities -- who keep trying to scapegoat and deviate the attention away from themselves.
Just look at US fossil fuel plans and the UK's new drilling licenses for the North Sea, and Canada which has never met any of its emission reduction goals, not once," said Pedroso, a Cuban diplomat.
Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation. The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data.
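The McCulloch-Pitts unit described above, a processing element that receives zeros and ones and fires when a simple activation rule is met, can be sketched in a few lines of Python. The threshold form and the gate wirings below are standard textbook illustrations, not taken from the essay:

```python
# A minimal McCulloch-Pitts neuron: sum the weighted binary inputs
# and output 1 ("fire") only if the sum reaches a fixed threshold.
def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable thresholds, single units implement basic logic gates.
def AND(a, b):
    return mcculloch_pitts([a, b], [1, 1], threshold=2)  # both inputs must fire

def OR(a, b):
    return mcculloch_pitts([a, b], [1, 1], threshold=1)  # either input suffices

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

In a trained network, the fixed weights and thresholds here would instead be the parameters adjusted against data, which is the possibility McCulloch and Pitts themselves never pursued.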
The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, this comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks. This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place. Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere.
If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
Airbus cemented its position last week as the world's biggest plane maker for the fifth straight year, announcing that it had delivered more aircraft and secured more orders than Boeing in 2023. At the same time, Boeing was trying to put out a huge public-relations and safety crisis caused by a harrowing near disaster involving its 737 Max line of airliners. In the long-running duel between the two aviation rivals, Airbus has pulled far ahead. The New York Times: "What used to be a duopoly has become two-thirds Airbus, one-third Boeing," said Richard Aboulafia, the managing director of AeroDynamic Advisory in Washington, D.C. "A lot of people, whether investors, financiers or customers, are looking at Airbus and seeing a company run by competent people," he said. "The contrast with Boeing is fairly profound." The incident involving the 737 Max 9, in which a hole blew open in the fuselage of an Alaska Airlines flight in midair, was the latest in a string of safety lapses in Boeing's workhorse aircraft -- including two fatal crashes in 2018 and 2019 -- that are indirectly helping propel the fortunes of the European aerospace giant. As the Federal Aviation Administration widens its scrutiny of Max 9 production, Airbus's edge is likely to sharpen. Airlines are embarking on massive expansions of their fleets to meet a postpandemic surge in the demand for global air travel, and are considering which company to turn to.
Apple committed to address antitrust concerns posed by the European Commission surrounding its popular Apple Pay app, including allowing access to third-party mobile wallet and payment services. WSJ: The U.S. tech giant has agreed to allow companies' apps to make contactless payments on devices that use the iOS system, such as iPhones, for free without the need to use Apple Pay or Apple Wallet, the EU's executive arm said Friday.
An anonymous reader shares an essay: No one clicks a webpage hoping to learn which cat can haz cheeseburger. Weirdos, maybe. Sickos. No, we get our content from a For You Page now -- algorithmically selected videos and images made by our favorite creators, produced explicitly for our preferred platform. Which platform doesn't matter much. So long as it's one of the big five. Creators churn out content for all of them. It's a technical marvel, that internet. Something so mindblowingly impressive that if you showed it to someone even thirty years ago, their face would melt the fuck off. So why does it feel like something's missing? Why are we all so collectively unhappy with the state of the web? A tweet went viral this Thanksgiving when a Twitter user posed a question to their followers. (The tweet said: "It feels like there are no websites anymore. There used to be so many websites you could go on. Where did all the websites go?") A peek at the comments, and I could only assume the tweet struck a nerve. Everyone had their own answer. Some comments blamed the app-ification of the web. "Everything is an app now!" one user replied. Others point to the death of Adobe Flash and how so many sites and games died along with it. Everyone agrees that websites have indeed vanished, and we all miss the days we were free to visit them.
An anonymous reader quotes a report from Reuters: OpenAI's CEO Sam Altman on Tuesday said an energy breakthrough is necessary for future artificial intelligence, which will consume vastly more power than people have expected. Speaking at a Bloomberg event on the sidelines of the World Economic Forum's annual meeting in Davos, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI. "There's no way to get there without a breakthrough," he said. "It motivates us to go invest more in fusion." In 2021, Altman personally provided $375 million to private U.S. nuclear fusion company Helion Energy, which since has signed a deal to provide energy to Microsoft in future years. Microsoft is OpenAI's biggest financial backer and provides it computing resources for AI. Altman said he wished the world would embrace nuclear fission as an energy source as well. Further reading: Microsoft Needs So Much Power to Train AI That It's Considering Small Nuclear ReactorsRead more of this story at Slashdot.
A Boeing cargo plane headed for Puerto Rico was diverted Thursday night after taking off from Miami International Airport because of engine trouble, according to an official and flight data. From a report: Atlas Air Flight 5Y095 landed safely after experiencing an "engine malfunction" shortly after departure, the airline said early Friday. It was unclear what kind of cargo the plane was carrying. Data collected by FlightAware, a flight tracking company, showed the aircraft was a Boeing 747-8 that left its gate at Miami International at 10:11 p.m. on Thursday and returned to the airport about 50 minutes later. The website also showed that the plane traveled 60 miles in total. Reuters adds: Atlas Air Flight 5Y095 was on its way to San Juan, Puerto Rico from Miami International Airport late Thursday evening. The pilot made a Mayday call around 0333 GMT to report an engine fire and requested to return to the airport, according to multi-channel recordings of conversations between air traffic control and the plane available on liveatc.net. "We have a engine fire," one of the plane crew said, disclosing that there were five people on board.Read more of this story at Slashdot.
David Mills, the man who invented NTP and wrote the implementation, has passed away. He also created the Fuzzballs and EGP, and helped make global-scale internetworking possible. Vint Cerf, sharing the news on the Internet Society mail group: His daughter, Leigh, just sent me the news that Dave passed away peacefully on January 17, 2024. He was such an iconic element of the early Internet. Network Time Protocol, the Fuzzball routers of the early NSFNET, INARG task force lead, COMSAT Labs and University of Delaware and so much more. R.I.P.Read more of this story at Slashdot.
Researchers have developed a way to apply quantum measurement to something no matter its mass or energy. "Our proposed experiment can test if an object is classical or quantum by seeing if an act of observation can lead to a change in its motion," says physicist Debarshi Das from UCL. ScienceAlert reports: Quantum physics describes a Universe where objects aren't defined by a single measurement, but as a range of possibilities. An electron can be spinning up and down, or have a high chance of existing in some areas more than others, for example. In theory, this isn't limited to tiny things. Your own body can in effect be described as having a very high probability of sitting in that chair and a very (very!) low probability of being on the Moon. There is just one fundamental truth to remember -- you touch it, you've bought it. Observing an object's quantum state, whether an electron, or a person sitting in a chair, requires interactions with a measuring system, forcing it to have a single measurement. There are ways to catch objects with their quantum pants still down, but they require keeping the object in a ground state -- super-cold, super-still, completely cut off from its environment. That's tricky to do with individual particles, and it gets a lot more challenging as the size of the scale goes up. The new proposal uses an entirely novel approach, one that uses a combination of assertions known as Leggett-Garg Inequalities and No-Signaling in Time conditions. In effect, these two concepts describe a familiar Universe, where a person on a chair is sitting there even if the room is dark and you can't see them. Switching on the light won't suddenly reveal they're actually under the bed. Should an experiment find evidence that somehow conflicts with these assertions, we just might be catching a glimpse of quantum fuzziness on a larger scale. The team proposes that objects can be observed as they oscillate on a pendulum, like a ball at the end of a piece of string. 
Light would then be flashed at the two halves of the experimental setup at different times -- counting as the observation -- and the results of the second flash would indicate if quantum behavior was happening, because the first flash would affect whatever was moving. We're still talking about a complex setup that would require some sophisticated equipment, and conditions akin to a ground state -- but through the use of motion and two measurements (light flashes), some of the restrictions on mass are removed. [...] "The next step is to try this proposed setup in an actual experiment," concludes the report. "The mirrors at the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US have already been proposed as suitable candidates for examination." "Those mirrors act as a single 10-kilogram (22-pound) object, quite a step up from the typical size of objects analyzed for quantum effects -- anything up to about a quintillionth of a gram." The findings have been published in the journal Physical Review Letters.Read more of this story at Slashdot.
Scientists have underestimated recent mass loss from Greenland by as much as 20%, finds a new study published in the journal Nature. CBS News reports: Since 1985, Greenland's ice sheet has lost approximately 5,091 square kilometers of ice, researchers found using satellite imagery. Scientists said earlier estimates did not track melting at the edges of the ice sheets, known as calving, which measures ice breaking off at the terminus of a glacier. Greenland's ice sheet loses about 193 square kilometers of ice per year, researchers found. Study co-author Chad Greene and his colleagues said they quantified the extent of calving, which increased the scope of ice mass lost. They combined "236,328 observations of glacier terminus positions" compiled from various public data sets to capture monthly ice melt. Their measurements found that between 1985 and 2022, almost every glacier in Greenland experienced some level of loss. [...] Researchers in the study noted that "this retreat does not appear to substantially contribute to sea level rise" because most of the glacier margins the scientists measured were already underwater. The loss, however, may play a part in ocean circulation patterns, and how heat energy is distributed across the planet.Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: On Thursday, UK's Government Communications Headquarters (GCHQ) announced the release of previously unseen images and documents related to Colossus, one of the first digital computers. The release marks the 80th anniversary of the code-breaking machines that significantly aided the Allied forces during World War II. While some in the public knew of the computers earlier (PDF), the UK did not formally acknowledge the project's existence until the 2000s. Colossus was not one computer but a series of computers developed by British scientists between 1943 and 1945. These 2-meter-tall electronic beasts played an instrumental role in breaking the Lorenz cipher, a code used for communications between high-ranking German officials in occupied Europe. The computers were said to have allowed allies to "read Hitler's mind," according to The Sydney Morning Herald. The technology behind Colossus was highly innovative for its time. Tommy Flowers, the engineer behind its construction, used over 2,500 vacuum tubes to create logic gates, a precursor to the semiconductor-based electronic circuits found in modern computers. While 1945's ENIAC was long considered the clear front-runner in digital computing, the revelation of Colossus' earlier existence repositioned it in computing history. (However, it's important to note that ENIAC was a general-purpose computer, and Colossus was not.) GCHQ's public sharing of archival documents includes several photos of the computer at different periods and a letter discussing Tommy Flowers' groundbreaking work that references the interception of "rather alarming German instructions." Following the war, the UK government issued orders for the destruction of most Colossus machines, and Flowers was required to turn over all related documentation. The GCHQ claims that the Colossus tech "was so effective, its functionality was still in use by us until the early 1960s." 
In the GCHQ press release, Director Anne Keast-Butler paid tribute to Colossus' place in the UK's lineage of technological innovation: "The creativity, ingenuity and dedication shown by Tommy Flowers and his team to keep the country safe were as crucial to GCHQ then as today."Read more of this story at Slashdot.
Figure's first humanoid robot will be coming to a BMW manufacturing facility in South Carolina. TechCrunch reports: BMW has not disclosed how many Figure 01 models it will deploy initially. Nor do we know precisely what jobs the robot will be tasked with when it starts work. Figure did, however, confirm with TechCrunch that it is beginning with an initial five tasks, which will be rolled out one at a time. While folks in the space have been cavalierly tossing out the term "general purpose" to describe these sorts of systems, it's important to temper expectations and point out that they will all arrive as single- or multi-purpose systems, growing their skillset over time. Figure CEO Brett Adcock likens the approach to an app store -- something that Boston Dynamics currently offers with its Spot robot via SDK. Likely initial applications include standard manufacturing tasks such as box moving, pick and place and pallet unloading and loading -- basically the sort of repetitive tasks for which factory owners claim to have difficulty retaining human workers. Adcock says that Figure expects to ship its first commercial robot within a year, an ambitious timeline even for a company that prides itself on quick turnaround times. The initial batch of applications will be largely determined by Figure's early partners like BMW. The system will, for instance, likely be working with sheet metal to start. Adcock adds that the company has signed up additional clients, but declined to disclose their names. It seems likely Figure will instead opt to announce each individually to keep the news cycle spinning in the intervening 12 months.Read more of this story at Slashdot.
According to StatCounter, Bing's market share grew by less than 1 percentage point since launching Bing Chat (now known as Copilot) roughly a year ago. From a report: Bloomberg reported (paywalled) on the StatCounter data, saying, "But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement." Google still dominates the global search market with a 91.6% market share, followed by Bing's 3.4%, Yandex's 1.6% and Yahoo's 1.1%. "Other" search engines accounted for a total of just 2.2% of the global search market. You can view the raw chart and data from StatCounter here.Read more of this story at Slashdot.
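The distinction between a 1% change and a 1-percentage-point change matters here. A minimal Python sketch, assuming a pre-Bing-Chat share of 2.5% purely for illustration (the article gives only the 3.4% year-end figure and the "up less than 1 percentage point" comparison):

```python
# Percentage-point change vs. relative percent change in market share.
# The 2.5% "before" share is an assumed value for illustration; the
# 3.4% "after" figure is StatCounter's, as quoted above.
before, after = 2.5, 3.4
pp_change = after - before                    # absolute change, in percentage points
relative_change = (after - before) / before   # relative growth of the share itself
print(f"{pp_change:.1f} percentage points, {relative_change:.0%} relative growth")
```

Under these assumed numbers, a gain of under 1 percentage point is still a roughly 36% relative increase in Bing's (small) share, which is why the two phrasings paint such different pictures.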
Google announced today that it will invest $1 billion to build a data center near London. Reuters reports: The data centre, on a 33-acre (13-hectare) site bought by Google in 2020, will be located in the town of Waltham Cross, about 15 miles north of central London, the Alphabet-owned company said in a statement. The British government, which is pushing for investment by businesses to help fund new infrastructure, particularly in growth industries like technology and artificial intelligence, described Google's investment as a "huge vote of confidence" in the UK. "Google's $1 billion investment is testament to the fact that the UK is a centre of excellence in technology and has huge potential for growth," Prime Minister Rishi Sunak said in the Google statement. The investment follows Google's $1 billion purchase of a central London office building in 2022, close to Covent Garden, and another site in nearby King's Cross, where it is building a new office and where its AI company DeepMind is also based. In November, Microsoft announced plans to pump $3.2 billion into Britain over the next three years.Read more of this story at Slashdot.
An anonymous reader quotes a report released Tuesday (Jan. 16th) by the Federal Reserve Bank of San Francisco: The U.S. labor market experienced a massive increase in remote and hybrid work during the COVID-19 pandemic. At its peak, more than 60% of paid workdays were done remotely -- compared with only 5% before the pandemic. As of December 2023, about 30% of paid workdays are still done remotely (Barrero, Bloom, and Davis 2021). Some reports have suggested that teleworking might either boost or harm overall productivity in the economy. And certainly, overall productivity statistics have been volatile. In 2020, U.S. productivity growth surged. This led to optimistic views in the media about the gains from forced digital innovation and the productivity benefits of remote work. However, the surge ended, and productivity growth has retreated to roughly its pre-pandemic trend. Fernald and Li (2022) find from aggregate data that this pattern was largely explained by a predictable cyclical effect from the economy's downturn and recovery. In aggregate data, it thus appears difficult to see a large cumulative effect -- either positive or negative -- from the pandemic so far. But it is possible that aggregate data obscure the effects of teleworking. For example, factors beyond telework could have affected the overall pace of productivity growth. Surveys of businesses have found mixed effects from the pandemic, with many businesses reporting substantial productivity disruptions. In this Economic Letter, we ask whether we can detect the effects of remote work in the productivity performance of different industries. There are large differences across sectors in how easy it is to work off-site. 
Thus, if remote work boosts productivity in a substantial way, then it should improve productivity performance, especially in those industries where teleworking is easy to arrange and widely adopted, such as professional services, compared with those where tasks need to be performed in person, such as restaurants. After controlling for pre-pandemic trends in industry productivity growth rates, we find little statistical relationship between telework and pandemic productivity performance. We conclude that the shift to remote work, on its own, is unlikely to be a major factor explaining differences across sectors in productivity performance. By extension, despite the important social and cultural effects of increased telework, the shift is unlikely to be a major factor explaining changes in aggregate productivity. [...] The shift to remote and hybrid work has reshaped society in important ways, and these effects are likely to continue to evolve. For example, with less time spent commuting, some people have moved out of cities, and the lines between work and home life have blurred. Despite these noteworthy effects, in this Letter we find little evidence in industry data that the shift to remote and hybrid work has either substantially held back or boosted the rate of productivity growth. Our findings do not rule out possible future changes in productivity growth from the spread of remote work. The economic environment has changed in many ways during and since the pandemic, which could have masked the longer-run effects of teleworking. Continuous innovation is the key to sustained productivity growth. Working remotely could foster innovation through a reduction in communication costs and improved talent allocation across geographic areas. However, working off-site could also hamper innovation by reducing in-person office interactions that foster idea generation and diffusion. 
The future of work is likely to be a hybrid format that balances the benefits and limitations of remote work.Read more of this story at Slashdot.
Thomas Claburn reports via The Register: IBM has canceled a program that rewarded inventors at Big Blue for patents or publications, leaving some angry that they are missing out on potential bonuses. By cancelling the scheme, a source told The Register, IBM has eliminated a financial liability by voiding the accrued, unredeemed credits issued to program participants which could have been converted into potential cash awards. For years, IBM has sponsored an "Invention Achievement Award Plan" to incentivize employee innovation. In exchange for filing patents, or for publishing articles that served as defense against rival patents, IBM staff were awarded points that led to recognition and potentially cash bonuses. According to documentation seen by The Register, "Invention points are awarded to all inventors listed on a successful disclosure submission." One point was awarded for publishing. Three points were awarded for filing a patent or four if the filing was deemed high value. For accruing 12 points, program participants would get a payout. "Inventors reach an invention plateau for every 12 points they achieve -- which must include at least one file decision," the rules state. And for each plateau achieved, IBM would pay its inventors $1,200 in recognition of their efforts. No longer, it seems. IBM canceled the program at the end of 2023 and replaced it with a new one that uses a different, incompatible point system called BluePoints. "The previous Invention Achievement Award Plan will be sunset at midnight (eastern time) on December 31st, 2023," company FAQs explain. "Since Plateau awards are one of the items being sunset, plateau levels must be obtained on or before December 31, 2023 to be eligible for the award. Any existing plateau points that have not been applied will not be converted to BluePoints." 
We're told that IBM's invention review process could take months, meaning that employees just didn't have time between the announcement and the program sunset to pursue the next plateau and cash out. Those involved in the program evidently were none too pleased by the points grab. "My opinion...the invention award program was buggered a long time [ago]," said a former IBM employee. "It rewarded words on a page instead of true innovation. [Former CEO] Ginni [Rometty] made it worse by advocating the program to fluff up young egos."Read more of this story at Slashdot.
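As a rough illustration, the sunset plan's arithmetic can be sketched in a few lines of Python. The point values and the $1,200-per-plateau payout come from the rules quoted above; the function and event names are invented for this example, and as a simplification the at-least-one-file-decision rule is checked once across all events rather than per plateau:

```python
# Point values under IBM's sunset Invention Achievement Award Plan,
# as described in the article. Names are illustrative, not IBM's.
POINTS = {"publish": 1, "file": 3, "high_value_file": 4}
PLATEAU_POINTS = 12      # points needed per plateau
PLATEAU_AWARD = 1200     # dollars paid per plateau reached

def plateaus_reached(events):
    """Count plateaus from a list of events like ['publish', 'file', ...].

    A plateau requires 12 points and (simplified here) at least one
    file decision among the contributing events.
    """
    total = sum(POINTS[e] for e in events)
    has_file = any(e in ("file", "high_value_file") for e in events)
    return total // PLATEAU_POINTS if has_file else 0

# Example: three patent filings plus three publications = 12 points
events = ["file", "file", "file", "publish", "publish", "publish"]
n = plateaus_reached(events)
print(n, n * PLATEAU_AWARD)  # 1 plateau, $1,200
```

The arithmetic makes the complaint concrete: with reviews taking months, points accrued toward a partially completed plateau at the cutoff simply evaporate rather than converting to BluePoints.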
In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan. Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo. Next up: WebGPU for Chrome on Linux.Read more of this story at Slashdot.
According to Reuters, Reddit plans to launch its initial public offering (IPO) in March, "moving forward with a listing it has been eyeing for more than three years." From the report: It would be the first IPO of a major social media company since Pinterest's debut in 2019, and would come as Reddit and its peers face stiff competition for advertising dollars from the likes of TikTok and Facebook. The offering would also test the willingness of some Reddit users to back the company's stock market debut. Reddit, which filed confidentially for its IPO in December 2021, is planning to make its public filing in late February, launch its roadshow in early March, and complete the IPO by the end of March, two of the sources said. The San Francisco-based company, which was valued at about $10 billion in a funding round in 2021, is seeking to sell about 10% of its shares in the IPO, the sources added. It will decide on what IPO valuation it will pursue closer to the time of the listing, according to the sources.Read more of this story at Slashdot.
An anonymous reader quotes a report from Wired: Stablecoins, cryptocurrencies pegged to a stable value like the US dollar, were created with the promise of bringing the frictionless, border-crossing fluidity of Bitcoin to a form of digital money with far less volatility. That combination has proved to be wildly popular, rocketing the total value of stablecoin transactions since 2022 past even that of Bitcoin itself. It turns out, however, that as stablecoins have become popular among legitimate users over the past two years, they were even more popular among a different kind of user: those exploiting them for billions of dollars of international sanctions evasion and scams. As part of its annual crime report, cryptocurrency-tracing firm Chainalysis today released new numbers on the disproportionate use of stablecoins for both of those massive categories of illicit crypto transactions over the last year. By analyzing blockchains, Chainalysis determined that stablecoins were used in fully 70 percent of crypto scam transactions in 2023, 83 percent of crypto payments to sanctioned countries like Iran and Russia, and 84 percent of crypto payments to specifically sanctioned individuals and companies. Those numbers far outstrip stablecoins' growing overall use -- including for legitimate purposes -- which accounted for 59 percent of all cryptocurrency transaction volume in 2023. In total, Chainalysis measured $40 billion in illicit stablecoin transactions in 2022 and 2023 combined. The largest single category of that stablecoin-enabled crime was sanctions evasion. In fact, across all cryptocurrencies, sanctions evasion accounted for more than half of the $24.2 billion in criminal transactions Chainalysis observed in 2023, with stablecoins representing the vast majority of those transactions. [...] 
Chainalysis concedes that the analysis in its report excludes some cryptocurrencies like Monero and Zcash that are designed to be harder or impossible to trace with blockchain analysis. It also says it based its numbers on the type of cryptocurrency sent directly to an illicit actor, which may leave out other currencies used in money laundering processes that repeatedly swap one type of cryptocurrency for another to make tracing more difficult. "Whether it's an individual located in Iran or a bad guy trying to launder money -- either way, there's a benefit to the stability of the US dollar that people are looking to obtain," says Andrew Fierman, Chainalysis' head of sanctions strategy. "If you're in a jurisdiction where you don't have access to the US dollar due to sanctions, stablecoins become an interesting play." Fierman points to Nobitex, the largest cryptocurrency exchange operating in the sanctioned country of Iran, as well as Garantex, a notorious exchange based in Russia that has been specifically sanctioned for its widespread criminal use. According to Chainalysis, "Stablecoin usage on Nobitex outstrips bitcoin by a 9:1 ratio, and on Garantex by a 5:1 ratio," reports Wired. "That's a stark difference from the roughly 1:1 ratio between stablecoins and bitcoins on a few nonsanctioned mainstream exchanges that Chainalysis checked for comparison."Read more of this story at Slashdot.
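Those usage ratios translate directly into stablecoin shares of combined stablecoin-plus-bitcoin volume; a minimal sketch (the helper name is ours, the ratios are from the article):

```python
# Convert the Chainalysis usage ratios quoted above into the stablecoin
# share of combined stablecoin + bitcoin volume on each exchange.
def stablecoin_share(ratio_stable, ratio_btc):
    return ratio_stable / (ratio_stable + ratio_btc)

print(f"Nobitex (9:1):    {stablecoin_share(9, 1):.0%}")  # 90%
print(f"Garantex (5:1):   {stablecoin_share(5, 1):.0%}")  # 83%
print(f"Mainstream (1:1): {stablecoin_share(1, 1):.0%}")  # 50%
```

Framed as shares, the skew is stark: roughly 90% and 83% of this volume is in stablecoins on the sanctioned exchanges, versus about 50% on the nonsanctioned ones Chainalysis checked.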
U.S. edutech platform Coursera added a new user every minute on average for its AI courses in 2023, CEO Jeff Maggioncalda said on Thursday, in a clear sign of people upskilling to tap a potential boom in generative AI. Reuters: The technology behind OpenAI's ChatGPT has taken the world by storm and sparked a race among companies to roll out their own versions of the viral chatbot. "I'd say the real hotspot is generative AI because it affects so many people," he told Reuters in an interview at the World Economic Forum in Davos. Coursera is looking to offer AI courses along with companies that are the frontrunners in the AI race, including OpenAI and Google's DeepMind, Maggioncalda said. Investors had earlier feared that apps based on generative AI might replace ed-tech firms, but on the contrary the technology has encouraged more people to upskill, benefiting companies such as Coursera. The company has more than 800 AI courses and saw more than 7.4 million enrollments last year. Every student on the platform gets access to a ChatGPT-like AI assistant called "Coach" that provides personalized tutoring.Read more of this story at Slashdot.
OpenAI's stated mission is to create artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems." [...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use an LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway. 
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.Read more of this story at Slashdot.
Microsoft today made Reading Coach, its AI-powered tool that provides learners with personalized reading practice, available at no cost to anyone with a Microsoft account. From a report: As of this morning, Reading Coach is accessible on the web in preview -- a Windows app is forthcoming. And soon (in late spring), Reading Coach will integrate with learning management systems such as Canvas, Microsoft says. Reading Coach builds on Reading Progress, a plug-in for the education-focused version of Microsoft Teams, Teams for Education, designed to help teachers foster reading fluency in their students. Inspired by the success of Reading Progress (evidently), Microsoft launched Reading Coach in 2022 as a part of Teams for Education and Immersive Reader, the company's cross-platform assistive service for language and reading comprehension.Read more of this story at Slashdot.
Coinbase said buying cryptocurrency on an exchange was more like collecting Beanie Babies than investing in a stock or bond. From a report: The biggest US crypto exchange made the comparison Wednesday in a New York federal court hearing. Coinbase was arguing for the dismissal of a Securities and Exchange Commission lawsuit accusing it of selling unregistered securities. William Savitt, a lawyer for Coinbase, told US District Judge Katherine Polk Failla that tokens trading on the exchange aren't securities subject to SEC jurisdiction because buyers don't gain any rights as a part of their purchases, as they do with stocks or bonds. "It's the difference between buying Beanie Babies Inc and buying Beanie Babies," Savitt said. The question of whether digital tokens are securities has divided courts.Read more of this story at Slashdot.
Private equity firms are increasingly buying hospitals across the US, and when they do, patients suffer, according to two separate reports. Specifically, the equity firms cut corners, slash services, lay off staff, lower quality of care, take on substantial debt, and reduce charity care, leading to lower ratings and more medical errors, the reports collectively find. Ars Technica: Last week, the financial watchdog organization Private Equity Stakeholder Project (PESP) released a report delving into the state of two of the nation's largest hospital systems, Lifepoint and ScionHealth -- both owned by private equity firm Apollo Global Management. Through those two systems, Apollo runs 220 hospitals in 36 states, employing around 75,000 people. The report found that some of Apollo's hospitals were among the worst in their respective states, based on a ranking by The Lown Institute Hospital Index. The index ranks hospitals and health systems based on health equity, value, and outcomes, PESP notes. The hospitals also have dismal readmission rates and government rankings. The Centers for Medicare and Medicaid Services (CMS) ranks hospitals on a one- to five-star system, with a national average of 3.2 stars overall and about 30 percent of hospitals at two stars or below. Apollo's overall average is 2.8 stars, with nearly 40 percent of hospitals at two stars or below. The other report, a study published in JAMA late last month, found that the rate of serious medical errors and health complications increases among patients in the first few years after private equity firms take over. The study examined Medicare claims from 51 private equity-run hospitals and 259 matched control hospitals. Specifically, the study, led by researchers at Harvard University, found that patients admitted to private equity-owned hospitals had a 25 percent increase in developing hospital-acquired conditions compared with patients in the control hospitals. 
In private equity hospitals, patients experienced a 27 percent increase in falls, a 38 percent increase in central-line bloodstream infections (despite placing 16 percent fewer central lines than control hospitals), and surgical site infections doubled.Read more of this story at Slashdot.
The EU has proposed sweeping changes within the music streaming industry to promote smaller artists and make sure underpaid performers are being fairly compensated. From a report: A resolution to address concerns regarding inadequate streaming royalties for artists and biased recommendation algorithms was adopted by members of the European Parliament (MEPs) on Wednesday, highlighting that no existing EU rules currently apply to music streaming services, despite being the most popular way to consume audio. The proposition was made to ensure European musical works are accessible and avoid being overshadowed by the "overwhelming amount" of content being continually added to streaming platforms like Spotify. MEPs also called for outdated "pre-digital" royalty rates to be revised, noting that some schemes force performers to accept little to no revenue in exchange for greater exposure. Imposing quotas for European musical works is being considered to help promote artists in the EU.Read more of this story at Slashdot.
Apple has updated its long-standing App Store guidelines, giving developers the option to let users make in-app purchases for iOS apps outside of its App Store. But the changes still haven't won over one of the company's longtime critics. From a report: Under the new rules, app developers can provide customers with links to third-party purchase options for their apps, but they must still pay Apple fees of either 12% or 27%. Spotify, one of Apple's biggest critics, isn't a fan of the changes. In a statement, the music streaming service slammed the new rules. "Once again, Apple has demonstrated that they will stop at nothing to protect the profits they exact on the backs of developers and consumers under their app store monopoly," the company said. "Their latest move in the US -- imposing a 27% fee for transactions made outside of an app on a developer's website -- is outrageous and flies in the face of the court's efforts to enable greater competition and user choice." Tech columnist John Gruber, writing at Daring Fireball: Maybe the cynics are right! Let's just concede that they are, and that Apple will only make decisions here that benefit its bottom line. My argument remains that Apple should not be pursuing this plan for complying with the anti-steering injunction by collecting commissions from web sales that initiate in-app. Whatever revenue Apple would lose to non-commissioned web sales (for non-games) is not worth the hit they are taking to the company's brand and reputation -- this move reeks of greed and avarice -- nor the increased ire and scrutiny of regulators and legislators on the "anti-Big-Tech" hunt. Apple should have been looking for ways to lessen regulatory and legislative pressure over the past few years, and in today's climate that's more true than ever. But instead, their stance has seemingly been "Bring it on." Confrontational, not conciliatory, conceding not an inch. 
Rather than take a sure win with most of what they could want, Apple is seemingly hell-bent on trying to keep everything. To win in chess all you need is to capture your opponent's king. Apple seemingly wants to capture every last piece on the board -- even while playing in a tournament where the referees (regulators) are known to look askance at blatant poor sportsmanship (greed). Apple's calculus should be to balance its natural desire to book large amounts of revenue from the App Store with policies that to some degree placate, rather than antagonize, regulators and legislators. No matter what the sport, no matter what the letter of the rulebook says, it's never a good idea to piss off the refs.Read more of this story at Slashdot.
Google News is boosting sites that rip off other outlets by using AI to rapidly churn out content, 404 Media has found. From the report: Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content to make its way onto Google News. The presence of AI-generated content on Google News signals two things: first, the black-box nature of Google News, with entry into its rankings in the first place an opaque, but apparently gameable, system; and second, that Google may not be ready to moderate its News service in the age of consumer-accessible AI, where essentially anyone can churn out a mass of content with little to no regard for its quality or originality.Read more of this story at Slashdot.
OpenAI on Thursday announced its first partnership with a higher education institution. Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more. From a report: The partnership has been in the works for at least six months, when ASU chief information officer Lev Gonick first visited OpenAI's HQ, which was preceded by the university faculty and staff's earlier use of ChatGPT and other artificial intelligence tools, Gonick told CNBC in an interview. ChatGPT Enterprise, which debuted in August, is ChatGPT's business tier and includes access to GPT-4 with no usage caps, performance that's up to two times faster than previous versions and API credits. With the OpenAI partnership, ASU plans to build a personalized AI tutor for students, not only for certain courses but also for study topics. STEM subjects are a focus and are "the make-or-break subjects for a lot of higher education," Gonick said. The university will also use the tool in ASU's largest course, Freshman Composition, to offer students writing help. ASU also plans to use ChatGPT Enterprise to develop AI avatars as a "creative buddy" for studying certain subjects, like bots that can sing or write poetry about biology, for instance.Read more of this story at Slashdot.
Google researchers say they have evidence that a notorious Russian-linked hacking group -- tracked as "Cold River" -- is evolving its tactics beyond phishing to target victims with data-stealing malware. From a report: Cold River, also known as "Callisto Group" and "Star Blizzard," is known for conducting long-running espionage campaigns against NATO countries, particularly the United States and the United Kingdom. Researchers believe the group's activities, which typically target high-profile individuals and organizations involved in international affairs and defense, suggest close ties to the Russian state. U.S. prosecutors in December indicted two Russian nationals linked to the group. Google's Threat Analysis Group (TAG) said in new research this week that it has observed Cold River ramping up its activity in recent months and using new tactics capable of causing more disruption to its victims, predominantly targets in Ukraine and its NATO allies, academic institutions and non-government organizations. These latest findings come soon after Microsoft researchers reported that the Russia-aligned hacking group had improved its ability to evade detection. In research shared with TechCrunch ahead of its publication on Thursday, TAG researchers say that Cold River has continued to shift beyond its usual tactic of phishing for credentials to delivering malware via campaigns using PDF documents as lures.Read more of this story at Slashdot.
Google has laid off over a thousand employees across various departments since January 10th. CEO Sundar Pichai's message is to brace for more cuts. The Verge: "We have ambitious goals and will be investing in our big priorities this year," Pichai told all Google employees on Wednesday in an internal memo that was shared with me. "The reality is that to create the capacity for this investment, we have to make tough choices." So far, those "tough choices" have included layoffs and reorganizations in Google's hardware, ad sales, search, shopping, maps, policy, core engineering, and YouTube teams. "These role eliminations are not at the scale of last year's reductions, and will not touch every team," Pichai wrote in his memo -- a reference to when Google cut 12,000 jobs this time last year. "But I know it's very difficult to see colleagues and teams impacted." Pichai said the layoffs this year were about "removing layers to simplify execution and drive velocity in some areas." He confirmed what many inside Google have been fearing: that more "role eliminations" are to come. "Many of these changes are already announced, though to be upfront, some teams will continue to make specific resource allocation decisions throughout the year where needed, and some roles may be impacted," he wrote.Read more of this story at Slashdot.
When Microsoft announced it was baking ChatGPT into its Bing search engine last February, bullish analysts declared the move an "iPhone moment" that could upend the search market and chip away at Google's dominance. "The entire search category is now going through a sea change," Chief Executive Officer Satya Nadella said at the time. "That opportunity comes very few times." Almost a year later, the sea has yet to change. Bloomberg: The new Bing -- powered by OpenAI's generative AI technology -- dazzled internet users with conversational replies to queries asked in a natural way. But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement. Bing has long struggled for relevance and attracted more mockery than recognition over the years as a serious alternative to Google. Multiple rebrandings and redesigns since its 2009 debut did little to boost Bing's popularity. A month before Microsoft infused the search engine with generative AI, people were spending 33% less time using it than they had 12 months earlier, according to SensorTower. The ChatGPT reboot at least helped reverse those declines. In the second quarter of 2023, US monthly active users more than doubled year over year to 3.1 million, according to a Bloomberg Intelligence analysis of SensorTower mobile app data. Overall, users were spending 84% more time on the search engine, the data show. By year-end, Bing's monthly active users had increased steadily to 4.4 million, according to SensorTower.Read more of this story at Slashdot.
An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal." To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each other's data. 
But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, as seen in the GIF below, the researchers demonstrate an attack where a target -- shown on the left -- asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device -- shown on the right -- collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks. The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired: Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. 
They found that Apple's M2 MacBook Air was still vulnerable, but the third-generation iPad Air with an A12 chip appeared to have been patched. Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability. AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March. Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."Read more of this story at Slashdot.
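The core mechanism behind LeftoverLocals, as described above, is that workgroup-local GPU memory is not cleared between kernel dispatches, so a "listener" kernel that reads local memory without writing to it first can observe data a previous kernel left behind. Here is a toy, purely conceptual simulation of that idea in plain Python (this is an analogy with invented names like `victim_kernel`, not actual GPU code):

```python
# Toy simulation of the LeftoverLocals idea: local memory is not zeroed
# between kernel launches, so a later kernel can read what an earlier
# one left behind. Plain Python standing in for GPU behavior.

LOCAL_MEM = bytearray(8)  # stands in for one workgroup's local memory

def victim_kernel(secret: bytes):
    # The victim uses local memory as scratch space (e.g., for LLM
    # intermediate values) and never clears it afterwards.
    LOCAL_MEM[:len(secret)] = secret

def listener_kernel() -> bytes:
    # The attacker's kernel reads local memory WITHOUT initializing it.
    # A patched driver would present zeros here; a vulnerable one leaks.
    return bytes(LOCAL_MEM)

victim_kernel(b"LLMdata!")
leaked = listener_kernel()
```

On real hardware the listener is a short GPU compute kernel (the researchers note theirs is under 10 lines), but the logic is the same: read uninitialized local memory and copy it out to a buffer the attacker controls.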
In an opinion piece for the Guardian, American journalist and author John R. MacArthur discusses the alarming decline in reading skills among American youth, highlighted by a Department of Education survey showing significant drops in text comprehension since 2019-2020, with the situation worsening since 2012. While remote learning during the pandemic and other factors like screen-based reading are blamed, a new study by Columbia University suggests that reading on paper is more effective for comprehension than reading on screens, a finding not yet widely adopted in digital-focused educational approaches. From the report: What if the principal culprit behind the fall of middle-school literacy is neither a virus, nor a union leader, nor "remote learning"? Until recently there has been no scientific answer to this urgent question, but a soon-to-be published, groundbreaking study from neuroscientists at Columbia University's Teachers College has come down decisively on the matter: for "deeper reading" there is a clear advantage to reading a text on paper, rather than on a screen, where "shallow reading was observed." [...] [Dr Karen Froud] and her team are cautious in their conclusions and reluctant to make hard recommendations for classroom protocol and curriculum. Nevertheless, the researchers state: "We do think that these study outcomes warrant adding our voices ... in suggesting that we should not yet throw away printed books, since we were able to observe in our participant sample an advantage for depth of processing when reading from print." I would go even further than Froud in delineating what's at stake. For more than a decade, social scientists, including the Norwegian scholar Anne Mangen, have been reporting on the superiority of reading comprehension and retention on paper. 
As Froud's team says in its article: "Reading both expository and complex texts from paper seems to be consistently associated with deeper comprehension and learning" across the full range of social scientific literature. But the work of Mangen and others hasn't influenced local school boards, such as Houston's, which keep throwing out printed books and closing libraries in favor of digital teaching programs and Google Chromebooks. Drunk on the magical realism and exaggerated promises of the "digital revolution," school districts around the country are eagerly converting to computerized test-taking and screen-reading programs at the precise moment when rigorous scientific research is showing that the old-fashioned paper method is better for teaching children how to read. Indeed, for the tech boosters, Covid really wasn't all bad for public-school education: "As much as the pandemic was an awful time period," says Todd Winch, the Levittown, Long Island, school superintendent, "one silver lining was it pushed us forward to quickly add tech supports." Newsday enthusiastically reports: "Island schools are going all-in on high tech, with teachers saying they are using computer programs such as Google Classroom, I-Ready, and Canvas to deliver tests and assignments and to grade papers." Terrific, especially for Google, which was slated to sell 600 Chromebooks to the Jericho school district, and which since 2020 has sold nearly $14bn worth of the cheap laptops to K-12 schools and universities. If only Winch and his colleagues had attended the Teachers College symposium that presented the Froud study last September. The star panelist was the nation's leading expert on reading and the brain, John Gabrieli, an MIT neuroscientist who is skeptical about the promises of big tech and its salesmen: "I am impressed how educational technology has had no effect on scale, on reading outcomes, on reading difficulties, on equity issues," he told the New York audience. 
"How is it that none of it has lifted, on any scale, reading? ... It's like people just say, 'Here is a product. If you can get it into a thousand classrooms, we'll make a bunch of money.' And that's OK; that's our system. We just have to evaluate which technology is helping people, and then promote that technology over the marketing of technology that has made no difference on behalf of students ... It's all been product and not purpose." I'll only take issue with the notion that it's "OK" to rob kids of their full intellectual potential in the service of sales -- before they even get started understanding what it means to think, let alone read.Read more of this story at Slashdot.
With NASA's Artemis moon program now targeting September 2025 for its Artemis 2 mission and September 2026 for Artemis 3, some members of Congress are concerned about the potential repercussions, particularly with China's growing ambitions in lunar exploration. "For the United States and its partners not to be on the moon when others are on the moon is unacceptable," said Mike Griffin, former NASA administrator. "We need a program that is consistent with that theme. Artemis is not that program. We need to restart it, not keep it on track." Space.com reports: The U.S. House of Representatives' Committee on Science, Space and Technology held a hearing about the new Artemis plan today (Jan. 17), and multiple members voiced concern about the slippage. "I remind my colleagues that we are not the only country interested in sending humans to the moon," Committee Chairman Frank Lucas (R-OK) said in his opening remarks. "The Chinese Communist Party is actively soliciting international partners for a lunar mission -- a lunar research station -- and has stated its ambition to have human astronauts on the surface by 2030," he added. "The country that lands first will have the ability to set a precedent for whether future lunar activities are conducted with openness and transparency, or in a more restricted manner." The committee's ranking member, California Democrat Zoe Lofgren (D-CA), voiced similar sentiments. "Let me be clear: I support Artemis," she said in her opening remarks. "But I want it to be successful, especially with China at our heels. And we want to be helpful here in the committee in ensuring that Artemis is strong and staying on track as we look to lead the world, hand-in-hand with our partners, in the human exploration of the moon and beyond." Several other committee members stressed that the new moon race is part of a broader competition with China, and that coming in second could imperil U.S. national security. 
"It's no secret that China has a goal to surpass the United States by 2045 as global leaders in space. We can't allow this to happen," Rich McCormick (R-GA) said during the hearing. "I think the leading edge that we have in space technology will protect the United States -- not just the economy, but technologies that can benefit humankind." And Bill Posey (R-FL) referred to space as the "ultimate military high ground," saying that whoever leads in the final frontier "will control the destiny of this Earth."Read more of this story at Slashdot.
An anonymous reader quotes a report from BleepingComputer: Have I Been Pwned has added almost 71 million email addresses associated with stolen accounts in the Naz.API dataset to its data breach notification service. The Naz.API dataset is a massive collection of 1 billion credentials compiled using credential stuffing lists and data stolen by information-stealing malware. Credential stuffing lists are collections of login name and password pairs stolen from previous data breaches that are used to breach accounts on other sites. Information-stealing malware attempts to steal a wide variety of data from an infected computer, including credentials saved in browsers, VPN clients, and FTP clients. This type of malware also attempts to steal SSH keys, credit cards, cookies, browsing history, and cryptocurrency wallets. The stolen data is collected in text files and images, which are stored in archives called "logs." These logs are then uploaded to a remote server to be collected later by the attacker. Regardless of how the credentials are stolen, they are then used to breach accounts owned by the victim, sold to other threat actors on cybercrime marketplaces, or released for free on hacker forums to gain reputation amongst the hacking community. The Naz.API is a dataset allegedly containing over 1 billion lines of stolen credentials compiled from credential stuffing lists and from information-stealing malware logs. It should be noted that while the Naz.API dataset name includes the word "Naz," it is not related to network attached storage (NAS) devices. This dataset has been floating around the data breach community for quite a while but rose to notoriety after it was used to fuel an open-source intelligence (OSINT) platform called illicit.services. This service allows visitors to search a database of stolen information, including names, phone numbers, email addresses, and other personal data. 
The service shut down in July 2023 out of concerns it was being used for doxxing and SIM-swapping attacks. However, the operator enabled the service again in September. Illicit.services used data from various sources, but one of its largest sources was the Naz.API dataset, which was shared privately among a small number of people. Each line in the Naz.API data consists of a login URL, its login name, and an associated password stolen from a person's device, as shown [here]. "Here's the back story: this week I was contacted by a well-known tech company that had received a bug bounty submission based on a credential stuffing list posted to a popular hacking forum," explained Troy Hunt, the creator of Have I Been Pwned, in a blog post. "Whilst this post dates back almost 4 months, it hadn't come across my radar until now and inevitably, also hadn't been sent to the aforementioned tech company." "They took it seriously enough to take appropriate action against their (very sizeable) user base which gave me enough cause to investigate it further than your average cred stuffing list." To check if your credentials are in the Naz.API dataset, you can visit Have I Been Pwned.Read more of this story at Slashdot.
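Alongside the email-address search described above, Have I Been Pwned's companion Pwned Passwords service lets you check a password against known breach corpora without ever sending the password itself, using a k-anonymity range query: only the first five hex characters of the password's SHA-1 hash go to the server, and the match against the returned suffixes happens client-side. A minimal sketch of the client-side hashing step (the network call itself is omitted):

```python
# Sketch of the k-anonymity range query used by Pwned Passwords:
# hash the password locally, send only the 5-character hash prefix,
# then search the server's response for the remaining 35 characters.
import hashlib

def hibp_range_parts(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    # Only `prefix` is sent to https://api.pwnedpasswords.com/range/<prefix>;
    # the full hash never leaves the client. The response lists suffixes
    # with breach counts, and `suffix` is matched locally.
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
```

Because the server only ever sees a 5-character prefix shared by hundreds of unrelated hashes, it cannot tell which password was actually being checked.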
"The ambient light sensors present in most mobile devices can be accessed by software without any special permissions, unlike permissions required for accessing the microphone or the cameras," writes longtime Slashdot reader BishopBerkeley. "When properly interrogated, the data from the light sensor can reveal much about the user." IEEE Spectrum reports: While that may not seem to provide much detailed information, researchers have already shown these sensors can detect light intensity changes that can be used to infer what kind of TV programs someone is watching, what websites they are browsing or even keypad entries on a touchscreen. Now, [Yang Liu, a PhD student at MIT] and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet's screen, it's possible to generate images of a user's hands as they interact with the tablet. While the images are low-resolution and currently take impractically long to capture, he says this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device. [...] "The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale," says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. "However, I would not rule out the significance of targeted collections for tailored operations against chosen targets." But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard that limited access to the light sensor API, which has already been adopted by browser vendors. Liu notes, however, that there are still no blanket restrictions for Android apps. 
In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks. Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.Read more of this story at Slashdot.
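The cross-referencing technique described above, displaying specially tailored patterns while logging scalar sensor readings, amounts to solving a linear inverse problem: each frame yields one number, and with enough independent frames the occluding "scene" can be recovered. A toy numerical sketch of that idea (the variable names and the random toy data are invented for illustration; the real attack is far more involved):

```python
# Toy linear model of the screen/ambient-light-sensor attack: each
# displayed pattern produces one scalar sensor reading (modeled here as
# the dot product of the pattern with the scene in front of the screen).
# With as many independent patterns as unknowns, the scene is recoverable.
import numpy as np

rng = np.random.default_rng(0)
n = 16                         # a flattened 4x4 "image"
scene = rng.random(n)          # unknown occlusion pattern (e.g., a hand)

patterns = rng.random((n, n))  # one known on-screen pattern per frame
readings = patterns @ scene    # one sensor reading per frame

# Recover the scene from the readings and the known patterns.
recovered, *_ = np.linalg.lstsq(patterns, readings, rcond=None)
```

This also makes the researchers' trade-off concrete: fewer pixels (smaller `n`) means fewer frames and thus shorter acquisition times, at the cost of image resolution.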