Feed slashdot Slashdot

Link https://slashdot.org/
Feed https://rss.slashdot.org/Slashdot/slashdotMain
Copyright Copyright Slashdot Media. All Rights Reserved.
Updated 2024-11-24 04:30
Laid-off Techies Face 'Sense of Impending Doom' With Job Cuts at Highest Since Dot-com Crash
An anonymous reader shares a report: Since the start of the year, more than 50,000 workers have been laid off from over 200 tech companies, according to tracking website Layoffs.fyi. It's a continuation of the predominant theme of 2023, when more than 260,000 workers across nearly 1,200 tech companies lost their jobs. Alphabet, Amazon, Meta and Microsoft have all taken part in the downsizing this year, along with eBay, Unity Software, SAP and Cisco. Wall Street has largely cheered on the cost-cutting, sending many tech stocks to record highs on optimism that spending discipline coupled with efficiency gains from artificial intelligence will lead to rising profits. PayPal announced in January that it was eliminating 9% of its workforce, or about 2,500 jobs. For the tens of thousands of people in Croisant's [anecdote in the linked story] position, the path toward reemployment is daunting. All told, 2023 was the second-biggest year of cuts on record in the technology sector, behind only the dot-com crash in 2001, according to outplacement firm Challenger, Gray & Christmas. Not since the spectacular flameouts of Pets.com, eToys and Webvan have so many tech workers lost their jobs in such a short period of time. Last month's job cut count was the highest of any February since 2009, when the financial crisis forced companies into cash preservation mode.Read more of this story at Slashdot.
NVIDIA Partners With Ubisoft To Further Develop Its AI-driven NPCs
NVIDIA has been working on adding generative AI to non-playable characters (NPCs) for a while now. The company is hoping a newly-announced partnership with Ubisoft will accelerate development of this technology and, ultimately, bring these AI-driven NPCs to modern games. From a report: Ubisoft helped build new "NEO NPCs" by using NVIDIA's Avatar Cloud Engine (ACE) technology, with an assist from dynamic NPC experts Inworld AI. The end result? Characters that don't repeat the same phrase over and over, while ignoring the surrounding violent mayhem. These NEO NPCs are said to interact in real time with players, the environment and other in-game characters. NVIDIA says this opens up "new possibilities for emergent storytelling." To that end, Ubisoft's narrative team built complete backgrounds, knowledge bases and conversational styles for two NPCs as a proof of concept.Read more of this story at Slashdot.
US Broadband Providers To Begin Providing New Comparison Labels
Major U.S. broadband internet providers must start displaying information similar to nutrition labels on food products to help consumers shop for services starting on April 10, under new rules from the Federal Communications Commission. From a report: Verizon Communications said it will begin providing the labels on Wednesday. The FCC first moved to mandate the labels in 2022. Smaller providers will be required to provide labels starting in October. The rules require broadband providers to display, at the point of sale, labels that show prices, speeds, fees and data allowances for both wireless and wired products. Verizon Chief Customer Experience Officer Brian Higgins said in an interview the labels will help consumers make "an equal comparison" between product offerings, speeds and fees. Higgins said standardized labels across the industry "make it easier for customers to do a comparison of which provider is going to be the best fit for their needs." He said customers will still need to research various bundling offers across carriers. The labels were first unveiled as a voluntary program in 2016. Congress ordered the FCC to mandate them under the 2021 infrastructure law. "Consumers will finally get information they can use to comparison shop, avoid junk fees, and make informed choices about which high-speed internet service is the best fit for their needs and budget," FCC Chair Jessica Rosenworcel said. Read more of this story at Slashdot.
Apex Legends Hacker Said He Hacked Tournament Games 'For Fun'
An anonymous reader shares a report: On Sunday, the world of video games was shaken by a hacking and cheating scandal. During a competitive esports tournament of Apex Legends, a free-to-play shooter video game played by hundreds of thousands of players daily, hackers appeared to insert cheats into the games of two well-known streamers -- effectively hacking the players midgame. "Wait, what the fuck? I'm getting hacked, I'm getting hacked bro, I'm getting hacked," said one of the players allegedly compromised during a livestream of the gameplay. The incidents forced the organizers of the Apex Legends Global Series tournament, which has a $5 million total prize pool, to postpone the event indefinitely "due to the competitive integrity of this series being compromised." As the midgame hacks were underway, the game's chatbot displayed messages on-screen that appeared to come from the hackers: "Apex hacking global series, by Destroyer2009 &R4andom," the messages read. In an interview with TechCrunch, the hacker Destroyer2009 took credit for the hacks, saying that he did it "just for fun," and with the goal of forcing the Apex Legends' developers to fix the vulnerability he exploited. The hacks sent the Apex Legends community into a frenzy, with countless streamers reacting to the incidents, and some players suggesting Apex Legends is not safe to play, because every player could be at risk of getting hacked not only in-game, but potentially having their computers hacked, too. Destroyer2009 declined to provide details of how he allegedly pulled off hacking the two players midgame, or what specific vulnerabilities he exploited. "I really don't want to go into the details until everything is fully patched and everything goes back to normal," the hacker said. The only thing Destroyer2009 said regarding the technique he used was that the vulnerability "has nothing to do with the server and I've never touched anything outside of the Apex process," and that he did not hack the two players' computers directly. The hacks "never went outside of the game," he said.Read more of this story at Slashdot.
Why Do People Let Their Life Insurance Lapse?
The abstract of a new paper published in the Journal of Financial Economics: We study aggregate lapsation risk in the life insurance sector. We construct two lapsation risk factors that explain a large fraction of the common variation in lapse rates of the 30 largest life insurance companies. The first is a cyclical factor that is positively correlated with credit spreads and unemployment, while the second factor is a trend factor that correlates with the level of interest rates. Using a novel policy-level database from a large life insurer, we examine the heterogeneity in risk factor exposures based on policy and policyholder characteristics. Young policyholders with higher health risk in low-income areas are more likely to lapse their policies during economic downturns. We explore the implications for hedging and valuation of life insurance contracts. Ignoring aggregate lapsation risk results in mispricing of life insurance policies. The calibrated model points to overpricing on average. In the cross-section, young, low-income, and high-health risk households face higher effective mark-ups than the old, high-income, and healthy. Read more of this story at Slashdot.
Intel Awarded Up To $8.5 Billion in CHIPS Act Grants, With Billions More in Loans Available
The White House said Wednesday Intel has been awarded up to $8.5 billion in CHIPS Act funding, as the Biden administration ramps up its effort to bring semiconductor manufacturing to U.S. soil. From a report: Intel could receive an additional $11 billion in loans from the CHIPS and Science Act, which was passed in 2022. The awards will be announced by President Joe Biden in Arizona on Wednesday. The money will help "leading-edge semiconductors made in the United States" keep "America in the driver's seat of innovation," U.S. Secretary of Commerce Gina Raimondo said on a call with reporters. Intel and the White House said their agreement is nonbinding and preliminary and could change. Intel has long been a stalwart of the U.S. semiconductor industry, developing chips that power many of the world's PCs and data center servers. However, the company has been eclipsed in revenue by Nvidia, which leads in artificial intelligence chips, and has been surpassed in market cap by rival AMD and mobile phone chipmaker Qualcomm.Read more of this story at Slashdot.
Ethereum Foundation Under Investigation by 'State Authority'
CoinDesk: The Ethereum Foundation -- the Swiss non-profit organization at the heart of the Ethereum ecosystem -- is under investigation by an unnamed "state authority," according to the group's website's GitHub repository. The scope of the investigation and its focus were unknown at press time. According to the GitHub commit dated Feb. 26, 2024, "we have received a voluntary enquiry from a state authority that included a requirement for confidentiality." The investigation comes during a time of change for Ethereum's technology. Ethereum is the second-largest blockchain by market cap after Bitcoin, launching in 2015 following an initial coin offering for the chain's native ETH token. Earlier this month, the chain underwent a major technical upgrade, dubbed Dencun, designed to bring down transaction costs for users of Ethereum-based layer-2 platforms. Read more of this story at Slashdot.
OpenAI's Chatbot Store is Filling Up With Spam
An anonymous reader shares a report: When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI's generative AI models, onstage at the company's first-ever developer conference in November, he described them as a way to "accomplish all sorts of tasks" -- from programming to learning about esoteric scientific subjects to getting workout pointers. "Because [GPTs] combine instructions, expanded knowledge and actions, they can be more helpful to you," Altman said. "You can build a GPT ... for almost anything." He wasn't kidding about the anything part. TechCrunch found that the GPT Store, OpenAI's official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI's moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, serve as little more than funnels to third-party paid services, advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.Read more of this story at Slashdot.
Users Ditch Glassdoor, Stunned By Site Adding Real Names Without Consent
Readers waspleg and SpzToid shared the following report: Glassdoor, where employees go to leave anonymous reviews of employers, has recently begun adding real names to user profiles without users' consent. Glassdoor acquired Fishbowl, a professional networking app that integrated with Glassdoor last July. This acquisition meant that every Glassdoor user was automatically signed up for a Fishbowl account. And because Fishbowl requires users to verify their identities, Glassdoor's terms of service changed to require all users to be verified. Ever since Glassdoor's integration with Fishbowl, Glassdoor's terms say that Glassdoor 'may update your Profile with information we obtain from third parties. We may also use personal data you provide to us via your resume(s) or our other services.' This effort to gather information on Fishbowl users includes Glassdoor staff consulting publicly available sources to verify information that is then used to update Glassdoor users' accounts.Read more of this story at Slashdot.
OpenAI To Release 'Materially Better' GPT-5 For Its Chatbot Mid-Year, Report Says
An anonymous reader shares a report: The generative AI company helmed by Sam Altman is on track to put out GPT-5 sometime mid-year, likely during summer, according to two people familiar with the company. Some enterprise customers have recently received demos of the latest model and its related enhancements to the ChatGPT tool, another person familiar with the process said. These people, whose identities Business Insider has confirmed, asked to remain anonymous so they could speak freely. "It's really good, like materially better," said one CEO who recently saw a version of GPT-5. OpenAI demonstrated the new model with use cases and data unique to his company, the CEO said. He said the company also alluded to other as-yet-unreleased capabilities of the model, including the ability to call AI agents being developed by OpenAI to perform tasks autonomously. The company does not yet have a set release date for the new model, meaning current internal expectations for its release could change. OpenAI is still training GPT-5, one of the people familiar said. After training is complete, it will be safety tested internally and further "red teamed," a process where employees and typically a selection of outsiders challenge the tool in various ways to find issues before it's made available to the public.Read more of this story at Slashdot.
'Disabling Cyberattacks' Are Hitting Critical US Water Systems, White House Warns
An anonymous reader quotes a report from Ars Technica: The Biden administration on Tuesday warned the nation's governors that drinking water and wastewater utilities in their states are facing "disabling cyberattacks" by hostile foreign nations that are targeting mission-critical plant operations. "Disabling cyberattacks are striking water and wastewater systems throughout the United States," Jake Sullivan, assistant to the President for National Security Affairs, and Michael S. Regan, administrator of the Environmental Protection Agency, wrote in a letter. "These attacks have the potential to disrupt the critical lifeline of clean and safe drinking water, as well as impose significant costs on affected communities." [...] "Drinking water and wastewater systems are an attractive target for cyberattacks because they are a lifeline critical infrastructure sector but often lack the resources and technical capacity to adopt rigorous cybersecurity practices," Sullivan and Regan wrote in Tuesday's letter. They went on to urge all water facilities to follow basic security measures such as resetting default passwords and keeping software updated. They linked to this list of additional actions, published by CISA and guidance and tools jointly provided by CISA and the EPA. They went on to provide a list of cybersecurity resources available from private sector companies. The letter extended an invitation for secretaries of each state's governor to attend a meeting to discuss better securing the water sector's critical infrastructure. It also announced that the EPA is forming a Water Sector Cybersecurity Task Force to identify vulnerabilities in water systems. The virtual meeting will take place on Thursday. "EPA and NSC take these threats very seriously and will continue to partner with state environmental, health, and homeland security leaders to address the pervasive and challenging risk of cyberattacks on water systems," Regan said in a separate statement.Read more of this story at Slashdot.
Physicist Claims Universe Has No Dark Matter and Is Twice As Old As We Thought
schwit1 shares a report from ScienceAlert: Sound waves fossilized in the maps of galaxies across the Universe could be interpreted as signs of a Big Bang that took place 13 billion years earlier than current models suggest. Last year, theoretical physicist Rajendra Gupta from the University of Ottawa in Canada published a rather extraordinary proposal that the Universe's currently accepted age is a trick of the light, one that masks its truly ancient state while also ridding us of the need to explain hidden forces. Gupta's latest analysis suggests oscillations from the earliest moments in time preserved in large-scale cosmic structures support his claims. "The study's findings confirm that our previous work about the age of the Universe being 26.7 billion years has allowed us to discover that the Universe does not require dark matter to exist," says Gupta. "In standard cosmology, the accelerated expansion of the Universe is said to be caused by dark energy but is in fact due to the weakening forces of nature as it expands, not due to dark energy." [...] Current cosmological models make the reasonable assumption that certain forces governing the interactions of particles have remained constant throughout time. Gupta challenges a specific example of this 'coupling constant', asking how it might affect the spread of space over exhaustively long periods of time. It's hard enough for any novel hypothesis to survive the intense scrutiny of the scientific community. But Gupta's suggestion isn't even entirely new -- it's loosely based on an idea that was shown the door nearly a century ago. In the late 1920s, Swiss physicist Fritz Zwicky wondered if the reddened light of far distant objects was a result of lost energy, like a marathon runner exhausted by a long journey across the eons of space. His 'tired light' hypothesis was in competition with the now-accepted theory that light's red-shifted frequency is due to the cumulative expansion of space tugging at light waves like a stretched spring. The consequences of Gupta's version of the tired light hypothesis -- what is referred to as covarying coupling constants plus tired light, or CCC+TL -- would affect the Universe expansion, doing away with mysterious pushing forces of dark energy and blaming changing interactions between known particles for the increased stretching of space. To replace existing models with CCC+TL, Gupta would need to convince cosmologists his model does a better job of explaining what we see at large. His latest paper attempts to do that by using CCC+TL to explain fluctuations in the spread of visible matter across space caused by sound waves in a newborn Universe, and the glow of ancient dawn known as the cosmic microwave background. While his analysis concludes his hybrid tired light theory can play nicely with certain features of the Universe's residual echoes of light and sound, it does so only if we also ditch the idea that dark matter is also a thing. The research has been published in The Astrophysical Journal.Read more of this story at Slashdot.
Europe Turns To the Falcon 9 To Launch Its Navigation Satellites
The European Union has agreed to launch four Galileo navigation satellites on SpaceX's Falcon 9 rocket at a 30 percent premium over the standard launch price. Ars Technica reports: According to Politico, the security agreement permits staff working for the EU and European Space Agency to have access to the launch pad at all times and, should there be a mishap with the mission, the first opportunity to retrieve debris. With the agreement, final preparations can begin for two launches of two satellites each, on the Falcon 9 rocket from Florida. These Galileo missions will occur later this year. The satellites, which each weigh about 700 kg, will be launched into an orbit about 22,000 km above the planet. The heightened security measures are due to the proprietary technology incorporated into the satellites, which cost hundreds of millions of euros to build; they perform a similar function to US-manufactured Global Positioning System satellites. The Florida launches will be the first time Galileo satellites, which are used for civilian and military purposes, have been exported outside of European territory. Due to the extra overhead related to the national security mission, the European Union agreed to pay 180 million euros for the two launches, or about $196 million. This represents about a 30 percent premium over the standard launch price of $67 million for a Falcon 9 launch. Over the past two years, the European Space Agency (ESA) had to rely on SpaceX for several launches, including significant projects like the Euclid space telescope and other ESA satellites, due to the cessation of collaborations with Roscosmos after the invasion of Ukraine and delays in the Ariane 6 rocket's development. With the Ariane 5 retired and no immediate replacement, Europe's access to space was compromised. That said, the Ariane 6 is working towards a launch window in the coming months, promising a return to self-reliance for ESA with a packed schedule of missions ahead.Read more of this story at Slashdot.
Only Seven Countries Meet WHO Air Quality Standard, Research Finds
An anonymous reader quotes a report from The Guardian: Only seven countries are meeting an international air quality standard, with deadly air pollution worsening in places due to a rebound in economic activity and the toxic impact of wildfire smoke, a new report has found. Of 134 countries and regions surveyed in the report, only seven -- Australia, Estonia, Finland, Grenada, Iceland, Mauritius and New Zealand -- are meeting a World Health Organization (WHO) guideline limit for tiny airborne particles expelled by cars, trucks and industrial processes. The vast majority of countries are failing to meet this standard for PM2.5, a type of microscopic speck of soot less than the width of a human hair that when inhaled can cause a myriad of health problems and deaths, risking serious implications for people, according to the report by IQAir, a Swiss air quality organization that draws data from more than 30,000 monitoring stations around the world. While the world's air is generally much cleaner than it was in much of the past century, there are still places where the pollution levels are particularly dangerous. The most polluted country, Pakistan, has PM2.5 levels more than 14 times higher than the WHO standard, the IQAir report found, with India, Tajikistan and Burkina Faso the next most polluted countries. But even in wealthy and fast-developing countries, progress in cutting air pollution is under threat. Canada, long considered as having some of the cleanest air in the western world, became the worst for PM2.5 last year due to record wildfires that ravaged the country, sending toxic smoke spewing across the country and into the US. In China, meanwhile, improvements in air quality were complicated last year by a rebound in economic activity in the wake of the Covid-19 pandemic, with the report finding a 6.5% increase in PM2.5 levels. The most polluted urban area in the world last year was Begusarai in India, the sixth annual IQAir report found, with India home to the four most polluted cities in the world. Much of the developing world, particularly countries in Africa, lacks reliable air quality measurements, however. The WHO lowered its guideline for "safe" PM2.5 levels in 2021 to five micrograms per cubic meter and by this measure many countries, such as those in Europe that have cleaned up their air significantly in the past 20 years, fall short. But even this more stringent guideline may not fully capture the risk of insidious air pollution. Research released by US scientists last month found there is no safe level of PM2.5, with even the smallest exposures linked to an increase in hospitalizations for conditions such as heart disease and asthma. Read more of this story at Slashdot.
Nvidia's Jensen Huang Says AGI Is 5 Years Away
Haje Jan Kamps writes via TechCrunch: Artificial General Intelligence (AGI) -- often referred to as "strong AI," "full AI," "human-level AI" or "general intelligent action" -- represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks (such as detecting product flaws, summarizing the news, or building you a website), AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia's annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject -- not least because he finds himself misquoted a lot, he says. The frequency of the question makes sense: The concept raises existential questions about humanity's role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI's decision-making processes and objectives, which might not align with human values or priorities (a concept explored in depth in science fiction since at least the 1940s). There's concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed. When sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity -- or at least the current status quo. Needless to say, AI CEOs aren't always eager to tackle the subject. Predicting when we will see a passable AGI depends on how you define AGI, Huang argues, and draws a couple of parallels: Even with the complications of time-zones, you know when new year happens and 2025 rolls around. If you're driving to the San Jose Convention Center (where this year's GTC conference is being held), you generally know you've arrived when you can see the enormous GTC banners. The crucial point is that we can agree on how to measure that you've arrived, whether temporally or geospatially, where you were hoping to go. "If we specified AGI to be something very specific, a set of tests where a software program can do very well -- or maybe 8% better than most people -- I believe we will get there within 5 years," Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he's not willing to make a prediction. Fair enough. Read more of this story at Slashdot.
Modern Web Bloat Means Some Pages Load 21MB of Data
Christopher Harper reports via Tom's Hardware: Earlier this month, Danluu.com released an exhaustive 23-page analysis/op-ed/manifesto on the current status of unoptimized web pages and web app performance, finding that just loading a web page can even bog down an entry-level device that can run the popular game PUBG at 40 fps. In fact, the Wix webpage requires loading 21MB of data for one page, while the more famous websites Patreon and Threads load 13MB of data for one page. This can result in slow load times that reach up to 33 seconds or, in some cases, result in the page failing to load at all. As the testing above shows, some of the most brutally intensive websites include the likes of... Quora, and basically every major social media platform. Newer content production platforms like Squarespace and newer Forum platforms like Discourse also have significantly worse performance than their older counterparts, often to the point of unusability on some devices. The Tecno S8C, one of the prominent entry-level phones common in emerging markets, is one particularly compelling test device that stuck. The device is actually quite impressive in some ways, including its ability to run PlayerUnknown's Battlegrounds Mobile at 40 FPS -- but the same device can't even run Quora and experiences nigh-unusable lag when scrolling on social media sites. That example is most likely the best summation of the overall point, which is that modern web and app design is increasingly trending toward an unrealistic assumption of ever-increasing bandwidth and processing. Quora is a website where people answer questions -- there is absolutely no reason any of these websites should be harder to run than a Battle Royale game.Read more of this story at Slashdot.
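To make the page-weight numbers above concrete, here is a minimal, hedged sketch of one way to estimate how much data a page pulls in: fetch the HTML, collect the URLs in its src attributes, download each, and add up the bytes. The estimatePageWeight function and the example URL are illustrative placeholders, not Danluu.com's methodology, and because nothing here executes JavaScript the result is only a lower bound for script-heavy sites like the ones discussed above.

```typescript
// Rough page-weight probe (Node 18+, global fetch): download a page's HTML,
// pull statically referenced subresources out of src="..." attributes, fetch
// each one, and sum the bytes. CSS, fonts, and anything loaded by JavaScript
// are ignored, so this is a deliberate lower bound, not a full audit.

async function fetchBytes(url: string): Promise<number> {
  const res = await fetch(url, { redirect: "follow" });
  return (await res.arrayBuffer()).byteLength;
}

async function estimatePageWeight(pageUrl: string): Promise<number> {
  const html = await (await fetch(pageUrl)).text();
  let total = new TextEncoder().encode(html).byteLength;

  const resourceUrls = new Set<string>();
  for (const match of html.matchAll(/\bsrc\s*=\s*"([^"]+)"/g)) {
    try {
      resourceUrls.add(new URL(match[1], pageUrl).toString()); // resolve relative URLs
    } catch {
      // Skip attribute values that do not parse as URLs.
    }
  }

  for (const url of resourceUrls) {
    try {
      total += await fetchBytes(url);
    } catch {
      // Unreachable resources are simply left out of the estimate.
    }
  }
  return total;
}

// Hypothetical usage:
// estimatePageWeight("https://example.com/").then((b) => console.log(`${(b / 1e6).toFixed(1)} MB`));
```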
Job Boards Are Rife With 'Ghost Jobs'
"Job openings across the country are seemingly endless," writes longtime Slashdot reader smooth wombat. "Millions of jobs are listed, but are they real? Companies may post job openings with no intent to ever fill it. These are known as ghost jobs and there are more than most people realize. The BBC reports: Clarify Capital, a New York-based business loan provider, surveyed 1,000 hiring managers, and found nearly seven in 10 jobs stay open for more than 30 days, with 10% unfilled for more than half a year. Half the respondents reported they keep job listings open indefinitely because they "always open to new people." More than one in three respondents said they kept the listings active to build a pool of applicants in case of turnover -- not because a role needs to be filled in a timely manner. The posted roles are more than just a talent vacuum sucking up resumes from applicants. They are also a tool for shaping perception inside and outside of the company. More than 40% of hiring managers said they list jobs they aren't actively trying to fill to give the impression that the company is growing. A similar share said the job listings are made to motivate employees, while 34% said the jobs are posted to placate overworked staff who may be hoping for additional help to be brought on. "Ghost jobs are everywhere," says Geoffrey Scott, senior content manager and hiring manager at Resume Genius, a US company that helps workers design their resumes. "We discovered a massive 1.7 million potential ghost job openings on LinkedIn just in the US," says Scott. In the UK, StandOut CV, a London-based career resources company, found more than a third of job listings in 2023 were ghost jobs, defined as listings posted for more than 30 days. "Experts caution not every posting that seems like a ghost job is one," notes the report. "Still, whether these postings are ghost jobs -- or simply look and feel like them -- the result is similar. Jobseekers end up discouraged and burnt out."Read more of this story at Slashdot.
Kids' Cartoons Get a Free Pass From YouTube's Deepfake Disclosure Rules
An anonymous reader quotes a report from Wired: YouTube has updated its rulebook for the era of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they're seeing isn't real. YouTube says it applies to "realistic" altered media such as "making it appear as if a real building caught fire" or swapping "the face of one individual with another's." The new policy shows YouTube taking steps that could help curb the spread of AI-generated misinformation as the US presidential election approaches. It is also striking for what it permits: AI-generated animations aimed at kids are not subject to the new synthetic content disclosure rules. YouTube's new policies exclude animated content altogether from the disclosure requirement. This means that the emerging scene of get-rich-quick, AI-generated content hustlers can keep churning out videos aimed at children without having to disclose their methods. Parents concerned about the quality of hastily made nursery-rhyme videos will be left to identify AI-generated cartoons by themselves. YouTube's new policy also says creators don't need to flag use of AI for "minor" edits that are "primarily aesthetic" such as beauty filters or cleaning up video and audio. Use of AI to "generate or improve" a script or captions is also permitted without disclosure. [...] The exemption for animation in YouTube's new policy could mean that parents cannot easily filter such videos out of search results or keep YouTube's recommendation algorithm from autoplaying AI-generated cartoons after setting up their child to watch popular and thoroughly vetted channels like PBS Kids or Ms. Rachel. Some problematic AI-generated content aimed at kids does require flagging under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to push pseudoscience and conspiracy theories, including climate change denialism. These videos imitated conventional live-action educational videos -- showing, for example, the real pyramids of Giza -- so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid videos then went on to suggest that the structures can generate electricity.) This new policy would crack down on that type of video. "We require kids content creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic," says YouTube spokesperson Elena Hernandez. "We don't require disclosure of content that is clearly unrealistic and isn't misleading the viewer into thinking it's real." Read more of this story at Slashdot.
Saudi Arabia Plans $40 Billion Push Into Artificial Intelligence
According to the New York Times, Saudi Arabia's government plans to create a fund of about $40 billion to invest in artificial intelligence. Reuters reports: Representatives of Saudi Arabia's Public Investment Fund (PIF) have discussed a potential partnership with U.S. venture capital firm Andreessen Horowitz and other financiers in recent weeks, the newspaper reported. Andreessen Horowitz and PIF governor Yasir Al-Rumayyan have discussed the possibility of the U.S. firm setting up an office in Riyadh, according to the report. PIF officials also discussed what role Andreessen Horowitz could play and how such a fund would work, the newspaper said, adding the plans could still change. Other venture capitalists may participate in the kingdom's artificial intelligence fund, which is expected to commence in the second half of 2024, the newspaper said. Saudi representatives have indicated to potential partners that the country is interested in supporting a variety of tech start-ups associated with artificial intelligence, including chip makers and large-scale data centers, the report added. Last month, PIF's Al-Rumayyan pitched the kingdom as a prospective hub for artificial intelligence activity outside the U.S., citing its energy resources and funding capacity. Al-Rumayyan had said the kingdom had the "political will" to make artificial intelligence projects happen and ample funds it could deploy to nurture the technology's development. Read more of this story at Slashdot.
AI Researchers Have Started Reviewing Their Peers Using AI Assistance
Academics in the artificial intelligence field have started using generative AI services to help them review the machine learning work of their peers. In a new paper on arXiv, researchers analyzed the peer reviews of papers submitted to leading AI conferences, including ICLR 2024, NeurIPS 2023, CoRL 2023 and EMNLP 2023. The Register reports on the findings: The authors took two sets of data, or corpora -- one written by humans and the other one written by machines. And they used these two bodies of text to evaluate the evaluations -- the peer reviews of conference AI papers -- for the frequency of specific adjectives. "[A]ll of our calculations depend only on the adjectives contained in each document," they explained. "We found this vocabulary choice to exhibit greater stability than using other parts of speech such as adverbs, verbs, nouns, or all possible tokens." It turns out LLMs tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. And such statistical differences in word usage have allowed the boffins to identify reviews of papers where LLM assistance is deemed likely. "Our results suggest that between 6.5 percent and 16.9 percent of text submitted as peer reviews to these conferences could have been substantially modified by LLMs, i.e. beyond spell-checking or minor writing updates," the authors argued, noting that reviews of work in the scientific journal Nature do not exhibit signs of mechanized assistance. Several factors appear to be correlated with greater LLM usage. One is an approaching deadline: The authors found a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline. The researchers emphasized that their intention was not to pass judgment on the use of AI writing assistance, nor to claim that any of the papers they evaluated were written completely by an AI model. But they argued the scientific community needs to be more transparent about the use of LLMs. And they contended that such practices potentially deprive those whose work is being reviewed of diverse feedback from experts. What's more, AI feedback risks a homogenization effect that skews toward AI model biases and away from meaningful insight.Read more of this story at Slashdot.
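As a rough illustration of the adjective-counting idea described above, the sketch below tallies how often a few "LLM-flavored" adjectives appear per thousand words in two corpora and prints the frequencies side by side. The marker list and the sample reviews are invented placeholders, and the real paper goes further, fitting a statistical model over the full adjective vocabulary to estimate what fraction of reviews were LLM-modified; this shows only the flavor of the comparison, not the authors' actual method.

```typescript
// Toy version of the adjective-frequency comparison: count a few marker
// adjectives per 1,000 words in a human-written corpus and an LLM-written one.
// The adjective list and sample texts are placeholders for illustration only.

const markerAdjectives = ["commendable", "innovative", "comprehensive", "meticulous", "notable"];

function adjectiveFrequencies(corpus: string[]): Map<string, number> {
  const counts = new Map<string, number>(markerAdjectives.map((a): [string, number] => [a, 0]));
  let totalWords = 0;
  for (const doc of corpus) {
    const words = doc.toLowerCase().match(/[a-z]+/g) ?? [];
    totalWords += words.length;
    for (const word of words) {
      if (counts.has(word)) counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  // Normalize to occurrences per 1,000 words so corpora of different sizes are comparable.
  const perThousand = new Map<string, number>();
  for (const [adjective, count] of counts) {
    perThousand.set(adjective, (count / Math.max(totalWords, 1)) * 1000);
  }
  return perThousand;
}

// Hypothetical corpora: reviews known to be human-written vs. LLM-generated ones.
const humanReviews = ["The method is plausible, but the ablation section is thin and hard to follow."];
const llmReviews = ["This commendable and innovative paper provides a comprehensive, meticulous evaluation."];
console.log("human:", adjectiveFrequencies(humanReviews));
console.log("LLM:  ", adjectiveFrequencies(llmReviews));
```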
AT&T Says Leaked Data of 70 Million People Is Not From Its Systems
An anonymous reader quotes a report from BleepingComputer: AT&T says a massive trove of data impacting 71 million people did not originate from its systems after a hacker leaked it on a cybercrime forum and claimed it was stolen in a 2021 breach of the company. While BleepingComputer has not been able to confirm the legitimacy of all the data in the database, we have confirmed some of the entries are accurate, including those whose data is not publicly accessible for scraping. The data is from an alleged 2021 AT&T data breach that a threat actor known as ShinyHunters attempted to sell on the RaidForums data theft forum for a starting price of $200,000 and incremental offers of $30,000. The hacker stated they would sell it immediately for $1 million. AT&T told BleepingComputer then that the data did not originate from them and that its systems were not breached. "Based on our investigation today, the information that appeared in an internet chat room does not appear to have come from our systems," AT&T told BleepingComputer in 2021. When we told ShinyHunters that AT&T said the data did not originate from them, they replied, "I don't care if they don't admit. I'm just selling." AT&T continues to tell BleepingComputer today that they still see no evidence of a breach in their systems and still believe that this data did not originate from them. Today, another threat actor known as MajorNelson leaked data from this alleged 2021 data breach for free on a hacking forum, claiming it was the data ShinyHunters attempted to sell in 2021. This data includes names, addresses, mobile phone numbers, encrypted date of birth, encrypted social security numbers, and other internal information. However, the threat actors have decrypted the birth dates and social security numbers and added them to another file in the leak, making those also accessible. BleepingComputer has reviewed the data, and while we cannot confirm that all 73 million lines are accurate, we verified some of the data contains correct information, including social security numbers, addresses, dates of birth, and phone numbers. Furthermore, other cybersecurity researchers, such as Dark Web Informer, who first told BleepingComputer about the leaked data, and VX-Underground have also confirmed some of the data to be accurate. Despite AT&T's statement, BleepingComputer says if you were an AT&T customer before and through 2021, it's "[safe] to assume that your data was exposed and can be used in targeted attacks." Have I Been Pwned's Troy Hunt writes: "I have proven, with sufficient confidence, that the data is real and the impact is significant."Read more of this story at Slashdot.
Nicholas Hawkes, 39, Becomes First in England To Be Jailed for Cyber Flashing
A man has been sentenced for cyber flashing in England for the first time. From a report: Nicholas Hawkes, 39, from Basildon in Essex, was jailed for 66 weeks at Southend Crown Court today after he sent unsolicited photos of his erect penis to a 15-year-old girl and a woman on 9 February. The older victim took screenshots of the offending image on WhatsApp and reported Hawkes to the police the same day. Cyber flashing became a criminal offence in England with the passage of the Online Safety Act on 31 January. It has been a crime in Scotland since 2010. The offence covers the sending of an unsolicited sexual image to people via social media, dating apps, text message or data-sharing services such as Bluetooth and AirDrop. Victims of cyber flashing get lifelong anonymity from the time they report the offence, as it also falls under the Sexual Offences Act.Read more of this story at Slashdot.
Google DeepMind's New AI Assistant Helps Elite Soccer Coaches Get Even Better
Soccer teams are always looking to get an edge over their rivals. Whether it's studying players' susceptibility to injury, or opponents' tactics -- top clubs look at reams of data to give them the best shot of winning. They might want to add a new AI assistant developed by Google DeepMind to their arsenal. From a report: It can suggest tactics for soccer set-pieces that are even better than those created by professional club coaches. The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the biggest soccer clubs in the world. Corner kicks are awarded to an attacking team when the ball passes over the goal line after touching a player on the defending team. In a sport as free-flowing and unpredictable as soccer, corners -- like free kicks and penalties -- are rare instances in the game when teams can try out pre-planned plays. TacticAI uses predictive and generative AI models to convert each corner kick scenario -- such as a receiver successfully scoring a goal, or a rival defender intercepting the ball and returning it to their team -- into a graph, and the data from each player into a node on the graph, before modeling the interactions between each node. The work was published in Nature Communications today. Using this data, the model provides recommendations about where to position players during a corner to give them, for example, the best shot at scoring a goal, or the best combination of players to get up front. It can also try to predict the outcomes of a corner, including whether a shot will take place, or which player is most likely to touch the ball first.Read more of this story at Slashdot.
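As a hedged sketch of the graph encoding described above, with players as nodes and pairwise relations as edges, the snippet below shows one plausible way to represent a single corner-kick snapshot. The field names (position, velocity, heightCm, outcome) are illustrative choices, not TacticAI's actual feature schema, which is detailed in the Nature Communications paper.

```typescript
// One plausible way to encode a corner-kick snapshot as a graph: each player is
// a node with a small feature vector, and every ordered pair of players gets an
// edge so a graph network can model their interaction. Field names are
// illustrative, not TacticAI's real schema.

interface PlayerNode {
  playerId: string;
  team: "attacking" | "defending";
  position: [x: number, y: number];   // pitch coordinates in metres
  velocity: [vx: number, vy: number]; // metres per second
  heightCm: number;
}

interface PlayerEdge {
  from: string;     // playerId of the source node
  to: string;       // playerId of the target node
  distance: number; // metres between the two players at the moment of the corner
}

interface CornerKickGraph {
  nodes: PlayerNode[];
  edges: PlayerEdge[];
  outcome?: "shot" | "clearance" | "possession-retained"; // training label, when known
}

function buildCornerGraph(nodes: PlayerNode[], outcome?: CornerKickGraph["outcome"]): CornerKickGraph {
  const edges: PlayerEdge[] = [];
  for (const a of nodes) {
    for (const b of nodes) {
      if (a.playerId === b.playerId) continue;
      const dx = a.position[0] - b.position[0];
      const dy = a.position[1] - b.position[1];
      edges.push({ from: a.playerId, to: b.playerId, distance: Math.hypot(dx, dy) });
    }
  }
  return { nodes, edges, outcome };
}
```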
Mozilla Firefox 124 Is Now Available for Download
An anonymous reader writes: Mozilla Firefox 124 looks like a small update that only updates the Caret Browsing mode to also work in the PDF viewer and adds support for the Screen Wake Lock API to prevent devices from dimming or locking the screen when an application needs to keep running. The Firefox View feature has been updated as well in this release to allow users to sort open tabs by either recent activity (default setting) or tab order. Also, Firefox 124 expands Qwant's availability to all languages in the France region along with Belgium, Italy, Netherlands, Spain, and Switzerland. This release also adds support for using HTTP(S) and relative URLs when creating WebSockets, as well as support for the AbortSignal.any() static method, which takes an iterable of abort signals and returns an AbortSignal (more details are available here). For Android users, Firefox 124 enables the Pull to Refresh feature, which is now more robust than ever, by default and adds support for the HTML drag and drop API when using a mouse, which accepts plain text or HTML text by the drop operation from external apps. For macOS users, this release uses the fullscreen API for all types of full-screen windows, promising a better match to the expected macOS user experience for full-screen spaces, the Menubar, and the Dock. If you want to disable this feature, you'll need to set the full-screen-api.macos-native-full-screen preference to false in about:config. For Windows users, this release adds the ability to populate the Windows taskbar jump list more efficiently. According to Mozilla, this change should allow for a "smoother overall browsing experience." Read more of this story at Slashdot.
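For readers who haven't met it, here is a brief sketch of the AbortSignal.any() pattern mentioned above: combine a user-driven abort controller with a timeout so a single fetch is cancelled by whichever fires first. This is standard Web API usage rather than anything Firefox-specific; the URL and function names are placeholders.

```typescript
// AbortSignal.any() returns a signal that aborts as soon as any of its inputs
// does, so one request can honor both a manual "cancel" button and a timeout.

const userCancel = new AbortController();

async function loadReport(url: string): Promise<string> {
  const signal = AbortSignal.any([
    userCancel.signal,          // fires if the user clicks "cancel"
    AbortSignal.timeout(5_000), // fires automatically after five seconds
  ]);
  const res = await fetch(url, { signal });
  return res.text();
}

// Elsewhere in the UI: calling userCancel.abort() cancels the in-flight request.
```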
Microsoft Hires DeepMind Co-Founder Suleyman To Run Consumer AI, Hires Most of Inflection AI Startup Staff
Microsoft has named Mustafa Suleyman head of its consumer artificial intelligence business, hiring most of the staff from his Inflection AI startup as the software giant seeks to fend off Alphabet's Google in the fiercely contested market for AI products. From a report: Suleyman, who co-founded Google's DeepMind, will report to Chief Executive Officer Satya Nadella and oversee a range of projects, such as integrating an AI Copilot into Windows and adding conversational elements to the Bing search engine. His hiring will put Microsoft's consumer AI work under one leader for the first time. Inflection, a rival of Microsoft's key AI partner OpenAI, is exiting its Pi consumer chatbot effort and shifting to selling AI software to businesses. Karen Simonyan, Inflection's co-founder, will join Microsoft as chief scientist for the new consumer AI group. In the past year, Nadella has been revamping his company's major products around artificial intelligence technology from OpenAI. Under the Copilot brand, Microsoft has blended an AI assistant into products including Windows, consumer and enterprise Office software, Bing and security tools. With Google and others trying to catch up, Nadella's multibillion-dollar investment in OpenAI has given Microsoft a first-mover advantage. And yet, 13 months after unveiling an AI-enhanced Bing search, the company has made few gains in that market, which remains dominated by Google.Read more of this story at Slashdot.
Apple Working on Solution for App Store Fee That Could Bankrupt Viral Apps
Joe_Dragon shares a report: Since Apple announced plans for the 0.50 euro Core Technology Fee that apps distributed using the new EU App Store business terms must pay, there have been ongoing concerns about what that fee might mean for a developer that suddenly has a free app go viral. Apple's VP of regulatory law Kyle Andeer today met with developers during a workshop on Apple's Digital Markets Act compliance. iOS developer Riley Testut, best known for Game Boy Advance emulator GBA4iOS, asked what Apple would do if a young developer unwittingly racked up millions in fees. Testut explained that when he was younger, that exact situation happened to him. Back in 2014 as an 18-year-old high school student, he released GBA4iOS outside of the App Store using an enterprise certificate. The app was unexpectedly downloaded more than 10 million times, and under Apple's new rules with the Core Technology Fee, Testut said that would have cost 5 million euros, bankrupting his family. He asked whether Apple would actually collect that fee in a similar situation, charging the high price even though it could financially ruin a family. In response, Andeer said that Apple is working on figuring out a solution, but has not done so yet. He said Apple does not want to stifle innovation and wants to figure out how to keep young app makers and their parents from feeling scared to release an app. Read more of this story at Slashdot.
Intermittent Fasting Linked To Higher Risk of Cardiovascular Death, Research Suggests
Several readers shared the following report: Intermittent fasting, a diet pattern that involves alternating between periods of fasting and eating, can lower blood pressure and help some people lose weight, past research has indicated. But an analysis presented Monday at the American Heart Association's scientific sessions in Chicago challenges the notion that intermittent fasting is good for heart health. Instead, researchers from Shanghai Jiao Tong University School of Medicine in China found that people who restricted food consumption to less than eight hours per day had a 91% higher risk of dying from cardiovascular disease over a median period of eight years, relative to people who ate across 12 to 16 hours. It's some of the first research investigating the association between time-restricted eating (a type of intermittent fasting) and the risk of death from cardiovascular disease. The analysis -- which has not yet been peer-reviewed or published in an academic journal -- is based on data from the Centers for Disease Control and Prevention's National Health and Nutrition Examination Survey collected between 2003 and 2018. The researchers analyzed responses from around 20,000 adults who recorded what they ate for at least two days, then looked at who had died from cardiovascular disease after a median follow-up period of eight years. However, Victor Wenze Zhong, a co-author of the analysis, said it's too early to make specific recommendations about intermittent fasting based on his research alone.Read more of this story at Slashdot.
Nokia Tells Reddit It Infringes Some Patents in Lead-Up To IPO
An anonymous reader shares a report: Reddit, the social media platform gearing up for an initial public offering this week, said Nokia has accused it of infringing some of their patents. Nokia Technologies, the company's licensing business, sent Reddit a letter on Monday with the claims, and Reddit is evaluating them, according to a filing made Tuesday. Nokia's claims come as Reddit prepares for an initial public offering in an effort to raise hundreds of millions of dollars. The company has been working toward a listing for years, and its public market debut this week is set to become a high-profile addition to the year's roster of newly and soon-to-be public companies. Reddit said in the filing: "On March 18, 2024, Nokia sent us a letter indicating they believed that Reddit infringes certain of their patents. We will evaluate their claims. As we face increasing competition and become increasingly high profile, the possibility of receiving more intellectual property claims against us grows. In addition, various 'non-practicing entities,' and other intellectual property rights holders have asserted in the past, and may attempt to assert in the future, intellectual property claims against us and have sought, and may attempt to seek in the future, to monetize the intellectual property rights they own to extract value through licensing arrangements or other settlements."Read more of this story at Slashdot.
Commercial Bank of Ethiopia Glitch Lets Customers Withdraw Millions
Ethiopia's biggest commercial bank is scrambling to recoup large sums of money withdrawn by customers after a "systems glitch." From a report: The customers discovered early on Saturday that they could take out more cash than they had in their accounts at the Commercial Bank of Ethiopia (CBE). More than $40m was withdrawn or transferred to other banks, local media reported. It took several hours for the institution to freeze transactions. Much of the money was withdrawn from state-owned CBE by students, bank president Abe Sano told journalists on Monday. News of the glitch spread across universities largely via messaging apps and phone calls. Long lines formed at campus ATMs, with a student in western Ethiopia telling BBC Amharic people were withdrawing money until police officers arrived on campus to stop them.Read more of this story at Slashdot.
C++ Creator Rebuts White House Warning
An anonymous reader quotes a report from InfoWorld: C++ creator Bjarne Stroustrup has defended the widely used programming language in response to a Biden administration report that calls on developers to use memory-safe languages and avoid using vulnerable ones such as C++ and C. In a March 15 response to an inquiry from InfoWorld, Stroustrup pointed out strengths of C++, which was designed in 1979. "I find it surprising that the writers of those government documents seem oblivious of the strengths of contemporary C++ and the efforts to provide strong safety guarantees," Stroustrup said. "On the other hand, they seem to have realized that a programming language is just one part of a tool chain, so that improved tools and development processes are essential." Safety improvement always has been a goal of C++ development efforts, Stroustrup stressed. "Improving safety has been an aim of C++ from day one and throughout its evolution. Just compare the K&R C language with the earliest C++, and the early C++ with contemporary C++. My CppCon 2023 keynote outlines that evolution," he said. "Much quality C++ is written using techniques based on RAII (Resource Acquisition Is Initialization), containers, and resource management pointers rather than conventional C-style pointer messes." Stroustrup cited a number of efforts to improve C++ safety. "There are two problems related to safety. Of the billions of lines of C++, few completely follow modern guidelines, and peoples' notions of which aspects of safety are important differ. I and the C++ standard committee are trying to deal with that," he said. "Profiles is a framework for specifying what guarantees a piece of code requires and enable implementations to verify them. There are documents describing that on the committee's website -- look for WG21 -- and more are coming. However, some of us are not in a mood to wait for the committee's necessarily slow progress." Profiles, Stroustrup said, "is a framework that allows us to incrementally improve guarantees -- e.g., to eliminate most range errors relatively soon -- and to gradually introduce guarantees into large code bases through local static analysis and minimal run-time checks. My long-term aim for C++ is and has been for C++ to offer type and resource safety when and where needed. Maybe the current push for memory safety -- a subset of the guarantees I want -- will prove helpful to my efforts, which are shared by many in the C++ standards committee." Stroustrup previously defended the safety of C++ against the NSA, which recommended using memory-safe languages instead of C++ and C in a November 2022 bulletin.Read more of this story at Slashdot.
Astronaut Thomas Stafford, Commander of Apollo 10, Dies At 93
The Associated Press reports on the passing of astronaut Thomas P. Stafford, the commander of a dress rehearsal flight for the 1969 moon landing and the first U.S.-Soviet space linkup. He was 93. From the report: Stafford, a retired Air Force three-star general, took part in four space missions. Before Apollo 10, he flew on two Gemini flights, including the first rendezvous of two U.S. capsules in orbit. He died in a hospital near his Space Coast Florida home, said Max Ary, director of the Stafford Air & Space Museum in Weatherford, Oklahoma. Stafford was one of 24 NASA astronauts who flew to the moon, but he did not land on it. Only seven of them are still alive. After he put away his flight suit, Stafford was the go-to guy for NASA when it sought independent advice on everything from human Mars missions to safety issues to returning to flight after the 2003 space shuttle Columbia accident. He chaired an oversight group that looked into how to fix the then-flawed Hubble Space Telescope, earning a NASA public service award. "Tom was involved in so many things that most people were not aware of, such as being known as the 'Father of Stealth,'" Ary said in an email. Stafford was in charge of the famous 'Area 51' desert base that was the site of many UFO theories, but the home of testing of Air Force stealth technologies. The Apollo 10 mission in May 1969 set the stage for Apollo 11's historic mission two months later. Stafford and Gene Cernan took the lunar lander nicknamed Snoopy within 9 miles (14 kilometers) of the moon's surface. Astronaut John Young stayed behind in the main spaceship dubbed Charlie Brown. "The most impressive sight, I think, that really changed your view of things is when you first see Earth," Stafford recalled in a 1997 oral history, talking about the view from lunar orbit. Then came the moon's far side: "The Earth disappears. There's this big black void." Apollo 10's return to Earth set the world's record for fastest speed by a crewed vehicle at 24,791 mph (39,897 kph). After the moon landings ended, NASA and the Soviet Union decided on a joint docking mission and Stafford, a one-star general at the time, was chosen to command the American side. It meant intensive language training, being followed by the KGB while in the Soviet Union, and lifelong friendships with cosmonauts. The two teams of space travelers even went to Disney World and rode Space Mountain together before going into orbit and joining ships. "We have capture," Stafford radioed in Russian as the Apollo and Soyuz spacecraft hooked up. His Russian counterpart, Alexei Leonov, responded in English: "Well done, Tom, it was a good show. I vote for you." [...] The 1975 mission included two days during which the five men worked together on experiments. After, the two teams toured the world together, meeting President Gerald Ford and Soviet leader Leonid Brezhnev. "It helped prove to the rest of the world that two completely opposite political systems could work together," Stafford recalled at a 30th anniversary gathering in 2005. Later, Stafford was a central part of discussions in the 1990s that brought Russia into the partnership building and operating the International Space Station.Read more of this story at Slashdot.
Global Ocean Heat Has Hit a New Record Every Single Day For the Last Year
According to new data from the National Oceanic and Atmospheric Administration (NOAA), the world's oceans have hit a new temperature record every day since mid-March last year, fueling concerns for marine life and extreme weather across the planet. From a report: Global average ocean temperatures in 2023 were 0.25 degrees Celsius warmer than the previous year, said Gregory C. Johnson, a NOAA oceanographer. That rise "is equivalent to about two decades' worth of warming in a single year," he told CNN. "So it is quite large, quite significant, and a bit surprising." Scientists have said ocean heat is being supercharged by human-caused global warming, boosted by El Nino, a natural climate pattern marked by higher-than-average ocean temperatures. The main consequences are for marine life and global weather. Global ocean warmth can add more power to hurricanes and other extreme weather events, including scorching heat waves and intense rainfall. [...] "At times, the records (in the North Atlantic) have been broken by margins that are virtually statistically impossible," Brian McNoldy, a senior research associate at the University of Miami Rosenstiel School, told CNN. If very high ocean temperatures continue into the second half of 2024 and a La Nina event develops -- El Nino's counterpart that tends to amplify Atlantic hurricane season -- "this would increase the risk of a very active hurricane season," Hirschi said. About 90% of the world's excess heat produced by burning planet-heating fossil fuels is stored in the oceans. "Measuring ocean warming allows us to track the status and evolution of planetary warming," Schuckmann told CNN. "The ocean is the sentinel for global warming."Read more of this story at Slashdot.
EPA Bans Chrysotile Asbestos
An anonymous reader quotes a report from the Associated Press: The Environmental Protection Agency on Monday announced a comprehensive ban on asbestos, a carcinogen that kills tens of thousands of Americans every year but is still used in some chlorine bleach, brake pads and other products. The final rule marks a major expansion of EPA regulation under a landmark 2016 law that overhauled regulations governing tens of thousands of toxic chemicals in everyday products, from household cleaners to clothing and furniture. The new rule would ban chrysotile asbestos, the only ongoing use of asbestos in the United States. The substance is found in products such as brake linings and gaskets and is used to manufacture chlorine bleach and sodium hydroxide, also known as caustic soda, including some that is used for water purification. [...] The 2016 law authorized new rules for tens of thousands of toxic chemicals found in everyday products, including substances such as asbestos and trichloroethylene that for decades have been known to cause cancer yet were largely unregulated under federal law. Known as the Frank Lautenberg Chemical Safety Act, the law was intended to clear up a hodgepodge of state rules governing chemicals and update the Toxic Substances Control Act, a 1976 law that had remained unchanged for 40 years. The EPA banned asbestos in 1989, but the rule was largely overturned by a 1991 Court of Appeals decision that weakened the EPA's authority under TSCA to address risks to human health from asbestos or other existing chemicals. The 2016 law required the EPA to evaluate chemicals and put in place protections against unreasonable risks. Asbestos, which was once common in home insulation and other products, is banned in more than 50 countries, and its use in the U.S. has been declining for decades. The only form of asbestos known to be currently imported, processed or distributed for use in the U.S. is chrysotile asbestos, which is imported primarily from Brazil and Russia. It is used by the chlor-alkali industry, which produces bleach, caustic soda and other products. Most consumer products that historically contained chrysotile asbestos have been discontinued. While chlorine is a commonly used disinfectant in water treatment, there are only eight chlor-alkali plants in the U.S. that still use asbestos diaphragms to produce chlorine and sodium hydroxide. The plants are mostly located in Louisiana and Texas. The use of asbestos diaphragms has been declining and now accounts for less than one-third of the chlor-alkali production in the U.S., the EPA said. The EPA rule will ban imports of asbestos for chlor-alkali as soon as the rule is published but will phase in prohibitions on chlor-alkali use over five or more years to provide what the agency called "a reasonable transition period." A ban on most other uses of asbestos will take effect in two years. A ban on asbestos in oilfield brake blocks, aftermarket automotive brakes and linings and other gaskets will take effect in six months. The EPA rule allows asbestos-containing sheet gaskets to be used until 2037 at the U.S. Department of Energy's Savannah River Site in South Carolina to ensure that safe disposal of nuclear materials can continue on schedule. Separately, the EPA is also evaluating so-called legacy uses of asbestos in older buildings, including schools and industrial sites, to determine possible public health risks. A final risk evaluation is expected by the end of the year.Read more of this story at Slashdot.
Nvidia Reveals Blackwell B200 GPU, the 'World's Most Powerful Chip' For AI
Sean Hollister reports via The Verge: Nvidia's must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead -- with the new Blackwell B200 GPU and GB200 "superchip." Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors and that a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It "reduces cost and energy consumption by up to 25x" over an H100, says Nvidia. Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia's CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts. On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia says the GB200 has a somewhat more modest seven times the performance of an H100, and Nvidia says it offers 4x the training speed. Nvidia told journalists one of the key improvements is a second-gen transformer engine that doubles the compute, bandwidth, and model size by using four bits for each neuron instead of eight (thus, the 20 petaflops of FP4 I mentioned earlier). A second key difference only comes when you link up huge numbers of these GPUs: a next-gen NVLink switch that lets 576 GPUs talk to each other, with 1.8 terabytes per second of bidirectional bandwidth. That required Nvidia to build an entire new network switch chip, one with 50 billion transistors and some of its own onboard compute: 3.6 teraflops of FP8, says Nvidia. Further reading: Nvidia in Talks To Acquire AI Infrastructure Platform Run:aiRead more of this story at Slashdot.
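For context on the FP4 figure: it is a 4-bit numeric format for weights and activations, and halving the bits per value is what lets the same memory and bandwidth hold roughly twice as many parameters. The toy C++ sketch below is only an illustration of that packing idea using 4-bit integers; Nvidia's actual FP4 is a 4-bit floating-point encoding whose details the article does not describe.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Quantize a float weight to a signed 4-bit integer in [-8, 7] using a shared scale.
    int8_t quantize4(float w, float scale) {
        int q = (int)std::lround(w / scale);
        return (int8_t)std::clamp(q, -8, 7);
    }

    int main() {
        std::vector<float> weights = {0.12f, -0.50f, 0.33f, 0.91f};
        float scale = 0.91f / 7.0f;  // map the largest magnitude roughly to +7

        // Pack two 4-bit values into each byte: half the storage of 8-bit weights.
        std::vector<uint8_t> packed((weights.size() + 1) / 2, 0);
        for (size_t i = 0; i < weights.size(); ++i) {
            uint8_t nibble = (uint8_t)(quantize4(weights[i], scale) & 0x0F);
            packed[i / 2] |= (i % 2 == 0) ? nibble : (uint8_t)(nibble << 4);
        }

        // Unpack, sign-extend, and dequantize to check the round trip.
        for (size_t i = 0; i < weights.size(); ++i) {
            uint8_t nibble = (packed[i / 2] >> ((i % 2) * 4)) & 0x0F;
            int q = (nibble & 0x08) ? (int)nibble - 16 : (int)nibble;  // sign-extend the 4-bit value
            std::printf("%.2f -> %d -> %.2f\n", weights[i], q, q * scale);
        }
        return 0;
    }

The real format also needs scaling metadata and hardware that operates on the packed values directly; this sketch only shows why the memory footprint halves.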
Hertz CEO Resigns After Blowing Big Gamble On EVs
Press2ToContinue quotes a report from the Gateway Pundit: Stephen Scherr, chief executive officer of Hertz Global Holdings Inc. and a member of its board of directors, will step down on March 31, following the car rental company's largest quarterly loss since 2020 after a risky bet on electric vehicles. According to Fox Business, Scherr is working with Gil West, former chief operating officer of Delta Airlines and General Motors' Cruise unit, to ensure a smooth transition. West will officially start his new role at Hertz on April 1. Scherr, 59, joined Hertz two years ago as the company was emerging from bankruptcy and putting a big focus on EVs during that time. Hertz soon discovered that EVs are more expensive to maintain than it had initially thought. Scherr reportedly told investors that Hertz took a $348 million loss, which he blamed on EVs. In January, Hertz announced its plan to offload 20,000 electric vehicles from its U.S. fleet throughout 2024, and switch back to gas cars. In November, the Associated Press reported on a Consumer Reports survey that found EVs from the 2021 to 2023 model years are significantly less reliable than gasoline-powered vehicles -- a whopping eighty percent less reliable, according to the AP -- particularly with battery and charging systems, as well as fit issues with body panels and interiors. Car dealers and manufacturers are reportedly also struggling to sell EVs despite using deep discounts and promotional tactics. In 2021, Hertz announced plans to order 100,000 Tesla vehicles by the end of 2022. It later said it would buy "up to" 65,000 Polestar EVs for its rental fleet over the next five years.Read more of this story at Slashdot.
Indiana Becomes 9th State To Make CS a High School Graduation Requirement
Longtime Slashdot reader theodp writes: Last October, tech-backed nonprofit Code.org publicly called out Indiana in its 2023 State of Computer Science Education report, advising the Hoosier state it needed to heed Code.org's new policy recommendation and "adopt a graduation requirement for all high school students in computer science." Having already joined 49 other Governors who signed a Code.org-organized compact calling for increased K-12 CS education in his state after coming under pressure from hundreds of the nation's tech, business, and nonprofit leaders, Indiana Governor Eric J. Holcomb apparently didn't need much convincing. "We must prepare our students for a digitally driven world by requiring Computer Science to graduate from high school," Holcomb proclaimed in his January State of the State Address. Two months later -- following Microsoft-applauded testimony for legislation to make it so by Code.org partners College Board and Nextech (the Indiana Code.org Regional Partner which is also paid by the Indiana Dept. of Education to prepare educators to teach K-12 CS, including Code.org's curriculum) -- Holcomb on Wednesday signed House Bill 1243 into law, making CS a HS graduation requirement. The IndyStar reports students beginning with the Class of 2029 will be required to take a computer science class that must include instruction in algorithms and programming, computing systems, data and analysis, impacts of computing and networks and the internet. The new law is not Holcomb's first foray into K-12 CS education. Back in 2017, Holcomb and Indiana struck a deal giving Infosys (a big Code.org donor) the largest state incentive package ever -- $31M to bring 2,000 tech employees to Central Indiana - that also promised to make Indiana kids more CS savvy through the Infosys Foundation USA, headed at the time by Vandana Sikka, a Code.org Board member and wife of Infosys CEO Vishal Sikka. Following the announcement of the now-stalled deal, Holcomb led a delegation to Silicon Valley where he and Indiana University (IU) President Michael McRobbie joined Code.org CEO Hadi Partovi and Infosys CEO Vishal Sikka on a Thought Leader panel at the Infosys Confluence 2017 conference to discuss Preparing America for Tomorrow. At the accompanying Infosys Crossroads 2017 CS education conference, speakers included Sikka's wife Vandana, McRobbie's wife Laurie Burns McRobbie, Nextech President and co-CEO Karen Jung, Code.org execs, and additional IU educators. Later that year, IU 'First Lady' Laurie Burns McRobbie announced that Indiana would offer the IU Bloomington campus as a venue for Infosys Foundation USA's inaugural Pathfinders Summer Institute, a national event for K-12 teacher education in CS that offered professional development from Code.org and Nextech, as well as an unusual circumvent-your-school's-approval-and-name-your-own-stipend funding arrangement for teachers via an Infosys partnership with the NSF and DonorsChoose that was unveiled at the White House. And that, Schoolhouse Rock Fans, is one more example of how Microsoft's National Talent Strategy is becoming Code.org-celebrated K-12 CS state laws!Read more of this story at Slashdot.
BitTorrent Is No Longer the 'King' of Upstream Internet Traffic
An anonymous reader quotes a report from TorrentFreak: Back in 2004, in the pre-Web 2.0 era, research indicated that BitTorrent was responsible for an impressive 35% of all Internet traffic. At the time, file-sharing via peer-to-peer networks was the main traffic driver as no other services consumed large amounts of bandwidth. Fast-forward two decades and these statistics are ancient history. With the growth of video streaming, including services such as YouTube, Netflix, and TikTok, file-sharing traffic is nothing more than a drop in today's data pool. [...] This week, Canadian broadband management company Sandvine released its latest Global Internet Phenomena Report which makes it clear that BitTorrent no longer leads any charts. The latest data show that video and social media are the leading drivers of downstream traffic, accounting for more than half of all fixed access and mobile data worldwide. Needless to say, BitTorrent is nowhere to be found in the list of 'top apps'. Looking at upstream traffic, BitTorrent still has some relevance on fixed access networks where it accounts for 4% of the bandwidth. However, it's been surpassed by cloud storage apps, FaceTime, Google, and YouTube. On mobile connections, BitTorrent no longer makes it into the top ten. The average of 46 MB upstream traffic per subscriber shouldn't impress any file-sharer. However, since only a small percentage of all subscribers use BitTorrent, the upstream traffic per user is of course much higher.Read more of this story at Slashdot.
Cisco Completes $28 Billion Acquisition of Splunk
Cisco on Monday completed its $28 billion acquisition of Splunk, a powerhouse in data analysis, security and observability tools. The deal was first announced in September 2023. SecurityWeek reports: Cisco plans to leverage Splunk's AI, security and observability capabilities to complement Cisco's solution portfolio. Cisco says the transaction is expected to be cash flow positive and non-GAAP gross margin accretive in Cisco's fiscal year 2025, and non-GAAP EPS accretive in fiscal year 2026. "We are thrilled to officially welcome Splunk to Cisco," Chuck Robbins, Chair and CEO of Cisco, said in a statement. "As one of the world's largest software companies, we will revolutionize the way our customers leverage data to connect and protect every aspect of their organization as we help power and protect the AI revolution."Read more of this story at Slashdot.
Sony Reportedly Pauses PSVR 2 Production Due To Low Sales
According to Bloomberg, Sony has paused production of its PlayStation VR 2 virtual reality headset, as sales have "slowed progressively" since its February 2023 launch. Road to VR reports: Citing people familiar with the company's plans, Bloomberg says Sony has produced "well over 2 million units" since launch, noting that stocks of the $550 headset are building up. The report alleges the surplus is "throughout Sony's supply chain," indicating the issue isn't confined to a single location, but is spread across different stages of Sony's production and distribution network. This follows news that Sony Interactive Entertainment laid off eight percent of the company, which affected a number of its first-party game studios also involved in VR game production. Sony entirely shuttered its London Studio, which created VR action-adventure game Blood & Truth (2019), and reduced headcount at Firesprite, the studio behind PSVR 2 exclusive Horizon Call of the Mountain. Meanwhile, Sony is making PSVR 2 officially compatible with PC VR games, as the company hopes to release some sort of PC support for the headset later this year. How and when Sony will do that is still unknown, although the move underlines just how little confidence the company has in its future lineup of exclusive content only one year after the launch of PSVR 2.Read more of this story at Slashdot.
5-Year Study Finds No Brain Abnormalities In 'Havana Syndrome' Patients
An anonymous reader quotes a report from CBC News: An array of advanced tests found no brain injuries or degeneration among U.S. diplomats and other government employees who suffer mysterious health problems once dubbed "Havana syndrome," researchers reported Monday. The National Institutes of Health's (NIH) nearly five-year study offers no explanation for symptoms including headaches, balance problems and difficulties with thinking and sleep that were first reported in Cuba in 2016 and later by hundreds of American personnel in multiple countries. But it did contradict some earlier findings that raised the spectre of brain injuries in people experiencing what the State Department now calls "anomalous health incidents." "These individuals have real symptoms and are going through a very tough time," said Dr. Leighton Chan, NIH's chief of rehabilitation medicine, who helped lead the research. "They can be quite profound, disabling and difficult to treat." Yet sophisticated MRI scans detected no significant differences in brain volume, structure or white matter -- signs of injury or degeneration -- when Havana syndrome patients were compared to healthy government workers with similar jobs, including some in the same embassy. Nor were there significant differences in cognitive and other tests, according to findings published in the Journal of the American Medical Association.Read more of this story at Slashdot.
Chinese and Western Scientists Identify 'Red Lines' on AI Risks
Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes." "In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.Read more of this story at Slashdot.
US Supreme Court Seems Wary of Curbing US Government Contacts With Social Media Platforms
U.S. Supreme Court justices on Monday appeared skeptical of a challenge on free speech grounds to how President Joe Biden's administration encouraged social media platforms to remove posts that federal officials deemed misinformation, including about elections and COVID-19. From a report: The justices heard oral arguments in the administration's appeal of a lower court's preliminary injunction constraining how White House and certain other federal officials communicate with social media platforms. The Republican-led states of Missouri and Louisiana, along with five individual social media users, sued the administration. They argued that the government's actions violated the U.S. Constitution's First Amendment free speech rights of users whose posts were removed from platforms such as Facebook, YouTube, and Twitter, now called X. The case tests whether the administration crossed the line from mere communication and persuasion to strong arming or coercing platforms - sometimes called "jawboning" - to unlawfully censor disfavored speech, as lower courts found.Read more of this story at Slashdot.
Games Are Coming To LinkedIn
Soon you might be able to compete in games against friends and colleagues and even the office next door on LinkedIn. From a report: The Microsoft-owned company is reportedly planning to add a new game experience to the platform. According to TechCrunch, the experience is designed to tap into the popularity of games like Wordle. Players' scores will be sorted by their workplace and ranked, allowing you to take on another office or even one across the country. App researcher Nima Owji posted photos of the gaming experience on Twitter/X on Saturday. A representative from LinkedIn confirmed to TechCrunch that the company is working on adding puzzle-based games to the LinkedIn experience as a way to "unlock a bit of fun, deepen relationships, and hopefully spark the opportunity for conversations."Read more of this story at Slashdot.
Investment Advisors Pay the Price For Selling What Looked a Lot Like AI Fairy Tales
Two investment advisors have reached settlements with the US Securities and Exchange Commission for allegedly exaggerating their use of AI, which in both cases was purported to be a cornerstone of their offerings. From a report: Canada-based Delphia and San Francisco-headquartered Global Predictions will cough up $225,000 and $175,000 respectively for telling clients that their products used AI to improve forecasts. The financial watchdog said both were engaging in "AI washing," a term used to describe the embellishment of machine-learning capabilities. "We've seen time and again that when new technologies come along, they can create buzz from investors as well as false claims by those purporting to use those new technologies," said SEC chairman Gary Gensler. "Delphia and Global Predictions marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not." Delphia claimed its system utilized AI and machine learning to incorporate client data, a statement the SEC said it found to be false. "Delphia represented that it used artificial intelligence and machine learning to analyze its retail clients' spending and social media data to inform its investment advice when, in fact, no such data was being used in its investment process," the SEC said in a settlement order. Despite being warned about suspected misleading practices in 2021 and agreeing to amend them, Delphia only partially complied, according to the SEC. The company continued to market itself as using client data as AI inputs but never did anything of the sort, the regulator said.Read more of this story at Slashdot.
Apex Legends Streamers Warned To 'Perform a Clean OS Reinstall as Soon as Possible' After Hacks During NA Finals Match
An anonymous reader shares a report: The Apex Legends Global Series is currently in regional finals mode, but the North America finals have been delayed after two players were hacked mid-match. First, Noyan "Genburten" Ozkose of DarkZero suddenly found himself able to see other players through walls, then Phillip "ImperialHal" Dosen of TSM was given an aimbot. Genburten's hack happened part of the way through the day's third match. A Twitch clip of the moment shows the words "Apex hacking global series by Destroyer2009 & R4ndom" repeating over chat as he realizes he's been given a cheat and takes his hands off the controls. "I can see everyone!" he says, before leaving the match. ImperialHal was hacked in the game immediately after that. "I have aimbot right now!" he shouts in a clip of the moment, before declaring "I can't shoot." Though he continued attempting to play out the round, the match was later abandoned. The volunteers at the Anti-Cheat Police Department have since issued a PSA announcing, "There is currently an RCE exploit being abused in [Apex Legends]" and that it could be delivered via the game itself or its anti-cheat protection. "I would advise against playing any games protected by EAC or any EA titles", they went on to say. As for players of the tournament, they strongly recommended taking protective measures. "It is advisable that you change your Discord passwords and ensure that your emails are secure. also enable MFA for all your accounts if you have not done it yet", they said, "perform a clean OS reinstall as soon as possible. Do not take any chances with your personal information, your PC may have been exposed to a rootkit or other malicious software that could cause further damage." The rest of the series has now been postponed, "Due to the competitive integrity of this series being compromised," as the official Twitter account announced. They finished by saying, "We will share more information soon."Read more of this story at Slashdot.
AI-Generated Science
Published scientific papers include language that appears to have been generated by AI tools like ChatGPT, showing how pervasive the technology has become, and highlighting longstanding issues with some peer-reviewed journals. From a report: Searching for the phrase "As of my last knowledge update" on Google Scholar, a free search tool that indexes articles published in academic journals, returns 115 results. The phrase is often used by OpenAI's ChatGPT to indicate the cutoff date of the data behind the answer it is giving users, and the specific months and years found in these academic papers correspond to previous ChatGPT "knowledge updates." "As of my last knowledge update in September 2021, there is no widely accepted scientific correlation between quantum entanglement and longitudinal scalar waves," reads a paper titled "Quantum Entanglement: Examining its Nature and Implications" published in the "Journal of Material Sciences & Manfacturing [sic] Research," a publication that claims it's peer-reviewed. Over the weekend, a tweet showing the same AI-generated phrase appearing in several scientific papers went viral. Most of the publications where I found this phrase are small, not well known, and appear to be "paper mills," journals with low editorial standards that will publish almost anything quickly. One publication where I found the AI-generated phrase, the Open Access Research Journal of Engineering and Technology, advertises "low publication charges," an "e-certificate" of publication, and is currently advertising a call for papers, promising acceptance within 48 hours and publication within four days.Read more of this story at Slashdot.
Fujitsu Says It Was Hacked, Warns of Data Breach
Multinational technology giant Fujitsu confirmed a cyberattack in a statement Friday, and warned that hackers may have stolen personal data and customer information. From a report: "We confirmed the presence of malware on multiple work computers at our company, and as a result of an internal investigation, we discovered that files containing personal information and customer information could be illegally taken out," said Fujitsu in its statement on its website, translated from Japanese. Fujitsu said it disconnected the affected systems from its network, and is investigating how its network was compromised by malware and "whether information has been leaked." The tech conglomerate did not specify what kind of malware was used, or the nature of the cyberattack. Fujitsu also did not say what kind of personal information may have been stolen, or who the personal information pertains to -- such as its employees, corporate customers, or citizens whose governments use the company's technologies.Read more of this story at Slashdot.
Google Researchers Unveil 'VLOGGER', an AI That Can Bring Still Photos To Life
Google researchers have developed a new AI system that can generate lifelike videos of people speaking, gesturing and moving -- from just a single still photo. From a report: The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation. Described in a research paper titled "VLOGGER: Multimodal Diffusion for Embodied Avatar Synthesis," (PDF) the AI model can take a photo of a person and an audio clip as input, and then output a video that matches the audio, showing the person speaking the words and making corresponding facial expressions, head movements and hand gestures. The videos are not perfect, with some artifacts, but represent a significant leap in the ability to animate still images. The researchers, led by Enric Corona at Google Research, leveraged a type of machine learning model called diffusion models to achieve the novel result. Diffusion models have recently shown remarkable performance at generating highly realistic images from text descriptions. By extending them into the video domain and training on a vast new dataset, the team was able to create an AI system that can bring photos to life in a highly convincing way. "In contrast to previous work, our method does not require training for each person, does not rely on face detection and cropping, generates the complete image (not just the face or the lips), and considers a broad spectrum of scenarios (e.g. visible torso or diverse subject identities) that are critical to correctly synthesize humans who communicate," the authors wrote.Read more of this story at Slashdot.
Grok AI Goes Open Source
xAI has open sourced its large language model Grok. From a report: The move, which Musk had previously proclaimed would happen this week, now enables any other entrepreneur, programmer, company, or individual to take Grok's weights -- the strength of connections between the model's artificial "neurons," or software modules that allow the model to make decisions and accept inputs and provide outputs in the form of text -- and other associated documentation and use a copy of the model for whatever they'd like, including for commercial applications. "We are releasing the base model weights and network architecture of Grok-1, our large language model," the company announced in a blog post. "Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI." Those interested can download the code for Grok on its Github page or via a torrent link. Parameters refer to the weights and biases that govern the model -- the more parameters, generally the more advanced, complex and performant the model is. At 314 billion parameters, Grok is well ahead of open source competitors such as Meta's Llama 2 (70 billion parameters) and Mistral's Mixtral 8x7B (roughly 47 billion parameters). Grok was open sourced under the Apache License 2.0, which enables commercial use, modifications, and distribution, though it does not grant trademark rights, and users receive no warranty or liability protection with it. In addition, they must reproduce the original license and copyright notice, and state the changes they've made.Read more of this story at Slashdot.
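For readers unfamiliar with the term, a Mixture-of-Experts model splits parts of the network into several expert sub-networks and uses a small router to activate only a few of them per token, so the parameters active for any one token are a fraction of the 314 billion total. Below is a minimal C++ sketch of that routing idea (an illustration only, not xAI's implementation):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        // Router scores for one token across 8 hypothetical experts.
        std::vector<double> router_scores = {0.1, 2.3, -0.7, 1.8, 0.0, -1.2, 0.4, 0.9};
        const int k = 2;  // only the top-k experts run for this token

        // Rank expert indices by score and keep the k best.
        std::vector<int> order(router_scores.size());
        for (size_t i = 0; i < order.size(); ++i) order[i] = (int)i;
        std::partial_sort(order.begin(), order.begin() + k, order.end(),
                          [&](int a, int b) { return router_scores[a] > router_scores[b]; });

        std::printf("token routed to experts:");
        for (int i = 0; i < k; ++i) std::printf(" %d", order[i]);
        std::printf("\n");  // prints: token routed to experts: 1 3
        return 0;
    }

Each selected expert then processes the token and the outputs are combined, which is how a model with a very large total parameter count can keep per-token compute closer to that of a much smaller dense model.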
Apple Is in Talks To Let Google's Gemini Power iPhone Generative AI Features
Apple is in talks to build Google's Gemini AI engine into the iPhone, Bloomberg News reported Monday, citing people familiar with the situation, setting the stage for a blockbuster agreement that would shake up the AI industry. From the report: The two companies are in active negotiations to let Apple license Gemini, Google's set of generative AI models, to power some new features coming to the iPhone software this year, said the people, who asked not to be identified because the deliberations are private. Apple also recently held discussions with OpenAI and has considered using its model, according to the people.Read more of this story at Slashdot.