An anonymous reader shares a report: When Mohamed Maslouh, a London-based contractor, was assigned to enter data into Google's internal gHire recruitment system last September, he noticed something surprising. The database contained the profiles of thousands of people in the EU and U.K. whose names, phone numbers, personal email addresses and resumes dated back as far as 2011. Maslouh knew something was amiss, as he had received data-protection training from Randstad, the European human-resources giant that employed him, and was aware of the EU's five-year-old General Data Protection Regulation (GDPR), which remained part of British law after Brexit. Under the law, companies in the European Union and U.K. may not hang onto anyone's personal data -- that is, information relating to any identifiable living person -- for longer than is strictly necessary, which generally means a maximum retention time measured in weeks or months. Google may now face investigations over potential violations of the GDPR, after Maslouh filed protected whistleblower complaints with the U.K. Information Commissioner's Office in November and with the Irish Data Protection Commission (DPC) -- which has jurisdiction over Google's activities in the EU -- in February. Read more of this story at Slashdot.
An anonymous reader quotes a report from The Drive: While General Motors has announced that it plans to phase out CarPlay in its EVs starting in 2024, Ford has just doubled down on long-term CarPlay compatibility. In an interview with The Wall Street Journal, Ford CEO Jim Farley laid it bare: "In terms of content, we kind of lost that battle 10 years ago," Farley said. "So like get real with it, because you're not going to make a ton of money on content inside the vehicle." Farley's argument is extremely sound: since most people bring their smartphones into their cars, they want the infotainment system to be an extension of their phones, not another thing to deal with. On another level, embracing CarPlay and Android Auto costs automakers licensing money, but that cost is amortized over a large production run. A CarPlay-only infotainment system remains distant and highly unlikely, as automakers do need their own interface for the high-tech gadgets of today's cars. And let's be real: CarPlay is one of the best things to happen to modern cars. It simplifies driving, keeps people less distracted by vastly reducing the learning curve, and is just more convenient. Ford is embracing it.
Paramount Global, the parent company of CBS, Nickelodeon, Comedy Central and Showtime, announced today that it is laying off some 25% of its staff and shutting down MTV News. NPR reports: In addition to reports of a soft ad market, Paramount Global is doing considerable restructuring. Earlier this year, Showtime merged with MTV Entertainment Studios. In an email to staff obtained by NPR, Chris McCarthy, president and CEO of Showtime/MTV Entertainment Studios and Paramount Media Networks, explained the decision-making behind the cuts. While touting the "incredible track record of hits" such as Yellowstone, South Park, and Yellowjackets, McCarthy wrote, "despite this success in streaming, we continue to feel pressure from broader economic headwinds like many of our peers. To address this, our senior leaders in coordination with HR have been working together over the past few months to determine the optimal organization for the current and future needs of our business." "This is a very sad day for a lot of friends and colleagues," wrote MTV News' Josh Horowitz on Instagram, "Many great people lost their jobs. I was hired by MTV News 17 years ago. I'm so honored to have been a small part of its history. Wishing the best for the best in the business." The news comes on the heels of a disappointing first quarter earnings report for the corporation.
Amazon has launched a new unit, Amazon MGM Studios Distribution, that will allow the company to license Amazon Originals and other titles to third-party media companies, including streaming services and cable TV. TechCrunch reports: For the first time, titles such as "The Marvelous Mrs. Maisel," "Borat Subsequent Moviefilm," "Coming 2 America," "Goliath," "Hunters," "The Tender Bar," "The Tomorrow War," "The Voyeurs" and "Without Remorse," among others, will be sold to other media outlets following their initial run on Prime Video. While the company has distributed shows before, this new venture will be on a much larger scale. Plus, Amazon Originals are mainly exclusive to Prime Video, making it an enticing sale for companies looking to have popular titles on their platforms. The launch of Amazon MGM Studios Distribution will also allow the company to handle sales of MGM-owned franchises James Bond, Rocky and Creed, as well as "The Handmaid's Tale," "Fargo" and "Vikings." Last year, Amazon acquired MGM for $8.5 billion, giving the company access to more than 4,000 films and 17,000 TV series. [...] According to Chris Ottinger, who will lead Amazon MGM Studios Distribution, the unit will offer flexible bundles, reported Deadline, so sellers can create bundled content packages that work for them. This strategy will likely allow the company to stand out from competitors.
Researchers have developed a "language" called Remmyo, which relies on specific facial muscle movements that can occur during rapid eye movement (REM) sleep. People who are capable of lucid dreaming can learn this language during their waking hours and potentially communicate while they are asleep. Ars Technica reports: "You can transfer all important information from lucid dreams using no more than three letters in a word," [sleep expert Michael Raduga], who founded Phase Research Center in 2007 to study sleep, told Ars. "This level of optimization took a lot of time and intellectual resources." Remmyo consists of six sets of facial movements that can be detected by electromyography (EMG) sensors on the face. Slight electrical impulses that reach facial muscles make them capable of movement during sleep paralysis, and these are picked up by sensors and transferred to software that can type, vocalize, and translate Remmyo. Translation depends on which Remmyo letters are used by the sleeper and picked up by the software, which already has information from multiple dictionaries stored in its virtual brain. It can translate Remmyo into another language as it is being "spoken" by the sleeper. "We can digitally vocalize Remmyo or its translation in real time, which helps us to hear speech from lucid dreams," Raduga said. For his initial experiment, Raduga used the sleep laboratory of the Neurological Clinic of Frankfurt University in Germany. His subjects had already learned Remmyo and were also trained to enter a state of lucid dreaming and signal that they were in that lucid state during REM sleep. While they were immersed in lucid dreams, EMG sensors on their faces sent information from electrical impulses to the translation software. The results were uncertain. 
Based on attempts to translate planned phrases, Remmyo turned out to be anywhere from 13 to 81 percent effective, and in the interview, Raduga said he faced skepticism about the effectiveness of the translation software during the peer review process of his study, which is now published in the journal Psychology of Consciousness: Theory, Research and Practice. He still hopes to make the results more consistent by improving the translation methods in the future. "The main problem is that it is hard to use only one muscle on your face to say something in Remmyo," said Raduga. "Unintentionally, people strain more than one muscle, and EMG sensors detect it all. Now we use only handwritten algorithms to overcome the problem, but we're going to use machine learning and AI to improve Remmyo decoding."
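Raduga's "handwritten algorithms" are not published. As a rough illustration of the decoding problem he describes — six muscle groups, one letter per group, with cross-talk from unintended muscle strain — a toy decoder might simply map the dominant EMG channel in each time window to a letter. The channel-to-letter mapping, the letter alphabet, and the noise threshold below are all invented for this sketch:

```python
# Hypothetical Remmyo decoder: map the most strongly activated
# facial-muscle EMG channel in each time window to one of six letters.
# The alphabet and channel assignments are invented for illustration;
# the study does not publish the real mapping.
LETTERS = ["a", "e", "i", "o", "u", "m"]   # six letters, six muscle groups

def decode_window(emg_levels, noise_floor=0.2):
    """Return the letter for the dominant EMG channel, or None if all
    channels are below the noise floor (no intentional movement)."""
    peak = max(range(len(emg_levels)), key=lambda i: emg_levels[i])
    if emg_levels[peak] < noise_floor:
        return None
    return LETTERS[peak]

# Three time windows from a sleeping subject: "o", silence, "m".
windows = [
    [0.1, 0.0, 0.1, 0.9, 0.2, 0.1],
    [0.1, 0.1, 0.0, 0.1, 0.1, 0.1],
    [0.2, 0.1, 0.1, 0.0, 0.1, 0.8],
]
decoded = [decode_window(w) for w in windows]
print(decoded)   # ['o', None, 'm']
```

The cross-talk problem Raduga mentions shows up immediately in a scheme like this: if two channels read high in the same window, a hard argmax picks one arbitrarily, which is why the team is moving toward learned classifiers.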
An anonymous reader shares an excerpt from a Motherboard article: Now, frustrated with a lack of transparency and trust around official accounts of UFO phenomena, a team of developers has decided to take matters into their own hands with an open source citizen science project called Sky360, which aims to blanket the earth in affordable monitoring stations to watch the skies 24/7, and even plans to use AI and machine learning to spot anomalous behavior. Unlike earlier 20th century efforts such as inventors proposing "geomagnetic detectors" to discover nearby UFOs, or more recent software like the short-lived UFO ID project, Sky360 hopes that it can establish a network of autonomously operating surveillance units to gather real-time data of our skies. Citizen-led UFO research is not new. Organizations like MUFON, founded in 1969, have long investigated sightings, while amateur groups like the American Flying Saucer Investigating Committee of Columbus even ran statistical analysis on sightings in the 1960s (finding that most of them happened on Wednesdays). However, Sky360 believes that the level of interest and the technology have now both reached an inflection point, where citizen researchers can actually generate large-scale actionable data for analysis all on their own. The Sky360 stations consist of an AllSkyCam with a wide angle fish-eye lens and a pan-tilt-focus camera, with the fish-eye camera registering all movement. Underlying software performs an initial rough analysis of these events, and decides whether to activate other sensors -- and if so, the pan-tilt-focus camera zooms in on the object, tracks it, and further analyzes it. According to developer Nikola Galiot, the software is currently based on a computer vision "background subtraction" algorithm that detects any motion in the frame compared to previous frames captured; anything that moves is then tracked as long as possible and then automatically classified. 
The idea is that the more data these monitoring stations acquire, the better the classification will be. There are a combination of AI models under the hood, and the system is built using the open-source TensorFlow machine learning platform so it can be deployed on almost any computer. Next, the all-volunteer team wants to create a single algorithm capable of detection, tracking and classification all in one. All the hardware components, from the cameras to passive radar and temperature gauges, can be bought cheaply and off-the-shelf worldwide -- with the ultimate goal of finding the most effective combinations for the lowest price. Schematics, blueprints, and suggested equipment are all available on the Sky360 site and interested parties are encouraged to join the project's Discord server. There are currently 20 stations set up across the world, from the USA to Canada to more remote regions like the Azores in the middle of the Atlantic. [...] Once enough of the Sky360 stations have been deployed, the next step is to work towards real-time monitoring, drawing all the data together, and analyzing it. By striving to create a huge, open, transparent network, anyone would be free to examine the data themselves. In June of this year, Sky360, which has a team of 30 volunteer developers working on the software, hopes to release its first developer-oriented open source build. At its heart is a component called 'SimpleTracker', which receives images frame by frame from the cameras, auto-adjusting parameters to get the best picture possible. The component determines whether something in the frame is moving, and if so, another analysis is performed, where a machine learning algorithm trained on the trajectories of normal flying objects like planes, birds, or insects, attempts to classify the object based on its movement. If it seems anomalous, it's flagged for further investigation.
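Sky360's actual pipeline isn't reproduced here, but the "background subtraction" step Galiot describes can be sketched in a few lines: compare each incoming frame against the previous one and flag pixels whose brightness changed beyond a threshold, then take the centroid of the changed region as the object position to hand off to the tracker. The frame data and threshold below are illustrative, not taken from the project:

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25):
    """Flag pixels whose brightness changed by more than `threshold`
    between consecutive frames (simple background subtraction)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy 8x8 grayscale frames: a bright "object" appears in the second frame.
prev_frame = np.zeros((8, 8), dtype=np.uint8)
frame = prev_frame.copy()
frame[3:5, 3:5] = 200          # the moving object

mask = detect_motion(prev_frame, frame)
ys, xs = np.nonzero(mask)
print(mask.sum())              # 4 changed pixels
print(ys.mean(), xs.mean())    # centroid at (3.5, 3.5) -> feed the tracker
```

Production systems typically use a running background model rather than a single previous frame (so slow lighting changes don't trigger detections), which is the role of the adaptive components SimpleTracker wraps around this basic idea.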
For the first time, the American Psychological Association (APA) has issued guidelines for teenagers, parents, teachers and policymakers on how to use social media, with the aim of reducing the rate of depression, anxiety and loneliness in adolescents. NPR reports: The 10 recommendations in the report summarize recent scientific findings and advise actions, primarily by parents, such as monitoring teens' feeds and training them in social media literacy, even before they begin using these platforms. But some therapists and clinicians say the recommendations place too much of the burden on parents. To implement this guidance requires cooperation from the tech companies and possibly regulators. While social media can provide opportunities for staying connected, especially during periods of social isolation, like the pandemic, the APA says adolescents should be routinely screened for signs of "problematic social media use." The APA recommends that parents should also closely monitor their children's social media feed during early adolescence, roughly ages 10-14. Parents should try to minimize or stop the dangerous content their child is exposed to, including posts related to suicide, self-harm, disordered eating, racism and bullying. Studies suggest that exposure to this type of content may promote similar behavior in some youth, the APA notes. Another key recommendation is to limit the use of social media for comparison, particularly around beauty- or appearance-related content. Research suggests that when kids use social media to pore over their own and others' appearance online, this is linked with poor body image and depressive symptoms, particularly among girls. As kids age and gain digital literacy skills, they should have more privacy and autonomy in their social media use, but parents should always keep an open dialogue about what they are doing online.
The report also cautions parents to monitor their own social media use, citing research that shows that adults' attitudes toward social media and how they use it in front of kids may affect young people. The APA's report does contain recommendations that could be picked up by policymakers seeking to regulate the industry. For instance, it recommends the creation of "reporting structures" to identify and remove or deprioritize social media content depicting "illegal or psychologically maladaptive behavior," such as self-harm, harming others, and disordered eating. It also notes that the design of social media platforms may need to be changed to take into account "youths' development capabilities," including features like endless scrolling and recommended content. It suggests that teens should be warned "explicitly and repeatedly" about how their personal data could be stored, shared and used.
A study commissioned by Meta has found that the metaverse could contribute around 2.4% to U.S. annual GDP by 2035, equating to as much as $760 billion. Reuters reports: The concept of the metaverse includes augmented and virtual reality technologies that allow users to immerse themselves in a virtual world or overlay information digitally on images of the real world, according to the report by consulting firm Deloitte. Economic gains may come from the use of the technologies in the defense, medical and manufacturing sectors, plus entertainment use cases such as video games and communication, the report said. Social media giant Meta, which pivoted its focus to building metaverse technologies in 2021, has forecast the tech would eventually replace mobile as the main computing platform. In a separate report, Meta said the European Union may see an increased economic opportunity of up to 489 billion euros ($538.29 billion) in annual GDP by 2035, or about 1.3%-2.4% of its total GDP. The metaverse could contribute between C$45.3 billion ($33.88 billion) and C$85.5 billion to Canada's annual GDP by 2035, Deloitte said. Last year, a Meta-funded report estimated that metaverse adoption would contribute $3.01 trillion by 2031.
Roblox's new anti-cheat software puts a stop to in-game exploits, but at what cost? According to Liam Dawe from Gaming On Linux, it's blocking the Wine application, meaning "you won't be able to play it on Linux any more, at all, unless you find some sort of special workaround." He adds: "Previously the roll-out of this update was being tested only with some users. Now though it's here for everyone giving a 64 bit client and introducing their Hyperion anti-cheat software which they are intentionally blocking Wine with." Here's what one of their staff had to say about this: Hi - thanks for the question. I definitely get where you're coming from, and as you point out, you deserve a clear, good-faith answer. Unfortunately that answer is essentially "no." From a personal perspective, a lot of people at Roblox would love to support Linux (including me). Practically speaking, there's just no way for us to justify it. If we release a client, we have to support it, which means QA, CS, documentation, etc., all of which is much more difficult on a fragmented platform. We release weekly on a half-dozen platforms. Adding in the time to test, debug, and release a Linux client would be expensive, which means time taken away from improving Roblox on our current platforms. Even Wine support is difficult because of anti-cheat. As wonderful as it would be to allow Roblox under Wine, the number of users who would take advantage of that is minuscule compared with our other platforms, and it's not worthwhile if it makes it easy for exploiters to cheat. I'm sorry to be such a downer about this, but it's the reality. We have to spend our time porting to and supporting the platforms that will grow our community. Again, I'm personally sorry to have to say this. Way back in 2000 I had a few patches accepted into the kernel, and I led the port of Roblox game servers from Windows to Linux several years ago. From a technical and philosophical perspective, it would be a wonderful thing to do. 
But our first responsibility is to our overall community, and the opportunity cost of supporting a Linux client is far, far too high to justify.
An anonymous reader quotes a report from TechCrunch: In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it's in the early stages, but the code to run it is available in open source on GitHub as of this morning. "We're trying to [develop ways to] anticipate what the problems with an AI system will be," William Saunders, the interpretability team manager at OpenAI, told TechCrunch in a phone interview. "We want to really be able to know that we can trust what the model is doing and the answer that it produces." To that end, OpenAI's tool uses a language model (ironically) to figure out the functions of the components of other, architecturally simpler LLMs -- specifically OpenAI's own GPT-2. How? First, a quick explainer on LLMs for background. Like the brain, they're made up of "neurons," which observe some specific pattern in text to influence what the overall model "says" next. For example, given a prompt about superheroes (e.g. "Which superheroes have the most useful superpowers?"), a "Marvel superhero neuron" might boost the probability the model names specific superheroes from Marvel movies. OpenAI's tool exploits this setup to break models down into their individual pieces. First, the tool runs text sequences through the model being evaluated and waits for cases where a particular neuron "activates" frequently. Next, it "shows" GPT-4, OpenAI's latest text-generating AI model, these highly active neurons and has GPT-4 generate an explanation. To determine how accurate the explanation is, the tool provides GPT-4 with text sequences and has it predict, or simulate, how the neuron would behave. It then compares the behavior of the simulated neuron with the behavior of the actual neuron.
"Using this methodology, we can basically, for every single neuron, come up with some kind of preliminary natural language explanation for what it's doing and also have a score for how well that explanation matches the actual behavior," Jeff Wu, who leads the scalable alignment team at OpenAI, said. "We're using GPT-4 as part of the process to produce explanations of what a neuron is looking for and then score how well those explanations match the reality of what it's doing." The researchers were able to generate explanations for all 307,200 neurons in GPT-2, which they compiled in a dataset that's been released alongside the tool code. "Most of the explanations score quite poorly or don't explain that much of the behavior of the actual neuron," Wu said. "A lot of the neurons, for example, are active in a way where it's very hard to tell what's going on -- like they activate on five or six different things, but there's no discernible pattern. Sometimes there is a discernible pattern, but GPT-4 is unable to find it." "We hope that this will open up a promising avenue to address interpretability in an automated way that others can build on and contribute to," Wu said. "The hope is that we really actually have good explanations of not just what neurons are responding to but overall, the behavior of these models -- what kinds of circuits they're computing and how certain neurons affect other neurons."
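OpenAI's actual scoring code lives in the GitHub release; as a simplified illustration of the simulate-and-compare step Wu describes, one natural way to score an explanation is to correlate the activations GPT-4 predicts from its explanation against the neuron's real activations over the same text sequences. The activation values below are made up for the example:

```python
import numpy as np

def explanation_score(real_activations, simulated_activations):
    """Score an explanation by how well the simulated neuron tracks
    the real one, using Pearson correlation (1.0 = perfect match)."""
    r = np.corrcoef(real_activations, simulated_activations)[0, 1]
    return float(r)

# Hypothetical activations of one neuron over five text sequences,
# plus two candidate "simulations" derived from explanations.
real     = np.array([0.9, 0.1, 0.8, 0.0, 0.7])
good_sim = np.array([0.8, 0.2, 0.9, 0.1, 0.6])   # tracks the neuron well
bad_sim  = np.array([0.1, 0.9, 0.2, 0.8, 0.3])   # explanation misses badly

print(round(explanation_score(real, good_sim), 2))   # 0.97
print(round(explanation_score(real, bad_sim), 2))    # -0.98
```

This also shows why, as Wu notes, polysemantic neurons score poorly: a neuron that fires on five unrelated things produces activations no single-concept explanation can simulate, so the correlation stays low no matter how the explanation is phrased.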
Chinese police have arrested a man for using ChatGPT to create a fake news article about a train crash, under a new law governing "deep synthesis technologies" introduced by China this year. CNBC reports: Police in Gansu province in northwest China detained a man, surnamed Hong, who they said allegedly fabricated a news story regarding a train crash that caused nine deaths. The authorities found that more than 20 accounts had posted this article on a blogging platform owned by Chinese search giant Baidu and they'd garnered more than 15,000 views. Hong allegedly used ChatGPT to create slightly different versions of the fake news article to pass duplication checks on the Baidu-owned platform. The Gansu police authorities arrested Hong under the first-of-its-kind law governing "deep synthesis technologies" which China introduced this year. Deep synthesis technologies refer to AI being used to generate text, images, video or other media. The law states that deep synthesis services cannot be used to disseminate fake news. China drafted the law as ChatGPT was taking off and going viral, as authorities looked to get ahead of the technology. China's internet is heavily censored and controlled. Beijing has sought to introduce laws governing new technologies which could present concerns to the central government. ChatGPT is blocked in China but can be accessed with the use of a virtual private network -- software that can help bypass the country's internet restrictions.
FTX founder Sam Bankman-Fried is seeking the dismissal of 10 of the 13 charges against him over the collapse of the cryptocurrency exchange. Axios reports: Lawyers for Bankman-Fried, who's pleaded not guilty to fraud, conspiracy, campaign finance law violations and money laundering, in a filing argued that several of the charges failed to properly state an offense. The motion that was filed to the U.S. District Court for the Southern District of New York is seeking the dismissal of 10 of the 13 charges against him. "Simply making a false statement, by itself, does not constitute wire fraud unless it is made for the purpose of obtaining money or property from the victim of the fraud," Bankman-Fried's lawyers wrote. According to Ars Technica, SBF's lawyers are essentially arguing that there's no evidence of harm caused because fraud requires a "scheme to cause economic loss to the victim," which prosecutors allegedly haven't proved. Instead, SBF alleges that federal prosecutors have concocted "a hodgepodge of different intangible losses" suffered by banks and lenders -- including "the right to honest services," "the loss of control of assets," and "the deprivation of valuable information." [...] "In the end, the Government is trying to transform allegations of dishonesty and unfair dealing into violations of the federal fraud statutes," SBF's lawyers wrote. "While such conduct may well be improper, it is not wire fraud." The 31-year-old Bankman-Fried, who is currently under house arrest on a $250 million bond at his parents' home in Palo Alto, California, faces more than 155 years in prison if convicted on all counts. A trial has been scheduled for October.
An anonymous reader quotes a report from Ars Technica: A team of researchers at the Italian Institute of Technology (IIT) in Milan recently created a fully rechargeable battery using nontoxic edible components. This is probably the world's first battery that is safe to ingest and entirely made of food-grade materials. "Given the level of safety of these batteries, they could be used in children's toys, where there is a high risk of ingestion," said Mario Caironi, a senior researcher at IIT. However, this isn't the only solution the edible battery could provide. Apart from serving as an alternative to conventional toxic toy batteries, the edible battery from IIT could also play a key role in making health care applications safer than ever. For instance, doctors have to be cautious regarding the use of miniature electronic devices (such as drug-delivery robots, biosensors, etc.) inside the human body, as they come equipped with batteries made of toxic substances. An edible battery could solve this problem. There are also more mundane applications, like replacing batteries in pet toys. Ivan K. Ilic, first author of the study and a postdoctoral researcher at IIT, told Ars Technica, "Two main ways a battery damages human tissue when it's inside the body is by doing water electrolysis and by the toxicity of its materials. Water electrolysis is a phenomenon where electricity with a voltage higher than 1.2 V (virtually all commercial batteries) breaks water into oxygen and hydrogen (an explosive gas), and it is very dangerous if it occurs in the stomach. Our battery is way below this voltage, around 0.65 V, so water electrolysis cannot occur. On the other hand, we used only food materials, so nothing is toxic!" Before the battery is useful, however, the researchers will need to first enhance the battery's power capacity. Currently, the edible battery can supply 48 microamperes of current for a bit over 10 minutes. 
So it can easily meet the power demand of a miniature medical device or a small LED. "These batteries are no competition to ordinary batteries -- they will not power electric cars -- but they are meant to power edible electronics and maybe some other niche applications, so their main advantage is non-toxicity," said Ilic. Here's a list of what makes these edible batteries work, as mentioned by Ars:

- Quercetin, a pigment found in almonds and capers, serves as the battery cathode, whereas riboflavin (vitamin B2) makes up the battery anode.
- The researchers used nori (edible seaweed that is used in the wrapping of sushi rolls) as the separator and a water-based solution (aqueous NaHSO4) as the electrolyte.
- Activated charcoal is employed to achieve high electrical conductivity in the battery.
- The battery electrodes come covered in beeswax and connect to a gold foil (used to cover pastries) that laminates a supporting structure made of ethyl cellulose.

The research has been published in the journal Advanced Materials.
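The electrolysis-threshold argument and the power figures quoted above are easy to sanity-check with a little arithmetic (voltage, current, and the 1.2 V threshold are taken from the article; the runtime is rounded to exactly 10 minutes):

```python
# Back-of-the-envelope check of the edible battery's figures.
cell_voltage = 0.65              # V, reported cell voltage
electrolysis_threshold = 1.2     # V, voltage at which water splits in the stomach
current = 48e-6                  # A (48 microamperes, reported)
runtime = 10 * 60                # s ("a bit over 10 minutes", rounded to 10)

power = cell_voltage * current   # watts delivered
energy = power * runtime         # joules over one discharge

print(cell_voltage < electrolysis_threshold)   # True: no electrolysis risk
print(f"{power * 1e6:.1f} microwatts")         # ~31.2 microwatts
print(f"{energy * 1e3:.1f} millijoules")       # ~18.7 millijoules per discharge
```

Tens of microwatts is indeed enough for a low-power biosensor or a briefly blinking LED, but orders of magnitude short of anything larger, which matches Ilic's framing of edible electronics as a niche application.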
The FBI has sabotaged a suite of malicious software used by elite Russian spies, U.S. authorities said on Tuesday, providing a glimpse of the digital tug-of-war between two cyber superpowers. From a report: Senior law enforcement officials said FBI technical experts had identified and disabled malware wielded by Russia's FSB security service against an undisclosed number of American computers, a move they hoped would deal a death blow to one of Russia's leading cyber spying programs. "We assess this as being their premier espionage tool," one of the U.S. officials told journalists ahead of the release. He said Washington hoped the operation would "eradicate it from the virtual battlefield." The official said the FSB spies behind the malware, known as Snake, are part of a notorious hacking group tracked by the private sector and known as "Turla." The group has been active for two decades against a variety of NATO-aligned targets, U.S. government agencies and technology companies, a senior FBI official said.
Spotify has removed tens of thousands of songs from artificial intelligence music start-up Boomy, ramping up policing of its platform amid complaints of fraud and clutter across streaming services. From a report: In recent months the music industry has been confronting the rise of AI-generated songs and, more broadly, the growing number of tracks inundating streaming platforms daily. Spotify, the largest audio streaming business, recently took down about 7 percent of the tracks that had been uploaded by Boomy, the equivalent of "tens of thousands" of songs, according to a person familiar with the matter. Recording giant Universal Music had flagged to all the main streaming platforms that it saw suspicious streaming activity on Boomy tracks, according to another person close to the situation. The Boomy songs were removed because of suspected "artificial streaming" -- online bots posing as human listeners to inflate the audience numbers for certain songs. AI has made this type of activity easier because it allows someone to instantly generate many music tracks, which can then be uploaded online and streamed. Boomy, which was launched two years ago, allows users to choose various styles or descriptors, such as "rap beats" or "rainy nights," to create a machine-generated track. Users can then release the music to streaming services, where it will generate royalty payments. California-based Boomy says its users have created more than 14 million songs.
Amazon CTO Werner Vogels writes in a blog post: Software architectures are not like the architectures of bridges and houses. After a bridge is constructed, it is hard, if not impossible, to change the way it was built. Software is quite different: once we are running our software, we may get insights about our workloads that we did not have when it was designed. And if we had realized this at the start and chosen an evolvable architecture, we could change components without impacting the customer experience. My rule of thumb has been that with every order of magnitude of growth you should revisit your architecture, and determine whether it can still support the next order of magnitude of growth. A great example can be found in two insightful blog posts written by Prime Video's engineering teams. The first describes how Thursday Night Football live streaming is built around a distributed workflow architecture. The second is a recent post that dives into the architecture of their stream monitoring tool, and how their experience and analysis drove them to implement it as a monolithic architecture. There is no one-size-fits-all. We always urge our engineers to find the best solution, and no particular architectural style is mandated. If you hire the best engineers, you should trust them to make the best decisions. I always urge builders to consider the evolution of their systems over time and make sure the foundation is such that you can change and expand them with the minimum number of dependencies. Event-driven architectures (EDA) and microservices are a good match for that. However, if there is a set of services that always contribute to the response, have the exact same scaling and performance requirements, same security vectors, and most importantly, are managed by a single team, it is a worthwhile effort to see if combining them simplifies your architecture. Evolvable architectures are something that we've taken to heart at Amazon from the very start.
We have continually re-evaluated and re-architected our systems to meet the ever-increasing demands of our customers. You can go all the way back to 1998, when a group of senior engineers penned the Distributed Computing Manifesto, which put the wheels in motion to move Amazon from a monolith to a service-oriented architecture. In the decades since, things have continued to evolve, as we moved to microservices, then microservices on shared infrastructure, and, as I spoke about at re:Invent, EDA.Read more of this story at Slashdot.
International online sports broadcasting company DAZN has joined a global task force that aims to shut down pirated and unauthorized sports streaming operations worldwide. The new group is operated by the Alliance for Creativity and Entertainment (ACE), which counts giants like Amazon, Apple, NBC Universal, Netflix, Disney, Sony, and Warner Bros. among its members. From a report: Unauthorized streaming sources can often be the only available option for people to watch certain teams and matches subject to complicated broadcasting deals, locked into high-priced bundles, and blackouts. With more tech and entertainment companies using sports as a sweetener for their services (NFL Sunday Ticket on YouTube, MLS / MLB for Apple TV Plus, and Thursday Night Football on Amazon Prime are a few examples), they have more reasons to collectively take issue with anyone popping up a free stream. ACE as a whole had previously taken down IPTV-based service NitroTV, which allegedly charged users $20 per month in the US for a collection of unlicensed streaming content. ACE was first formed in 2017 as the anti-piracy arm of the Motion Picture Association (formerly known as the MPAA until it dropped the second A in 2019). Now with DAZN, it consists of 53 big media companies.Read more of this story at Slashdot.
Textbooks giant Pearson is currently taking legal action over the use of its intellectual property to train AI models, chief executive Andy Bird revealed today as the firm laid out its plans for its own artificial intelligence-powered products. From a report: The announcement came a week after Pearson's share price tumbled by 15% when American rival Chegg said its own business had been hurt by the rise of ChatGPT. Pearson's plans include AI-powered summaries of its educational videos, to be rolled out this month for Pearson+ members, as well as AI-generated multiple choice questions for areas where a student might need more help. Bird said Pearson had an advantage as its AI products would use Pearson content for training, which he said would make them more reliable. However, he added that the business was also monitoring other businesses using Pearson content to train their AI models. He said Pearson had already sent out a cease-and-desist letter, though he did not say to whom it was addressed.Read more of this story at Slashdot.
AleRunner shares a report: Methane leaks alone from Turkmenistan's two main fossil fuel fields caused more global heating in 2022 than the entire carbon emissions of the UK, satellite data has revealed. Emissions of the potent greenhouse gas from the oil- and gas-rich country are "mind-boggling," and an "infuriating" problem that should be easy to fix, experts have told the Guardian. The data produced by Kayrros for the Guardian found that the western fossil fuel field in Turkmenistan, on the Caspian coast, leaked 2.6m tonnes of methane in 2022. The eastern field emitted 1.8m tonnes. Together, the two fields released emissions equivalent to 366m tonnes of CO2, more than the UK's annual emissions, which are the 17th-biggest in the world. Methane emissions have surged alarmingly since 2007 and this acceleration may be the biggest threat to keeping below 1.5C of global heating, according to scientists. It also seriously risks triggering catastrophic climate tipping points, researchers say.Read more of this story at Slashdot.
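The comparison rests on methane's global-warming potential (GWP): tonnes of methane are multiplied by a GWP factor to get a CO2-equivalent figure. A quick sanity check of the reported numbers is possible; the GWP20 value used below (roughly 82.5, per IPCC AR6) is an assumption on our part, since the article states only the totals:

```python
# Back-of-envelope check of the Kayrros figures for Turkmenistan's two fields.
# GWP20_METHANE is an assumed conversion factor (IPCC AR6, 20-year horizon);
# the article reports only the methane tonnages and the CO2e total.
GWP20_METHANE = 82.5

western_field_ch4 = 2.6e6   # tonnes of methane leaked in 2022
eastern_field_ch4 = 1.8e6   # tonnes of methane leaked in 2022

total_ch4 = western_field_ch4 + eastern_field_ch4
co2_equivalent = total_ch4 * GWP20_METHANE

# Close to the reported 366m tonnes CO2e (which implies a factor near 83)
print(f"{co2_equivalent / 1e6:.0f}m tonnes CO2e")
```

The result lands within about 1% of the article's 366m-tonne figure, which suggests the Guardian's conversion used methane's 20-year warming potential rather than the smaller 100-year value.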
Anthropic, an artificial intelligence startup backed by Google owner Alphabet, on Tuesday disclosed the set of written moral values that it used to train and make safe Claude, its rival to the technology behind OpenAI's ChatGPT. From a report: The moral values guidelines, which Anthropic calls Claude's constitution, draw from several sources, including the United Nations Universal Declaration of Human Rights and even Apple's data privacy rules. Anthropic was founded by former executives from Microsoft-backed OpenAI to focus on creating safe AI systems that will not, for example, tell users how to build a weapon or use racially biased language. Co-founder Dario Amodei was one of several AI executives who met with President Biden last week to discuss potential dangers of AI. Most AI chatbot systems rely on getting feedback from real humans during their training to decide what responses might be harmful or offensive. But those systems have a hard time anticipating everything people might ask, so they tend to avoid some potentially contentious topics like politics and race altogether, making them less useful. Anthropic takes a different approach, giving Claude, its competitor to OpenAI's ChatGPT, a set of written moral values to read and learn from as it makes decisions on how to respond to questions. Those values include "choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment," Anthropic said in a blog post on Tuesday.Read more of this story at Slashdot.
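The core mechanic is a selection step: candidate responses are compared against a written principle, and the preferred one becomes the training signal instead of a human label. The sketch below is purely illustrative; Anthropic uses a model as the judge, whereas this stand-in is a toy keyword heuristic that exists only to show the control flow:

```python
# Toy sketch of constitution-guided response selection (NOT Anthropic's code).
# A real system asks a model which candidate better follows the principle;
# this heuristic judge just penalizes obviously disallowed terms.
PRINCIPLE = "choose the response that most discourages cruelty"
DISALLOWED = ("cruelty", "torture")

def judge(response: str) -> int:
    """Lower score = more objectionable under the principle (toy heuristic)."""
    return -sum(term in response.lower() for term in DISALLOWED)

def pick_response(candidates: list[str]) -> str:
    # The chosen/rejected pair becomes AI-generated feedback, standing in
    # for the human labeler used in conventional RLHF.
    return max(candidates, key=judge)

candidates = [
    "Here is how to inflict cruelty...",
    "I can't help with that. Treating others with compassion matters.",
]
print(pick_response(candidates))
```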
LinkedIn, the networking platform used by millions of employees and companies, said on Monday it will pare down its operations in China, capping a multiyear pullback that exemplified the challenges of running a foreign business in China. From a report: The company, owned by Microsoft, said it will lay off 716 employees worldwide, including teams dedicated to engineering and marketing in China, because of slumping demand. It did not say how many of those layoffs will be in China. LinkedIn will also shut its China job posting app, a bare-bones version of its international service, by August. Users of the app, called InCareer, could only search for jobs and not post or share articles the way they can on LinkedIn. When LinkedIn started a Chinese-language version of its website in 2014, it charted a path that its peers, including Facebook and Google, had shied away from. It partnered with local firms and began censoring the content of millions of Chinese customers in accordance with Beijing's strict laws. Several U.S. journalists and activists said their profiles had been blocked because of "prohibited content." The company said at the time that while it opposed government censorship, its absence in the country could deprive Chinese professionals of the chance to make professional connections.Read more of this story at Slashdot.
Meta has announced a new open-source AI model that links together multiple streams of data, including text, audio, visual data, temperature, and movement readings. From a report: The model is only a research project at this point, with no immediate consumer or practical applications, but it points to a future of generative AI systems that can create immersive, multisensory experiences and shows that Meta continues to share AI research at a time when rivals like OpenAI and Google have become increasingly secretive. The core concept of the research is linking together multiple types of data into a single multidimensional index (or "embedding space," to use AI parlance). This idea may seem a little abstract, but it's this same concept that underpins the recent boom in generative AI.Read more of this story at Slashdot.
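The "embedding space" idea can be made concrete: each modality gets its own encoder, but all encoders output vectors of the same dimensionality, so any two items can be compared directly with a dot product. The encoders below are random projections purely for illustration; a real model like the one Meta describes learns them jointly so that related items land near each other:

```python
import numpy as np

# Toy sketch of a shared multimodal embedding space (illustrative only).
DIM = 64
rng = np.random.default_rng(0)

def make_encoder(input_dim):
    # A fixed random projection stands in for a learned encoder.
    W = rng.normal(size=(input_dim, DIM))
    def encode(x):
        v = np.asarray(x) @ W
        return v / np.linalg.norm(v)   # unit-normalize for cosine similarity
    return encode

encode_text = make_encoder(300)     # e.g. a bag-of-words vector
encode_audio = make_encoder(128)    # e.g. a spectrogram summary
encode_thermal = make_encoder(16)   # e.g. a window of temperature readings

text_vec = encode_text(rng.normal(size=300))
audio_vec = encode_audio(rng.normal(size=128))
thermal_vec = encode_thermal(rng.normal(size=16))

# Cross-modal comparison is just a dot product in the shared space.
print(f"text-audio similarity:   {float(text_vec @ audio_vec):.3f}")
print(f"text-thermal similarity: {float(text_vec @ thermal_vec):.3f}")
```

With learned (rather than random) encoders, this single index is what lets a query in one modality retrieve matches in any other.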
Microsoft is expanding preview access to its Microsoft 365 Copilot, a digital assistant based on OpenAI's GPT-4 that brings AI-powered capabilities across Microsoft 365 apps and services. The tech giant has also announced a new indexing tool that lets Copilot more accurately report on internal company data, alongside some new Copilot features for apps like Microsoft Whiteboard, Outlook, and PowerPoint. From a report: The company is launching the Microsoft 365 Copilot Early Access Program -- an invitation-only paid preview that will initially be rolled out to 600 global customers. Prior to this expansion, just 20 customers have been able to test the Microsoft 365 Copilot. Those new customers will be asked to pay an unspecified amount for the privilege, but Microsoft doesn't say when the rollout will begin. Microsoft is also introducing a range of new capabilities to the Microsoft 365 Copilot. A new Semantic Index feature is being rolled out for enterprise customers running the Microsoft 365 E3 or E5 suite that creates an intuitive map of both user and company data. Microsoft says that the Semantic Index "is critical to getting relevant, actionable responses to prompts in Microsoft 365 Copilot." For example, Microsoft says that by asking Copilot about a "March sales report," the tool will recognize that "sales reports are produced by Kelly on the finance team and created in Excel," rather than simply looking for any documents containing those keywords.Read more of this story at Slashdot.
Apple is bringing Final Cut Pro and Logic Pro to the iPad. Both apps will be available for $4.99 per month or $49 per year on iPad starting on May 23rd. For comparison, buying Logic Pro on a Mac costs $199.99, and buying Final Cut Pro normally costs $299.99. From a report: The video and music editing apps will come with enhancements specifically for iPads. Final Cut Pro, for example, will come with a new jog wheel that's supposed to make the editing process "easier than ever," allowing you to navigate the magnetic timeline, move clips, and perform edits using just your finger and multi-touch gestures. There's also a new feature called Live Drawing that lets you use your Apple Pencil to draw and write directly on top of video content. If you have an iPad Pro with an M2 chip, you can use the Apple Pencil's hover feature to skim and preview footage without even touching the screen.Read more of this story at Slashdot.
IBM on Tuesday launched watsonx, a new artificial intelligence and data platform to help companies integrate AI in their business. From a report: The new AI platform launch comes over a decade after IBM's software called Watson got attention for winning the game show Jeopardy. IBM at the time said Watson could "learn" and process human language. But Watson's high cost at the time made it a challenge for companies to use, according to Reuters reporting. Fast forward a decade, chatbot ChatGPT's overnight success is making AI adoption at companies a focus, and IBM is looking to grab new business. This time, the lower cost of implementing the large language AI models means the chances of success are high, IBM CEO Arvind Krishna told Reuters ahead of the company's annual Think conference. "When something becomes 100 times cheaper, it really sets up an attraction that's very, very different," said Krishna. "The first barrier to create the model is high, but once you've done that, to adapt that model for a hundred or a thousand different tasks is very easy and can be done by a non-expert." Krishna said AI could reduce certain back office jobs at IBM in the coming years. "That doesn't mean the total employment decreases," he said about some media reports talking about IBM pausing hiring for thousands of jobs that AI could replace. "That gives the ability to plow a lot more investment into value-creating activities...We hired more people than were let go because we're hiring into areas where there is a lot more demand from our clients."Read more of this story at Slashdot.
An anonymous reader quotes a report from The Guardian: An EU plan under which all WhatsApp, iMessage and Snapchat accounts could be screened for child abuse content has hit a significant obstacle after internal legal advice said it would probably be annulled by the courts for breaching users' rights. Under the proposed "chat controls" regulation, any encrypted service provider could be forced to survey billions of messages, videos and photos for "identifiers" of certain types of content where it was suspected a service was being used to disseminate harmful material. The providers issued with a so-called "detection order" by national bodies would have to alert police if they found evidence of suspected harmful content being shared or the grooming of children. Privacy campaigners and the service providers have already warned that the proposed EU regulation and a similar online safety bill in the UK risk end-to-end encryption services such as WhatsApp disappearing from Europe. Now leaked internal EU legal advice, which was presented to diplomats from the bloc's member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year. The legal service of the council of the EU, the decision-making body led by national ministers, has advised the proposed regulation poses a "particularly serious limitation to the rights to privacy and personal data" and that there is a "serious risk" of it falling foul of a judicial review on multiple grounds. The EU lawyers write that the draft regulation "would require the general and indiscriminate screening of the data processed by a specific service provider, and apply without distinction to all the persons using that specific service, without those persons being, even indirectly, in a situation liable to give rise to criminal prosecution." 
The legal service goes on to warn that the European court of justice has previously judged the screening of communications metadata is "proportionate only for the purpose of safeguarding national security" and therefore "it is rather unlikely that similar screening of content of communications for the purpose of combating crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offenses." The lawyers conclude the proposed regulation is at "serious risk of exceeding the limits of what is appropriate and necessary in order to meet the legitimate objectives pursued, and therefore of failing to comply with the principle of proportionality." The legal service is also concerned about the introduction of age verification technology and processes to popular encrypted services. The lawyers write that this would necessarily involve the mass profiling of users, or the biometric analysis of the user's face or voice, or alternatively the use of a digital certification system that, they note, "would necessarily add another layer of interference with the rights and freedoms of the users," reports the Guardian. Despite the advice, it is understood that 10 EU member states -- Belgium, Bulgaria, Cyprus, Hungary, Ireland, Italy, Latvia, Lithuania, Romania and Spain -- back continuing with the regulation without amendment.Read more of this story at Slashdot.
Louise Lerner writes via Phys.Org: Inside a lab, scientists marvel at a strange state that forms when they cool atoms to nearly absolute zero. Outside their window, trees gather sunlight and turn it into new leaves. The two seem unrelated -- but a new study from the University of Chicago suggests that these processes aren't as different as they might appear on the surface. The study, published in PRX Energy on April 28, found links at the atomic level between photosynthesis and exciton condensates -- a strange state of physics that allows energy to flow frictionlessly through a material. The finding is scientifically intriguing and may suggest new ways to think about designing electronics, the authors said. When a photon from the sun strikes a leaf, it sparks a change in a specially designed molecule. The energy knocks loose an electron. The electron, and the "hole" where it once was, can now travel around the leaf, carrying the energy of the sun to another area where it triggers a chemical reaction to make sugars for the plant. Together, that traveling electron-and-hole pair is referred to as an "exciton." When the team took a bird's-eye view and modeled how multiple excitons move around, they noticed something odd. They saw patterns in the paths of the excitons that looked remarkably familiar. In fact, they looked very much like the behavior of a material known as a Bose-Einstein condensate, sometimes called "the fifth state of matter." In this material, excitons can link up into the same quantum state -- kind of like a set of bells all ringing perfectly in tune. This allows energy to move around the material with zero friction. (These sorts of strange behaviors intrigue scientists because they can be the seeds for remarkable technology -- for example, a similar state called superconductivity is the basis for MRI machines.) According to the models [...], the excitons in a leaf can sometimes link up in ways similar to exciton condensate behavior. 
This was a huge surprise. Exciton condensates have only been seen when the material is cooled down significantly below room temperature. It'd be kind of like seeing ice cubes forming in a cup of hot coffee. "Photosynthetic light harvesting is taking place in a system that is at room temperature and what's more, its structure is disordered -- very unlike the pristine crystallized materials and cold temperatures that you use to make exciton condensates," explained [study co-author Anna Schouten]. This effect isn't total -- it's more akin to "islands" of condensates forming, the scientists said. "But that's still enough to enhance energy transfer in the system," said Sager-Smith. In fact, their models suggest it can as much as double the efficiency. The findings open up some new possibilities for generating synthetic materials for future technology, said study co-author Prof. David Mazziotti. "A perfect ideal exciton condensate is sensitive and requires a lot of special conditions, but for realistic applications, it's exciting to see something that boosts efficiency but can happen in ambient conditions."Read more of this story at Slashdot.
Arianespace CEO Stephane Israel says Europe will have to wait until the 2030s for a reusable rocket. Space.com reports: Arianespace is currently preparing its Ariane 6 rocket for a test flight following years of delays. Europe's workhorse Ariane 5, which has been operational for nearly 30 years, recently launched the JUICE Jupiter mission and now has only one flight remaining before retirement. Ariane 6 will be expendable, despite entering development nearly a decade ago, when reusability was already being developed and tested in the United States, most famously by SpaceX. "When the decisions were made on Ariane 6, we did so with the technologies that were available to quickly introduce a new rocket," said Israel, according to European Spaceflight. The delays to Ariane 6, however, mean that Europe currently lacks its own options for access to space. This issue was highlighted in a recent report from an independent advisory group to the European Space Agency. Israel stated that, in his opinion, Ariane 6 would fly for more than 10 years before Europe transitions to a reusable successor in the 2030s. Aside from Arianespace, Europe is currently fostering a number of private rocket companies, including Rocket Factory Augsburg, Isar Aerospace, PLD Space and Skyrora, some of whose rockets will be reusable. However, the rockets in development are light-lift, whereas Ariane 6 and its possible successor are much more capable, medium-to-heavy-lift rockets.Read more of this story at Slashdot.
An anonymous reader quotes a report from Ars Technica: The US Justice Department has seized the domains of 13 DDoS-for-hire services as part of an ongoing initiative to combat the Internet menace. The providers of these illicit platforms describe them as "booter" or "stresser" services that allow site admins to test the robustness and stability of their infrastructure. Almost all, if not all, are patronized by people out to exact revenge on sites they don't like or to further extortion, bribes, or other forms of graft. The international law enforcement initiative is known as Operation PowerOFF. In December, federal authorities seized another 48 domains. Ten of them returned with new domains, many of which closely resembled their previous names. "Ten of the 13 domains seized today are reincarnations of services that were seized during a prior sweep in December, which targeted 48 top booter services," the Justice Department said. "For example, one of the domains seized this week -- cyberstress.org -- appears to be the same service operated under the domain cyberstress.us, which was seized in December. While many of the previously disrupted booter services have not returned, today's action reflects law enforcement's commitment to targeting those operators who have chosen to continue their criminal activities." According to a seizure warrant (PDF) filed in federal court, the FBI used live accounts available through the services to take down sites with high-capacity bandwidth that were under FBI control. "The FBI tested each of services associated with the SUBJECT DOMAINS, meaning that agents or other personnel visited each of the websites and either used previous login information or registered a new account on the service to conduct attacks," FBI Special Agent Elliott Peterson wrote in the affidavit. 
"I believe that each of the SUBJECT DOMAINS is being used to facilitate the commission of attacks against unwitting victims to prevent the victims from accessing the Internet, to disconnect the victim from or degrade communication with established Internet connections, or to cause other similar damage."Read more of this story at Slashdot.
A vulnerability in the "Advanced Custom Fields" plugin for WordPress is putting more than two million users at risk of cyberattacks, warns Patchstack researcher Rafie Muhammad. The Register reports: A warning from Patchstack about the flaw claimed there are more than two million active installs of the Advanced Custom Fields and Advanced Custom Fields Pro versions of the plugins, which are used to give site operators greater control of their content and data, such as edit screens and custom field data. Patchstack researcher Rafie Muhammad uncovered the vulnerability on February 5, and reported it to Advanced Custom Fields' vendor Delicious Brains, which took over the software last year from developer Elliot Condon. On May 5, a month after a patched version of the plugins was released by Delicious Brains, Patchstack published details of the flaw. It's recommended users update their plugin to at least version 6.1.6. The flaw, tracked as CVE-2023-30777 and with a CVSS score of 6.1 out of 10 in severity, leaves sites vulnerable to reflected XSS attacks, which involve miscreants injecting malicious code into webpages. The code is then "reflected" back and executed within the browser of a visitor. Essentially, it allows someone to run JavaScript within another person's view of a page, allowing the attacker to do things like steal information from the page, perform actions as the user, and so on. That's a big problem if the visitor is a logged-in administrative user, as their account could be hijacked to take over the website. "This vulnerability allows any unauthenticated user [to steal] sensitive information to, in this case, privilege escalation on the WordPress site by tricking the privileged user to visit the crafted URL path," Patchstack wrote in its report. The outfit added that "this vulnerability could be triggered on a default installation or configuration of Advanced Custom Fields plugin. 
The XSS also could only be triggered from logged-in users that have access to the Advanced Custom Fields plugin."Read more of this story at Slashdot.
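The reflected-XSS pattern at issue can be illustrated in a few lines. This is a generic sketch of the vulnerability class and its standard fix (output escaping), not the actual plugin code:

```python
import html

# Generic illustration of reflected XSS: a page that echoes a URL
# parameter back into its HTML response.

def vulnerable_page(query: str) -> str:
    # The parameter is interpolated verbatim, so a crafted URL such as
    # ?q=<script>...</script> runs script in the victim's browser.
    return f"<p>You searched for: {query}</p>"

def patched_page(query: str) -> str:
    # Escaping turns markup characters into inert HTML entities.
    return f"<p>You searched for: {html.escape(query)}</p>"

payload = "<script>stealSession()</script>"
print(vulnerable_page(payload))  # script tag survives intact
print(patched_page(payload))     # rendered as harmless text
```

Because the payload arrives in the URL, an attacker only needs to trick a logged-in administrator into clicking a crafted link, which is exactly the privilege-escalation scenario Patchstack describes.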
According to CoinDesk, crypto exchange Bittrex has filed for bankruptcy in the U.S. state of Delaware, "months after announcing it would wind down operations in the country and weeks after being sued by the Securities and Exchange Commission (SEC)." From the report: The exchange believes it has more than 100,000 creditors, with estimated liabilities and assets both within the $500 million to $1 billion range, according to a court filing shared by Randall Reese of Chapter 11 Dockets, a bankruptcy tracker. Bittrex's U.S. branch has had a rough 2023 so far, laying off 80 people in February and announcing in March that it would end all operations by the end of April. These changes have not affected Bittrex Global, the non-U.S. crypto exchange. Despite Bittrex's impending exit from the U.S., the SEC sued it in mid-April on allegations it operated a national securities exchange, broker and clearing agency. The SEC also sued former Bittrex CEO Bill Shihara and Bittrex Global. Bittrex Global CEO Oliver Linch said last month that the exchange intended to fight these charges in court, but a bankruptcy proceeding may make this more difficult.Read more of this story at Slashdot.
According to the Wall Street Journal (paywalled), the New York Times is getting around $100 million from Google over the next three years as part of a deal that allows Google to feature Times content on some of its platforms. Reuters reports: The deal includes the Times' participation in Google News Showcase, a product that pays publishers to feature their content on Google News and some other Google platforms, according to the report, which cited people familiar with the matter. The Times in February announced an expansion of its agreement with Google that included content distribution and subscriptions.Read more of this story at Slashdot.
According to Activision Blizzard's latest financial report, the video game company's PC platform outperformed consoles by $27 million at the start of 2023, "continuing a trend with the Call of Duty, World of Warcraft, Diablo, and Overwatch 2 publisher that's been consistent for nearly a year now," reports PC Gamer. From the report: Between January 1 and March 31, Activision made $666 million on PC versus $639 million on console. Its PC segment also outsold its console business throughout half of last year, though console did outsell PC overall for Activision in 2022. This is a notable change: As far back as I can look at Activision's publicly available financial reports, console has always been king. This was the case in the early 2000s at the peak of Tony Hawk and Guitar Hero, in the 2010s when Call of Duty was on the rise, and even after Activision bought Blizzard in 2008 (WoW subscriptions were still big, but not Call of Duty big). Activision's latest financial report marks the third quarter in a row that PC outsold console, and there's reason to believe the trend will continue throughout 2023. Activision attributes its 74% increase in PC revenue since this time last year to the success of Call of Duty and Overwatch 2, but it also specifically highlights higher revenues for WoW: Dragonflight and Diablo Immortal (two games that aren't on console). Blizzard is currently the largest factor in the PC's growth within Activision. While Blizzard games are only making about half as much as Call of Duty, 72% of that revenue is on PC and just 8% is on console. Call of Duty's revenue is more evenly split: 59% console, 26% PC, and 15% mobile. Blizzard's console audience could grow significantly when Diablo 4 launches in June simultaneously on PC and consoles (a first for the series). Zoom out on Activision's numbers, and you can see the PC is gaining ground in Activision's yearly reports, too. 
Last year, the company recorded the smallest gap between console and PC revenue in recent history: just $100 million. That's several hundred million less than 2021, 2020, 2019, 2018, and 2017. If the year goes on like this, 2023 could be the year that the PC becomes Activision's second-biggest platform behind mobile (Candy Crush continues to crush).Read more of this story at Slashdot.
An anonymous reader quotes a report from TechCrunch: NextGen Healthcare, a U.S.-based provider of electronic health record software, admitted that hackers breached its systems and stole the personal data of more than 1 million patients. In a data breach notification filed with the Maine attorney general's office, NextGen Healthcare confirmed that hackers accessed the personal data of 1.05 million patients, including approximately 4,000 Maine residents. In a letter sent to those affected, NextGen Healthcare said that hackers stole patients' names, dates of birth, addresses and Social Security numbers. "Importantly, our investigation has revealed no evidence of any access or impact to any of your health or medical records or any health or medical data," the company added. TechCrunch asked NextGen Healthcare whether it has the means, such as logs, to determine what data was exfiltrated, but company spokesperson Tami Andrade declined to answer. In its filing with Maine's AG, NextGen Healthcare said it was alerted to suspicious activity on March 30, and later determined that hackers had access to its systems between March 29 and April 14, 2023. The notification says that the attackers gained access to its NextGen Office system -- a cloud-based EHR and practice management solution -- using client credentials that "appear to have been stolen from other sources or incidents unrelated to NextGen." "When we learned of the incident, we took steps to investigate and remediate, including working together with leading outside cybersecurity experts and notifying law enforcement," Andrade told TechCrunch in a statement. "The individuals known to be impacted by this incident were notified on April 28, 2023, and we have offered them 24 months of free fraud detection and identity theft protection." NextGen was also the victim of a ransomware attack in January this year, adds TechCrunch. 
The stolen data, including employee names, addresses, phone numbers and passport scans, appears to be available on the dark web.Read more of this story at Slashdot.
At its annual Google I/O developers conference on Wednesday, Google is planning to announce a number of generative AI updates, including launching a general-use large language model (LLM) called PaLM 2. CNBC reports: According to internal documents about Google I/O viewed by CNBC, the company will unveil PaLM 2, its most recent and advanced LLM. PaLM 2 supports more than 100 languages and has been operating under the internal codename "Unified Language Model." It has also been put through a broad range of coding and math tests, as well as creative writing tests and analysis. At the event, Google will make announcements on the theme of how AI is "helping people reach their full potential," including adding "generative experiences" to Bard and Search, the documents show. CEO Sundar Pichai will be speaking to a live crowd of developers as he pitches his company's AI advancements. Google first announced the PaLM language model in April of 2022. In March of this year, the company launched an API for PaLM alongside a number of AI enterprise tools it says will help businesses "generate text, images, code, videos, audio, and more from simple natural language prompts." Last month, Google said its medical LLM called "Med-PaLM 2" can answer medical exam questions at an "expert doctor level" and is accurate 85% of the time.Read more of this story at Slashdot.
The U.S. Securities and Exchange Commission (SEC) has given its largest ever award of almost $279 million to a whistleblower whose information was crucial in an enforcement action by the regulator. The SEC did not reveal the case involved, but the award shows there is a significant incentive for whistleblowers to come forward with accurate information about potential securities law violations. Reuters reports: The award is more than double the $114 million that it had issued in October 2020. "As this award shows, there is a significant incentive for whistleblowers to come forward with accurate information about potential securities law violations," said Gurbir Grewal, director of the SEC's Division of Enforcement, in a statement. "The whistleblower's sustained assistance including multiple interviews and written submissions was critical to the success of these actions," said Creola Kelly, chief of the SEC's Office of the Whistleblower. Payments to whistleblowers are made out of an investor protection fund that was established by Congress and financed entirely through monetary sanctions paid to the SEC by securities law violators. Awards to whistleblowers can range from 10% to 30% of the money collected when the monetary sanctions exceed $1 million.Read more of this story at Slashdot.
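The award rules quoted above (10% to 30% of monetary sanctions, payable only when sanctions exceed $1 million) also let us bound the undisclosed case: working backwards, a $279 million award implies sanctions somewhere between roughly $0.93 billion and $2.79 billion, assuming the percentage rule applied directly. A minimal sketch of that arithmetic:

```python
def award_range(sanctions: float) -> tuple[float, float]:
    """SEC whistleblower awards: 10%-30% of sanctions collected,
    payable only when monetary sanctions exceed $1 million."""
    if sanctions <= 1_000_000:
        return (0.0, 0.0)
    return (0.10 * sanctions, 0.30 * sanctions)

# Working backwards from the record $279m award (illustrative only):
award = 279e6
print(f"implied sanctions: ${award / 0.30 / 1e9:.2f}bn to ${award / 0.10 / 1e9:.2f}bn")
```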
An anonymous reader quotes a report from The Register: This year's DEF CON AI Village has invited hackers to show up, dive in, and find bugs and biases in large language models (LLMs) built by OpenAI, Google, Anthropic, and others. The collaborative event, which AI Village organizers describe as "the largest red teaming exercise ever for any group of AI models," will host "thousands" of people, including "hundreds of students from overlooked institutions and communities," all of whom will be tasked with finding flaws in LLMs that power today's chat bots and generative AI. Think: traditional bugs in code, but also problems more specific to machine learning, such as bias, hallucinations, and jailbreaks -- all of which ethical and security professionals are now having to grapple with as these technologies scale. DEF CON is set to run from August 10 to 13 this year in Las Vegas, USA. For those participating in the red teaming this summer, the AI Village will provide laptops and timed access to LLMs from various vendors. Currently this includes models from Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability. The village people's announcement also mentions this is "with participation from Microsoft," so perhaps hackers will get a go at Bing. We've asked for clarification about this. Red teams will also have access to an evaluation platform developed by Scale AI. There will be a capture-the-flag-style point system to promote the testing of "a wide range of harms," according to the AI Village. Whoever gets the most points wins a high-end Nvidia GPU. The event is also supported by the White House Office of Science, Technology, and Policy; America's National Science Foundation's Computer and Information Science and Engineering (CISE) Directorate; and the Congressional AI Caucus. Read more of this story at Slashdot.
Facebook says it is not dead. Facebook also wants you to know that it is not just for "old people," as young people have been saying for years. From a report: Now, with the biggest thorn in its side -- TikTok -- facing heightened government scrutiny amid growing tensions between the U.S. and China, Facebook could, perhaps, position itself as a viable, domestic-bred alternative. There's just one problem: young adults like Devin Walsh (anecdote in the story) have moved on. [...] Today, 3 billion people check it each month. That's more than a third of the world's population. And 2 billion log in every day. Yet it still finds itself in a battle for relevancy, and its future, after two decades of existence. For younger generations -- those who signed up in middle school, or those who are now in middle school -- it's decidedly not the place to be. Without this trend-setting demographic, Facebook, still the main source of revenue for parent company Meta, risks fading into the background -- utilitarian but boring, like email. Read more of this story at Slashdot.
Perhaps woken by news of its next premier first-party title already looking really impressive on emulators, Nintendo has moved to take down key tools for emulating and unlocking Switch consoles, including one that lets Switch owners grab keys from their own device. From a report: Simon Aarons maintained a forked repository of Lockpick, a tool (along with Lockpick_RCM) that grabbed the encryption keys from a Nintendo Switch and allowed it to run officially licensed games. Aarons tweeted on Thursday night that Nintendo had issued DMCA takedown requests to GitHub, asking that Lockpick, Lockpick_RCM, and nearly 80 forks and derivations be taken down under section 1201 of the Digital Millennium Copyright Act, which largely makes illegal the circumvention of technological protection measures that safeguard copyrighted material. Nintendo's takedown request (RTF file) notes that the Switch contains "multiple technological protection measures" that allow the Switch to play only "legitimate Nintendo video game files." Lockpick tools, combined with a modified Switch, let users grab the cryptographic keys from their own Switch and use them on "systems without Nintendo's Console TPMs" to play "pirated versions of Nintendo's copyright-protected game software." GitHub typically allows repositories with DMCA strikes filed against them to remain open while their maintainers argue their case. Still, it was an effective move. Seeing Nintendo's move on Lockpick, Skyline, a popular Switch emulator on Android, called it quits over the weekend, at least as a public-facing tool you can easily download to your phone. In a Discord post (since removed, along with the Discord itself), developer "Mark" wrote that "the risks associated with a potential legal case are too high for us to ignore, and we cannot continue knowing that we may be in violation of copyright law." Read more of this story at Slashdot.
Apple failed to revive a long-running copyright lawsuit against cybersecurity firm Corellium over its software that simulates the iPhone's iOS operating system, allowing security researchers to identify flaws in the software. From a report: The US Court of Appeals for the Eleventh Circuit on Monday ruled that Corellium's CORSEC simulator is protected by copyright law's fair use doctrine, which allows the duplication of copyrighted work under certain circumstances. Apple argued that Corellium's software was "wholesale copying and reproduction" of iOS and served as a market substitute for its own security research products. Corellium countered that its copying of Apple's computer code and app icons was only for the purposes of security research and was sufficiently "transformative" under the fair use standard. The three-judge panel largely agreed with Corellium, finding that CORSEC "furthers scientific progress by allowing security research into important operating systems" and that iOS "is functional operating software that falls outside copyright's core." Read more of this story at Slashdot.
An anonymous reader shares a report: Broad-spectrum antibiotics are akin to nuclear bombs, obliterating every prokaryote they meet. They're effective at eliminating pathogens, sure, but they're not so great for maintaining a healthy microbiome. Ideally, we need precision antimicrobials that can target only the harmful bacteria while ignoring the other species we need in our bodies, leaving them to thrive. Enter SNIPR BIOME, a Danish company founded to do just that. Its first drug -- SNIPR001 -- is currently in clinical trials. The drug is designed for people with cancers involving blood cells. The chemotherapy these patients need can cause immunosuppression along with increased intestinal permeability, so they can't fight off any infections they may get from bacteria that escape from their guts into their bloodstream. The mortality rate from such infections in these patients is around 15-20 percent. Many of the infections are caused by E. coli, and much of this E. coli is already resistant to fluoroquinolones, the antibiotics commonly used to treat these types of infections. The team at SNIPR BIOME engineers bacteriophages, viruses that target bacteria, to make them hyper-selective. They started by screening 162 phages to find those that would infect a broad range of E. coli strains taken from people with bloodstream or urinary tract infections, as well as from the guts of healthy people. They settled on a set of eight different phages. They then engineered these phages to carry the genes that encode the CRISPR DNA-editing system, along with the RNAs needed to target editing to a number of essential genes in the E. coli genome. This approach has been shown to prevent the evolution of resistance. After testing the ability of these eight engineered phages to kill the E. coli panel alone and in combination, they decided that a group of four of them was the most effective, naming the mixture SNIPR001. 
But four engineered phages do not make a drug; the team confirmed that SNIPR001 remains stable for five months in storage and that it does not affect any other gut bacteria.Read more of this story at Slashdot.
Windows 11's Settings panel has been seen with a number of adverts in test builds of the OS, in what's becoming a sadly familiar theme for preview builds of late. From a report: As spotted by German tech site Deskmodder, this was flagged up by a respected source for Microsoft leaks, Albacore, on Twitter. Albacore shared some screenshots of the new home page for the Settings app, as uncovered by digging into a Windows 11 preview from the Canary channel (the earliest test builds). The first screen grab (on the left in the above tweet) shows an ad for Microsoft 365 at the top of the panel, telling users what they get with the service and that they can try it for free (for a trial period). Under that, there's a prompt to 'finish setting up your account,' which refers to completing the setup of your Microsoft Account. The other screenshots also have prompts relating to the Microsoft Account, this time urging users to sign into the account, one of which is shown on the Settings home page and another in the Accounts section. In the latter, users are told to 'Sign in to get the most out of Windows.'Read more of this story at Slashdot.
Singapore's government is taking the first steps toward codifying a new internet safety law that would grant it wide-ranging powers over content, access and communication online. From a report: The Online Criminal Harms Bill, introduced for a first reading in parliament on Monday, is aimed at cracking down on illicit activities like scams, misinformation, cybercrime, drug trafficking and the spread of exploitative images. It is part of a wider "suite of legislation" to protect Singaporeans online, the Ministry of Home Affairs said in a statement. The bill is likely to pass into law without strong opposition, as most proposed legislation does in the city-state's parliament. It would grant the government broad powers to restrict content online: from blocking the communication of certain material or web addresses to removing apps from mobile stores or restricting accounts on social networks. It further advocates a proactive approach to preventing malicious cyber activity, allowing those powers to be used on the suspicion that a given website or account may be used in such acts. The bill also includes a provision for service providers to appeal the government's directives. Read more of this story at Slashdot.
Intel plans a fresh wave of layoffs in the wake of a steep decline in revenue over the last six months. The chipmaker, Oregon's largest corporate employer, blames a weak global economy. From a report: "We are focused on identifying cost reductions and efficiency gains through multiple initiatives, including some business and function-specific workforce reductions in areas across the company," Intel said in a written statement. "These are difficult decisions, and we are committed to treating impacted employees with dignity and respect," Intel said. Dylan Patel with the technology research firm SemiAnalysis first reported the pending cuts over the weekend. Intel didn't say what else it's cutting, in what areas, or how these layoffs compare to a prior round of job cuts that ended last winter. Intel laid off more than 500 employees in California in job cuts announced last fall, according to filings there with state workforce agencies. It laid off employees in Oregon, too, but didn't make a similar filing here, suggesting that the layoffs represented a smaller percentage of the company's local workforce. Intel employs more than 22,000 at its Washington County campuses.Read more of this story at Slashdot.
Truecaller will soon start making its caller identification service available over WhatsApp and other messaging apps to help users spot potential spam calls over the internet, the company told Reuters on Monday. From a report: The feature, currently in beta phase, will be rolled out globally later in May, Truecaller Chief Executive Alan Mamedi said. Telemarketing and scamming calls have been on the rise in countries like India, where users get about 17 spam calls per month on average, according to a 2021 report by Truecaller. "Over the last two weeks, we have seen a spike in user reports from India about spam calls over WhatsApp," Mamedi said, noting that telemarketers switching to internet calling was fairly new to the market. Read more of this story at Slashdot.
Amid broader venture-capital doldrums, it is boom times for startups touting generative artificial intelligence tech. From a report: Before their startup had customers, a business plan or even a formal name, former Google AI researchers Niki Parmar and Ashish Vaswani were fielding interest from investors eager to back the next big thing in artificial intelligence. At Google, Ms. Parmar and Mr. Vaswani were among the co-authors of a seminal 2017 paper that helped pave the way for the boom in so-called generative AI. Earlier this year, only weeks after striking out on their own, they raised funds that valued their fledgling company -- now called Essential AI -- at around $50 million, people familiar with the company said. While most of Silicon Valley's venture-capital ecosystem remains in the doldrums, investors this year have been pouring funds into companies like Essential specializing in generative AI systems that can create humanlike conversation, imagery and computer code. Many of the companies getting backing are new and unproven. Analysts at research firm PitchBook predict that venture investment in generative AI companies will easily be several times last year's level of $4.5 billion. That is driven in part by Microsoft's $10 billion investment in January into OpenAI, the startup behind the wildly popular ChatGPT bot. In comparison, such investment totaled $408 million in 2018, the year OpenAI released the initial version of the language model powering ChatGPT. Entrepreneurs and their backers are hoping generative AI will change business activities from movie production to customer service to grocery delivery. PitchBook estimates the market for such AI applications in enterprise technology alone will rise to $98 billion in 2026 from nearly $43 billion this year. 
As with the recently ended bull run of broader startup investing, though, investors often are jumping into AI startups even when it isn't clear how they will make a profit -- especially since the computational power required to train AI services can sometimes amount to tens of millions of dollars a year or more. The sudden influx of capital is also encouraging many AI researchers, some without management or operations experience, to start their own companies, adding to competition.Read more of this story at Slashdot.
While American leaders fret that China might eventually overtake the U.S. in developing artificial intelligence, Beijing is already way ahead of Washington in enacting rules for the new technology. From a report: Chinese officials will close consultation Wednesday on a second round of generative AI regulation, building on a set of rules governing deepfakes agreed in 2022. The Biden administration is behind both allies and adversaries on AI guardrails. While officials in Washington talk about delivering user rights and urge CEOs to mitigate risks, Beijing and Brussels are actually delivering rights and mitigating risks. If China can be first on AI governance, it can project those standards and regulations globally, shaping lucrative and pliable markets. At the same time, Beijing's speedy regulation achieves three goals at home: it delivers tighter central government control of debate; builds up hybrid corporate entities that are meshed with the Chinese Communist Party; and boosts trust in AI -- already among the highest levels globally -- which drives consumer uptake and spurs growth. Read more of this story at Slashdot.
The CEO of cryptocurrency exchange Coinbase, Brian Armstrong, doubled down on his criticisms of the U.S. Securities and Exchange Commission chief Gary Gensler Monday, but added the exchange would not leave the U.S. despite the regulatory uncertainty the company is facing in the country. From a report: The SEC earlier this year served Coinbase with a Wells Notice, a letter that the regulator sends to a company or firm at the conclusion of an SEC investigation that states the SEC is planning to bring an enforcement action against them. At the heart of the regulator's dispute with Coinbase, and a host of other crypto companies, is the allegation that it is selling unregistered securities to investors. Coinbase disputes this. "The SEC is a bit of an outlier here," Armstrong told CNBC's Dan Murphy in an interview in Dubai Monday. "There's kind of a lone crusade, if you will, with Gary Gensler, the chair there, and he has taken a more anti-crypto view for some reason...I don't think he's necessarily trying to regulate the industry as much as maybe curtail it. But he's created some lawsuits, and I think it's quite unhelpful for the industry in the U.S. writ large, but it also is an opportunity for Coinbase to go get that clarity from the courts that we feel will really benefit the crypto industry and also the U.S. more broadly."Read more of this story at Slashdot.
An anonymous reader shares a report: In the age of large language models (LLMs) and ChatGPT, AI is poised to make a weird internet even weirder -- turning the content-driven social media apps, news sites and media platforms of today into future uncanny valleys that blur the line between man and machine. As advances in AI make it more difficult to discern bots from humans, Sam Altman, the co-founder of OpenAI -- the company behind ChatGPT -- thinks blockchains can help. Altman's crypto project, Worldcoin, rose to prominence last year with a controversial, Silicon Valley vision for a universal basic income (UBI): a crypto token that can be distributed in equal quantity to everyone in the world. Worldcoin is back again this week with a new launch -- this one poised to be its biggest yet. World App, Worldcoin's crypto wallet, built on the Ethereum sidechain Polygon, is the first product from the elusive identity upstart that anyone, anywhere will be able to download. The new app is one part minimalist crypto wallet, and one part passport for the AI era. It's Worldcoin's biggest swing yet to redefine itself in the eyes of consumers. Read more of this story at Slashdot.
For 16 years, Rik Farrow has been an editor for the long-running nonprofit Usenix. He's also been a consultant for 43 years (according to his biography at Usenix.org) — and even wrote the 1988 book Unix System Security: How to Protect Your Data and Prevent Intruders. Today Farrow stopped by Slashdot to share his thoughts on Codon. rikfarrow writes:Researchers at MIT decided to build a compiler focused on speeding up genomics processing... Recently, they have posted their code on GitHub, and I gave it a test drive. "Managed" languages produce code for a specific runtime (like JavaScript). Now Farrow's article at Usenix.org argues that Codon produces code "much faster than other managed languages, and in some cases faster than C/C++." Codon-compiled code is faster because "it's compiled, variables are typed at compile time, and it supports parallel execution." But there's some important caveats:The "version of Python" part is actually an important point: the builders of Codon have built a compiler that accepts a large portion of Python, including all of the most commonly used parts — but not all... Duck typing means that the Codon compiler uses hints found in the source or attempts to deduce them to determine the correct type, and assigns that as a static type. If you wanted to process data where the type is unknown before execution, this may not work for you, although Codon does support a union type that is a possible workaround. In most cases of processing large data sets, the types are known in advance so this is not an issue... Codon is not the same as Python, in that the developers have not yet implemented all the features you would find in Python 3.10, and this, along with duck typing, will likely cause problems if you just try and compile existing scripts. I quickly ran into problems, as I uncovered unsupported bits of Python, and, by looking at the Issues section of their Github pages, so have other people. 
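The duck-typing caveat above can be made concrete with a small, hypothetical example: rebinding a variable to a different type is routine in standard CPython, but a compiler that infers one static type per variable at compile time, as the article describes Codon doing, would reject it.

```python
# Runs fine under standard CPython: names can be rebound to any type.
# A compiler that assigns each variable a single static type at compile
# time, as the article describes for Codon, would reject the rebinding.
x = 42           # inferred as int
x = "forty-two"  # rebound to str -- dynamic typing in action
print(x)
```

This is the class of script where, as the article notes, a union type may serve as a workaround; when the types are known in advance, the issue does not arise.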
Codon supports a JIT feature, so that instead of attempting to compile complete scripts, you can just add a @codon.jit decorator to functions that you think would benefit from being compiled or executed in parallel, becoming much faster to execute... Whether your projects will benefit from experimenting with Codon will mean taking the time to read the documentation. Codon is not exactly like Python: for example, there's also support for Nvidia GPUs, and I ran into a limitation when using a dictionary. I suspect that some potential users will appreciate that Codon takes Python as input and produces executables, making the distribution of code simpler while avoiding disclosure of the source. Codon, with its LLVM backend, also seems like a great solution for people wanting to use Python for embedded projects. My uses of Python are much simpler: I can process millions of lines of nginx logs in seconds, so a reduction in execution time means little to me. I do think there will be others who can take full advantage of Codon. Farrow's article also points out that Codon "must be licensed for commercial use, but versions older than three years convert to an Apache license. Non-commercial users are welcome to experiment with Codon." Read more of this story at Slashdot.
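The @codon.jit workflow described above can be sketched as follows. This is a minimal illustration, assuming Codon's Python extension exposes the codon.jit decorator as the article states; the no-op fallback decorator is purely hypothetical scaffolding so the same script also runs under plain CPython.

```python
# Minimal sketch of the @codon.jit pattern described in the article.
# Assumes the Codon Python extension provides codon.jit; the no-op
# fallback (hypothetical) lets the script run under plain CPython too.
try:
    import codon
    jit = codon.jit
except ImportError:
    def jit(fn):
        return fn  # fallback: run the function uncompiled

@jit
def fib(n: int) -> int:
    # Type annotations give the compiler the static types it needs.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30))
```

Only the decorated function is compiled; the rest of the script stays ordinary Python, which matches the article's point that you need not convert complete scripts to benefit.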